In the article “New setup for image recognition AI lets a program think on its feet,” published on December 4, Maria Temming reports on a new artificial intelligence built from virtual building blocks called capsules, which has an easier time identifying objects from different perspectives. This could help self-driving cars, phones, medical staff, and more. The new artificial neural network was presented on December 5 at the Neural Information Processing Systems Conference in Long Beach, California. Neural networks are webs of individual virtual neurons that “learn to pick out objects in pictures by studying labeled example images.” The previous AI was not able to recognize an object from different views.
For instance, if it has a face memorized by its eyes, nose, and shape, it won’t realize that a picture of an eye belongs to a face. In response, researchers gave the new AI capsules that can detect more details of an object, which makes it easier for the program to recognize an image. Scientists made the program remember the many ways a face can look.
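To see why recognizing an object from a new viewpoint is hard for a program that has only memorized how the object looks straight-on, here is a minimal, purely illustrative sketch (not the article’s capsule network): a toy classifier that matches tiny “images” pixel by pixel against labeled examples, the way the quote describes learning from “labeled example images.” All names and data below are made up for illustration.

```python
import numpy as np

# Toy labeled "images": 3x3 patterns standing in for training examples.
cross = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]], dtype=float)

corner = np.array([[1, 1, 1],
                   [1, 0, 0],
                   [1, 0, 0]], dtype=float)

training = [("cross", cross), ("corner", corner)]

def dist(a, b):
    """Squared pixel-by-pixel difference between two images."""
    return float(np.sum((a - b) ** 2))

def classify(image):
    """Return the label of the closest memorized training image."""
    return min(training, key=lambda item: dist(item[1], image))[0]

# Straight-on views match their own templates perfectly...
print(classify(cross), dist(corner, corner))        # exact match: distance 0.0

# ...but rotate the corner (a new viewpoint of the same shape) and the
# memorized template no longer lines up pixel for pixel.
rotated = np.rot90(corner)
print(dist(rotated, corner))                        # distance is now nonzero
```

The rotated shape is still a corner, but plain pixel matching no longer sees it as one, which is the gap capsules are meant to close by tracking an object’s parts and details rather than one memorized appearance.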
The new AI doesn’t need as much training to recognize objects as the one before it. When both the old and new artificial neural networks were tested on handwritten letters and distorted versions of them, the new one got 79 percent right and the old one 66 percent. In another experiment, recognizing toys from different viewpoints, another capsule network got it wrong only 1.4 percent of the time. This article is interesting because it can help with phones and other devices that have face recognition.
Technology is used every day and is relied on heavily, so it’s good that they can improve something used in so many devices. The best way to let people know about this is for people at tech stores to tell customers about it. That way they’re interested and you know they’re listening; you’ll have their undivided attention, and they can pass the information on to family members.