Robots learn faster when we give them credit for getting things almost right
That's what AIs go to school for
Teaching a computer to recognise objects in an image isn't easy - it involves complex pattern recognition and comparing those patterns with a database of other images.
But we're slowly getting better at it. The latest breakthrough comes from MIT engineers, who've shown that robots learn faster if they're given points when they get something almost right.
They wrote a script that crawls Flickr images and looks for tags that tend to occur together on the same photo - like "sunshine", "water" and "reflection", for example. Then, if a robot made an incorrect guess that was related to a correct answer, it would get partial credit - guessing "summer" instead of "sunshine", for example.
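The article doesn't spell out the exact scoring rule, but the general idea can be sketched in a few lines of Python. The sample photos, the co-occurrence-based similarity measure and the credit function below are illustrative assumptions, not the MIT team's actual code:

```python
from collections import Counter
from itertools import combinations

# Hypothetical training data: tag sets scraped from Flickr photos.
photo_tags = [
    {"sunshine", "water", "reflection"},
    {"sunshine", "summer", "beach"},
    {"water", "reflection", "lake"},
]

# Count how often each tag, and each pair of tags, appears on the same photo.
pair_counts = Counter()
tag_counts = Counter()
for tags in photo_tags:
    tag_counts.update(tags)
    pair_counts.update(frozenset(p) for p in combinations(sorted(tags), 2))

def similarity(a, b):
    """Co-occurrence similarity: photos shared by both tags / photos with either tag."""
    shared = pair_counts[frozenset((a, b))]
    return shared / (tag_counts[a] + tag_counts[b] - shared) if shared else 0.0

def credit(guess, truth):
    """Full credit for an exact match, partial credit for a related tag."""
    return 1.0 if guess == truth else similarity(guess, truth)

print(credit("sunshine", "sunshine"))  # 1.0 - exact match
print(credit("summer", "sunshine"))    # 0.5 - the tags co-occur, so partial credit
print(credit("lake", "sunshine"))      # 0.0 - never seen together
```

The key point the example captures is that a near-miss like "summer" scores somewhere between zero and full credit, so the learner still gets a useful training signal from it.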
Possible Categories
In tests where the computer had to predict what tags a Flickr user had given an image, the MIT team's system outperformed conventional machine learning systems. It was extremely good at predicting tags that were semantically similar to the correct tags.
The next step is to begin to apply the concept of "ontologies" - how classifications are stacked - teaching a computer that dogs are animals, collies are dogs, and Lassie is a collie, for example.
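One minimal way to picture such an ontology is a parent-pointer dictionary, with progressively less credit awarded the further up the hierarchy a guess lands. The mapping and the decay factor below are hypothetical, not the researchers' actual scheme:

```python
# Hypothetical ontology: each label points to its parent category.
parents = {
    "Lassie": "collie",
    "collie": "dog",
    "dog": "animal",
    "animal": None,
}

def ancestors(label):
    """Walk up the hierarchy, e.g. Lassie -> collie -> dog -> animal."""
    chain = []
    while label is not None:
        chain.append(label)
        label = parents.get(label)
    return chain

def hierarchical_credit(guess, truth, decay=0.5):
    """Award less credit the further the guess sits above the true label."""
    chain = ancestors(truth)
    return decay ** chain.index(guess) if guess in chain else 0.0

print(hierarchical_credit("Lassie", "Lassie"))  # 1.0  - exact
print(hierarchical_credit("collie", "Lassie"))  # 0.5  - one level up
print(hierarchical_credit("dog", "Lassie"))     # 0.25 - two levels up
print(hierarchical_credit("cat", "Lassie"))     # 0.0  - unrelated
```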
"When you have a lot of possible categories, the conventional way of dealing with it is that, when you want to learn a model for each one of those categories, you use only data associated with that category," said Chiyuan Zhang, one of the paper's lead authors.
"It's treating all other categories equally unfavorably. Because there are actually semantic similarities between those categories, we develop a way of making use of that semantic similarity to sort of borrow data from close categories to train the model."
Details of their method were published on the arXiv pre-print server.