A team of researchers at the University of California, Berkeley has built a robot that can learn how to do tasks through trial and error. The development is a key milestone in the field of artificial intelligence.
The algorithms allow the robot to build its knowledge gradually over time, as humans do, rather than being pre-programmed from the moment of its creation. The resulting 'neural nets' were inspired by the neural circuitry of the brain.
"The key is that when a robot is faced with something new, we won't have to reprogram it," said Pieter Abbeel, who helped develop the technique. "The exact same software, which encodes how the robot can learn, was used to allow the robot to learn all the different tasks we gave it."
In a series of experiments, a robot named BRETT (Berkeley Robot for the Elimination of Tedious Tasks) was given motor assignments such as putting blocks through matching openings or stacking Lego blocks. As it got closer to the solution, the robot was "rewarded" with points allocated by the algorithm.
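The reward-driven trial and error described above is the core idea of reinforcement learning. The following is a minimal sketch of that idea, not BRETT's actual software: a hypothetical toy task in which a gripper must reach a target slot along a line of discrete positions, learned with tabular Q-learning. All names and parameters here are illustrative assumptions.

```python
import random

random.seed(0)

# Hypothetical toy task: move a gripper along 10 discrete positions
# until it reaches the target slot. As in the article, the learner
# earns a higher reward the closer it gets to the goal.
N_POS, TARGET = 10, 7
ACTIONS = (-1, +1)  # move left / move right

def reward(pos):
    # Denser reward the closer the gripper is to the target slot.
    return -abs(pos - TARGET)

# Tabular Q-learning: pure trial and error, no pre-programmed solution.
Q = {(s, a): 0.0 for s in range(N_POS) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    pos = random.randrange(N_POS)
    for _ in range(30):
        # Mostly exploit what has been learned; sometimes explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(pos, x)])
        nxt = min(max(pos + a, 0), N_POS - 1)
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        # Standard Q-learning update toward reward plus discounted future value.
        Q[(pos, a)] += alpha * (reward(nxt) + gamma * best_next - Q[(pos, a)])
        pos = nxt
        if pos == TARGET:
            break

# After training, the greedy policy steers toward the target from either end.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_POS)}
print(policy[0], policy[9])  # expect: move right from 0, move left from 9
```

BRETT's actual system used deep neural networks rather than a lookup table, but the feedback loop is the same: try an action, score it, and shift future behavior toward higher-scoring actions.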
"We still have a long way to go before our robots can learn to clean a house or sort laundry," said Abbeel, "but our initial results indicate that these kinds of deep learning techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch."