A robot taught itself to play chess in just 72 hours

An AI researcher at Imperial College London has built a deep-learning machine called 'Giraffe' that plays chess the way humans do, rather than in the brute-force style of conventional chess engines.

Matthew Lai used a neural network that can be trained on examples. He fed it a huge database of 175 million snapshots from real chess games and asked the machine to look at three things: the number and type of pieces in play, the locations of those pieces, and the squares they could move to.
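To make the three feature groups concrete, here is a minimal sketch of extracting them from a position. This is illustrative only, not Lai's actual encoding: the board representation (a dict of square names to piece letters) and the `extract_features` helper are invented for this example, and the paper's real feature set is richer.

```python
# Board as a dict: square name -> piece letter (upper = white, lower = black).
# A tiny made-up position with rooks and kings only, for brevity.
position = {"a1": "R", "h1": "R", "e1": "K", "a8": "r", "h8": "r", "e8": "k"}

def extract_features(board):
    """Return (piece counts by type, piece locations) for a position."""
    counts = {}     # piece letter -> how many are on the board
    locations = []  # (piece letter, square) pairs
    for square, piece in sorted(board.items()):
        counts[piece] = counts.get(piece, 0) + 1
        locations.append((piece, square))
    return counts, locations

counts, locations = extract_features(position)
print(counts["R"])      # white has two rooks
print(len(locations))   # six pieces on the board
```

A third group of features, the squares each piece can move to (mobility), would be computed the same way but needs per-piece move rules, so it is omitted here.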

A traditional chess engine evaluates each position with a hand-crafted function and searches as deeply as it can to calculate the best move. Lai instead taught his machine to learn which moves were likely to be strong and which weak, much as a human player does.
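The contrast can be sketched in a toy example: instead of a hand-written evaluation function, fit a model that learns a position score from labelled games. The tiny single-layer model, the feature choice (material and mobility balance), and the training data below are all stand-ins invented for illustration; Giraffe's actual network is far deeper and is trained quite differently.

```python
import math
import random

random.seed(0)

def predict(weights, features):
    # Squash the weighted feature sum into a 0..1 "white is winning" score.
    z = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, n_features, lr=0.5, epochs=2000):
    # Plain gradient descent on logistic loss over the labelled positions.
    weights = [0.0] * n_features
    for _ in range(epochs):
        for features, label in examples:
            error = predict(weights, features) - label
            weights = [w - lr * error * f for w, f in zip(weights, features)]
    return weights

# Hypothetical data: (material balance, mobility balance) -> game outcome.
examples = [([3, 2], 1.0), ([1, 1], 1.0), ([-2, -1], 0.0), ([-3, -2], 0.0)]
weights = train(examples, 2)
```

After training, the model scores clearly winning positions near 1 and clearly losing ones near 0 without anyone having written those rules by hand, which is the shift the article describes.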

Complicated Positional Concepts

After just 72 hours of training, Giraffe was playing at the level of an FIDE International Master, placing it in roughly the top 2 per cent of tournament players. It predicts the best move 46% of the time, and places the best move in its top three ranking 70% of the time.

"Unlike most chess engines in existence today, Giraffe derives its playing strength not from being able to see very far ahead, but from being able to evaluate tricky positions accurately, and understanding complicated positional concepts that are intuitive to humans, but have been elusive to chess engines for a long time," said Lai in an interview with Technology Review.

"This is especially important in the opening and endgame phases, where it plays exceptionally well."

Lai has published the details of his machine-learning algorithm on the arXiv preprint server.

Image credit: John Morgan // CC BY 2.0