Toronto Star

Artificial intelligence smarter than you think

Google program defeats humans at Go, ancient and difficult board game

- KATE ALLEN SCIENCE & TECHNOLOGY REPORTER

Mastering arcade games seems cute by comparison.

Researchers at DeepMind, the Google-owned artificial intelligence lab, announced Wednesday they had achieved a breakthrough not thought possible for at least another decade: a computer program that defeats humans at Go, an enormously complicated strategy game.

Their work, published in the journal Nature, is a significant leap forward from the algorithm Google DeepMind unveiled last year, which could beat Atari games such as Breakout and Space Invaders.

The team is now gunning for an even bigger trophy. In March, almost exactly 20 years after IBM’s Deep Blue computer first faced off against Garry Kasparov in a historic chess match, their program will challenge Lee Sedol, the world Go champion.

Exponentially more complicated than chess, Go is considered a “grand challenge” for artificial intelligence because it cannot be solved by number-crunching alone. Other programs have been designed to play Go, but with limited success, and AlphaGo, as Google DeepMind named its program, trounced them all. Then it swept the human European champion, Fan Hui, five games to none. “It’s amazing that they’ve finally done it,” said Geoffrey Hinton, an AI pioneer who works at Google and the University of Toronto.

“The applications are enormous,” said Yoshua Bengio, head of the Montreal Institute for Learning Algorithms at the Université de Montréal. “Many companies want to build better personal assistants and have your phone or computer dialogue with you. This is a really important industrial problem, and this could serve as a basis for it.”

Go — also known as Igo, Weiqi or Baduk — may be the oldest board game in the world, originating more than 2,500 years ago in China. Its simplicity belies its complexity. One player takes black stones and the other white; they alternate placing stones on the points of a 19-by-19 grid, each trying to surround the most territory and capture the opponent’s stones by surrounding them.
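The capture rule described above — a group of stones is lost once it is completely surrounded — can be made concrete with a small sketch. This is purely illustrative code, not anything from DeepMind: the board is a 19-by-19 grid, and a flood fill counts a group’s “liberties,” the empty points adjacent to it. A group with zero liberties is captured.

```python
# Illustrative sketch of Go's capture rule (not DeepMind's code).
EMPTY, BLACK, WHITE = ".", "B", "W"
SIZE = 19

def new_board():
    return [[EMPTY] * SIZE for _ in range(SIZE)]

def group_liberties(board, row, col):
    """Count the empty points adjacent to the stone group at (row, col)."""
    color = board[row][col]
    seen, liberties = {(row, col)}, set()
    stack = [(row, col)]
    while stack:
        r, c = stack.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < SIZE and 0 <= nc < SIZE:
                if board[nr][nc] == EMPTY:
                    liberties.add((nr, nc))
                elif board[nr][nc] == color and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    stack.append((nr, nc))
    return len(liberties)

board = new_board()
board[0][0] = BLACK                # a black stone in the corner
board[0][1] = board[1][0] = WHITE  # white stones on both of its liberties
print(group_liberties(board, 0, 0))  # 0 -> the black stone is captured
```

A corner stone has only two neighbouring points, so two white stones suffice to capture it, which is why corners are the cheapest places to surround.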

“Go is probably the most complex game ever devised by humans,” said Demis Hassabis, a Google DeepMind founder, who is an accomplished chess player and worked in gaming before becoming a machine learning researcher. “It has 10 to the power of 170 possible board positions, which is greater than the number of atoms in the universe. It takes a lifetime of study to master, and that’s what makes it so fascinating for humans to play, and also such a great challenge for AI research.”

Because of the number of possible board positions and length of the game, Go has a much vaster “search space” than chess and cannot be solved by computing every possible outcome — there are just too many to use so-called “brute force” calculation. That is also not the way humans learn to play games: we pick up patterns from watching others and from our own experience.
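The gap between the two games can be seen with back-of-the-envelope arithmetic. The figures below are rough, commonly cited estimates rather than exact counts: chess offers roughly 35 legal moves per turn over a game of roughly 80 half-moves, while Go offers roughly 250 moves per turn over roughly 150 moves.

```python
# Rough estimates only: branching factor ** game length, for chess and Go.
import math

chess_game_tree = 35 ** 80    # ~35 moves per turn, ~80 half-moves
go_game_tree = 250 ** 150     # ~250 moves per turn, ~150 moves

print(f"chess game tree ~10^{math.log10(chess_game_tree):.0f}")
print(f"go game tree    ~10^{math.log10(go_game_tree):.0f}")

# A crude upper bound on Go board positions: each of the 361 points
# is empty, black or white, giving 3^361 arrangements.
print(f"board positions < 3^361 ~ 10^{361 * math.log10(3):.0f}")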

The Google DeepMind researcher­s overcame this challenge by combining a number of approaches from “deep learning,” a type of artificial intelligen­ce that has achieved major successes in important tasks such as image and speech recognitio­n.

AlphaGo uses two different deep neural networks, an architectu­re that is styled on the way the human brain processes informatio­n, to process board positions and potential moves. The neural networks were trained with both supervised learning (inputting expert human moves), and reinforcem­ent learning (playing against itself and developing its own strategies). By combining these approaches, AlphaGo is able to limit the number of possible choices it must evaluate by selecting them more intelligen­tly and evaluating them more precisely.

“This approach makes AlphaGo’s search much more human-like than previous approaches,” said David Silver, another Google DeepMind researcher and first author on the Nature paper.

DeepMind was a small British startup when it published work on an algorithm that could beat Atari games. Google quickly bought the company for a reported $400 million, a move widely seen in Silicon Valley as a decision to acquire some of the top minds in artificial intelligen­ce and deep learning rather than any commercial product.

The move set off a kind of gold rush, with Facebook and Microsoft, among others, hiring their own deep learning talent. Facebook has also been working on an algorithm that can master Go, but Google’s team got there first.

 ?? NATURE ?? Demis Hassabis and colleagues at Google DeepMind, an artificial intelligen­ce company, announced they had created a computer program that defeated a human at Go, the classic strategy game.
NATURE Demis Hassabis and colleagues at Google DeepMind, an artificial intelligen­ce company, announced they had created a computer program that defeated a human at Go, the classic strategy game.

Newspapers in English

Newspapers from Canada