DeepMind finds way to overcome AI system’s forgetfulness problem

DEEPMIND, the London-based artificial intelligence company owned by Alphabet, Inc., claims it overcame a key limitation affecting one of the most promising machine learning technologies: the software’s inability to remember.

The breakthrough, described in a paper published Tuesday in the academic journal Proceedings of the National Academy of Sciences, may open the way for artificial intelligence systems to be more easily applied to multiple tasks, instead of being narrowly trained for one purpose. It should also improve the ability of AI systems to transfer knowledge between tasks and to master a sequence of linked steps.

Neural networks, software loosely based on the structure of synapses in the human brain, are considered the best machine learning techniques for language translation, image classification and image generation. But these networks suffer from a major flaw scientists call “catastrophic forgetting.” They exist in a kind of perpetual present: every time the network is given new data, it overwrites what it has previously learned.

In human brains, neuroscientists believe that one way in which memory works is that connections between neurons that seem important for a particular skill become less likely to be rewired. The DeepMind researchers drew on this theory, known as synaptic consolidation, to create a way to allow neural networks to remember. They worked with Claudia Clopath, a neuroscientist at London’s Imperial College, who is a coauthor on the paper.

The researchers created an algorithm, called Elastic Weight Consolidation or EWC, that computes how important each connection in a neural network is to the task it has just learned and then assigns that connection a mathematical weight proportional to its importance. The weight slows down the rate at which the value of that particular connection can be altered. In this way, the network is able to retain knowledge while learning a new task.
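For readers curious what such an importance-weighted penalty looks like in code, the following is a minimal, hypothetical sketch in PyTorch of the general idea: estimate how much each parameter mattered to the old task, then penalize changes to those parameters in proportion to that importance. The function names (fisher_diagonal, ewc_penalty) and the strength setting lam are illustrative assumptions, not DeepMind’s actual implementation.

```python
# Illustrative sketch of an EWC-style penalty (not DeepMind's code).
import torch
import torch.nn.functional as F

def fisher_diagonal(model, data_loader, device="cpu"):
    """Estimate per-parameter importance on the task just learned
    as the average squared gradient of the log-likelihood."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in data_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        log_probs = F.log_softmax(model(x), dim=1)
        F.nll_loss(log_probs, y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / len(data_loader) for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params, lam=1000.0):
    """Quadratic penalty that slows changes to weights the old task relied on."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return (lam / 2.0) * penalty

# After finishing the old task, snapshot its parameters:
#   old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
# Then, while training on the new task, add the penalty to the usual loss:
#   total_loss = new_task_loss + ewc_penalty(model, fisher, old_params)
```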

The researchers tested the algorithm on ten classic Atari games, which the neural network had to learn to play from scratch. DeepMind had previously created an AI agent able to play these games as well as or better than any human player. But that earlier AI could only learn one game at a time. If it was later shown one of the first games it had learned, it had to start all over again.

The new EWC-enabled software was able to learn all ten games and, on average, come close to human-level performance on all of them. It did not, however, perform as well as a neural network trained specifically for just one game, the researchers wrote.

DeepMind, which Alphabet purchased for 400 million pounds ($486.4 million) in 2014, is best known for having created AI software able to beat the world’s best players at the ancient Asian strategy game Go. That achievement was considered a major milestone in computer science because Go has so many possible moves that a computer cannot simply figure out the best move in every situation and instead must rely on something more akin to intuition, making educated guesses based on its own experience. — Bloomberg
