Albuquerque Journal

Computer learns how to imagine the future

At Los Alamos, machines begin to think ahead

BY GARRETT KENYON

In many ways, the human brain is still the best computer around. For one, it's highly efficient. Our largest supercomputers require millions of watts, enough to power a small town, but the human brain uses about as much power as a 20-watt bulb. While teenagers may seem to take forever to learn what their parents regard as basic life skills, humans and other animals are also capable of learning very quickly. Most of all, the brain is truly great at sorting through torrents of data to find the relevant information to act on.

At an early age, humans can reliably perform feats such as distinguishing an ostrich from a school bus — an achievement that seems simple, but illustrates the kind of task that even our most powerful computer vision systems can get wrong. We can also tell a moving car from the static background and predict where the car will be in the next half-second. Challenges like these, and far more complex ones, expose the limitations in our ability to make computers think the way people do. But recent research at Los Alamos National Laboratory is changing all that.

Neuroscientists and computer scientists call this field neuromimetic computing — building computers inspired by how the cerebral cortex works. The cerebral cortex relies on billions of small biological "processors" called neurons, which store and process information in densely interconnected circuits called neural networks. At Los Alamos, researchers are simulating biological neural networks on supercomputers, enabling machines to learn about their surroundings, interpret data and make predictions much the way humans do.

This kind of machine learning is easy to grasp in principle, but hard to implement in a computer. Teaching neuromimetic machines to take on huge tasks like predicting weather and simulating nuclear physics is an enterprise requiring the latest in high-performance computing resources. With the laboratory's world-class supercomputing center, Los Alamos researchers have become leaders in high-performance neural simulation and neuromimetic applications.

The lab has developed codes that run efficiently on supercomputers with millions of processing cores to crunch vast amounts of data and perform a mind-boggling number of calculations (over 10 quadrillion!) every second. Until recently, however, researchers attempting to simulate neural processing at anything close to the scale and complexity of the brain's cortical circuits have been stymied by limitations on computer memory and computational power.

All that has changed with the new Trinity supercomputer at Los Alamos, which became fully operational in mid-2017. The fastest computer in the United States, Trinity has unique capabilities designed for the National Nuclear Security Administration's stockpile stewardship mission, which includes highly complex nuclear simulations in the absence of live nuclear testing. That capability allows a fundamentally different approach to large-scale cortical simulations, enabling an unprecedented leap in the ability to model neural processing.

To test that capability on a limited-scale problem, computer scientists and neuroscientists at Los Alamos created a "sparse prediction machine" that executes a neural network on Trinity. A sparse prediction machine is designed to work like the brain: researchers expose it to data — in this case, thousands of video clips, each depicting a particular object, such as a horse running across a field or a car driving down a road.

Cognitive psychologists tell us that by the age of six to nine months, human infants can distinguish objects from background. Apparently, human infants learn about the visual world by training their neural networks on what they see while being toted around by their parents, well before they can walk or talk.

Similarly, the neurons in a sparse prediction machine learn about the visual world simply by watching thousands of video sequences, without using any of the associated human-provided labels — a major difference from other machine-learning approaches. A sparse prediction machine is simply exposed to a wide variety of video clips, much the way a child accumulates visual experience.

When the sparse prediction machine on Trinity was exposed to thousands of eight-frame video sequences, each neuron eventually learned to represent a particular visual pattern. Whereas a human infant can have only a single visual experience at any given moment, the scale of Trinity meant the machine could train on 400 video clips simultaneously, greatly accelerating the learning process. The sparse prediction machine then combines the representations learned by the individual neurons to predict the eighth frame from the preceding seven: for example, predicting how a car moves against a static background.
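To make that concrete, here is a minimal sketch of such a training setup in Python, using the PyTorch library. Everything in it is an assumption for illustration: the tiny network, the frame sizes and the random stand-in clips are invented for this example and bear no resemblance to the Lab's actual code.

```python
# Illustrative sketch only: a toy next-frame predictor.
# All sizes and data here are invented placeholders.
import torch
import torch.nn as nn

FRAMES_IN, H, W = 7, 64, 64  # seven input frames of 64x64 grayscale video

# Treat the seven input frames as seven image channels; output one frame.
model = nn.Sequential(
    nn.Conv2d(FRAMES_IN, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(20):
    # Stand-in for a batch of 400 clips, mirroring Trinity's parallelism.
    clips = torch.rand(400, 8, H, W)             # random placeholder data
    inputs, target = clips[:, :7], clips[:, 7:]  # frames 1-7 in, frame 8 out
    loss = nn.functional.mse_loss(model(inputs), target)  # pixel error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Trained on real clips instead of random ones, a loop like this would gradually teach the network to guess frame eight from frames one through seven.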

The Los Alamos sparse prediction machine consists of two neural networks executed in parallel: one called the Oracle, which can see the future, and the other called the Muggle, which learns to imitate the Oracle's representations of future video frames it can't see directly. With Trinity's power, the Los Alamos team more accurately simulates the way a brain handles information by using the fewest neurons needed at any given moment to explain the information at hand. That's the "sparse" part, and it makes the brain very efficient and very powerful at making inferences about the world — and, hopefully, a computer more efficient and powerful, too.
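The "sparse" idea itself can be sketched in a few lines. The routine below is generic soft-threshold sparse coding (an ISTA-style iteration), shown only to illustrate the principle of explaining an input with as few active neurons as possible; the dictionary, the input and every parameter here are placeholders, not the Los Alamos implementation.

```python
# Illustrative sketch only: generic sparse coding by soft thresholding.
# Explains an input x as a combination of a few "neurons" (columns of D).
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((256, 512))  # placeholder features, one per neuron
D /= np.linalg.norm(D, axis=0)       # normalize each neuron's feature
x = rng.standard_normal(256)         # a placeholder input to be explained

a = np.zeros(512)                    # neuron activities, all silent at first
step, penalty = 0.1, 0.5             # step size and sparsity penalty
for _ in range(200):
    residual = x - D @ a             # what the active neurons fail to explain
    a = a + step * (D.T @ residual)  # nudge activities to reduce that error
    a = np.sign(a) * np.maximum(np.abs(a) - step * penalty, 0.0)  # silence weak ones

print(f"{np.count_nonzero(a)} of 512 neurons active")  # only a handful fire
```

Raising the penalty silences more neurons, forcing the few that remain to carry more of the explanation, the same trade-off the brain appears to strike.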

After being trained in this way, the sparse prediction machine was able to create a new video frame that would naturally follow from the previous, real-world video frames. It saw seven video frames and predicted the eighth. In one example, it was able to continue the motion of a car against a static background. The computer could imagine the future.

This ability to predict video frames based on machine learning is a meaningful achievement in neuromimetic computing, but the field still has a long way to go. As one of the principal scientific grand challenges of this century, understanding the computational capability of the human brain will transform such wide-ranging research and practical applications as weather forecasting and fusion energy research, cancer diagnosis and the advanced numerical simulations that support the stockpile stewardship program in lieu of real-world testing.

To support all those efforts, Los Alamos will continue experimenting with sparse prediction machines in neuromimetic computing, learning more about both the brain and computing, along with as-yet undiscovered applications on the wide, largely unexplored frontiers of quantum computing. We can't predict where that exploration will lead, but like that made-up eighth video frame of the car, it's bound to be the logical next step.

Garrett Kenyon is a computer scientist specializing in neurally inspired computing in the Information Sciences group at Los Alamos National Laboratory, where he studies the brain and models of neural networks on the Lab's high-performance computers. Other members of the sparse prediction machine project were Boram Yoon of the Applied Computer Science group and Peter Schultz of the New Mexico Consortium.

COURTESY OF LANL
In this sequence of video frames, the first three are machine-learning data representations of scanned videos. In the fourth frame, the machine predicted or "imagined" what the next frame would be, based on the data. The work was performed at Los Alamos...
Garrett Kenyon
