PC GAMER (US)

TECH REPORT

GOOGLE is already putting machine learning to work on improving itself


We have previously suggested that a robot-controlled future in which humans are hunted for sport is a likely one, but we’ve had a bit of trouble with our blood pressure lately, and have decided techno-utopianism, in the style of Iain M Banks’s Culture novels, is just as reasonable.

The crucial difference here is that, while the murderous AI of the Terminator is a human invention, the benevolent Minds that oversee the Culture are AIs built by AIs, and it seems to work out better that way.

There’s a step in microchip design known as floorplanning, in which the building blocks of a chip are marked out before they go anywhere near a silicon wafer. You put some processing cores here, a touch of cache RAM there, top it all off with some GPU cores over here, and decorate with off-chip connections around the edges. Related areas cluster together so that the electrical pathways most commonly taken are the shortest. The whole thing needs to be verified before it’s made, of course, and one of the great skills of chip design is floorplanning a chip to be as efficient as possible.
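As a toy illustration of the kind of objective a floorplanner chases, here is a minimal sketch using half-perimeter wirelength, a common simplified proxy for "keep the most-used pathways short". The block positions and the netlist below are our own made-up assumptions, not anything from a real design flow:

```python
# Toy half-perimeter wirelength (HPWL) for a floorplan. Block positions
# (x, y) and the connections between them are invented for illustration.
blocks = {"core0": (0, 0), "core1": (4, 0), "cache": (2, 1), "gpu": (2, 4)}
nets = [("core0", "cache"), ("core1", "cache"), ("cache", "gpu")]

def hpwl(net):
    """Half the perimeter of the bounding box around a net's blocks."""
    xs = [blocks[b][0] for b in net]
    ys = [blocks[b][1] for b in net]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

total = sum(hpwl(net) for net in nets)
print(total)  # lower totals mean a tighter floorplan under this metric
```

Shuffling the block coordinates and re-summing is, in miniature, the optimization game the rest of this article is about.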

And it’s very much like a game, the sort of optimization problem that you see in SpaceChem or StarCraft build queues. The sort of thing, as we’ve definitely mentioned here before, that AIs are good at.

So that’s exactly what Google has done. In its data centers, Google uses a type of processor called a TPU, or Tensor Processing Unit. TPUs power AI and deep learning applications, and are essentially ASICs (Application-Specific Integrated Circuits); at their most basic level they’re not too dissimilar to GPUs stripped of their texture-mapping hardware. The tensors being processed here are the same ones being thrown about by the Tensor Cores in Nvidia’s RTX graphics cards to power their deep-learning approach to upscaling.

So what’s a tensor? Well, it’s a number. Quite a lot of numbers. A single number is a scalar; a rectangular array of scalars, arranged into rows and columns, is a matrix (think of your PC’s screen, with all the pixels labelled by their coordinates). Define a line between two points on that grid and you’ve got a vector, the basis of 1980s vector graphics in games such as Starglider and Asteroids. Scalars, vectors, and matrices are all tensors of increasing rank, and once you decide you want to do linear algebra (and we have evidence that some people do make this decision) you can create tensors that map between points on multidimensional (more than three dimensions) arrays of numbers, and then you can start multiplying them together.
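The scalar-to-matrix ladder can be sketched with NumPy arrays, where a tensor's rank is just the number of axes. The values here are arbitrary examples of ours, not anything from Google's TPUs:

```python
import numpy as np

# Tensors of increasing rank, illustrated as NumPy arrays.
scalar = np.float64(3.0)            # rank 0: a single number
vector = np.array([1.0, 2.0])       # rank 1: a line of numbers
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])     # rank 2: rows and columns
cube = np.zeros((2, 2, 2))          # rank 3: a stack of matrices

# "Multiplying them together": a matrix maps one vector to another.
result = matrix @ vector            # [1*1 + 2*2, 3*1 + 4*2]
print(result)                       # [ 5. 11.]
```

Deep learning frameworks do essentially this, just with far bigger arrays and far more of them, which is what TPU silicon is built to accelerate.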

These tensors, ever-shifting in their linear algebra (a pioneer of the field, German polymath Hermann Grassmann, is worth looking up for his remarkable beard as much as for achievements that went largely unrecognized in his lifetime), are what allow the training of deep learning systems: essentially getting an AI to do the same thing over and over, marking its efforts, until it gets really good at one task. Google Brain decided to try AI floorplanning with its TPUs, and the results of this experiment are not only better than human-designed chips, they’re structurally different too. And they’re in Google’s data centers.

So how do you train an AI to lay out something as complex as a TPU? “We use a method called reinforcement learning,” says Anna Goldie from Google Brain, who’s also co-author of the paper ‘A graph placement methodology for fast chip design’ published in Nature that sets out AI floorplanning. “We have this neural network that places the components of the chip one at a time onto a canvas, and after it’s placed all of them, we get a measure of how good the placement is, and feed that back into the network. Then we do it again, tens of thousands of times, until it gets really good at it.”
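The loop Goldie describes (propose a placement, score it, feed the score back, repeat tens of thousands of times) can be caricatured in a few lines. To be clear, this is a hedged sketch: the real system is a neural-network policy trained with reinforcement learning, while the stand-in below is a simple random-search loop of our own invention that only mimics the propose/score/feedback shape:

```python
import random

def score_placement(placement):
    """Toy reward: placements closer to the canvas center score higher."""
    return -sum(abs(x) + abs(y) for x, y in placement)

def propose_placement(spread):
    """Place four hypothetical components, with `spread` as exploration."""
    return [(random.uniform(-spread, spread),
             random.uniform(-spread, spread)) for _ in range(4)]

best_reward = float("-inf")
spread = 10.0
for episode in range(10_000):        # "tens of thousands of times"
    placement = propose_placement(spread)
    reward = score_placement(placement)
    if reward > best_reward:         # the feedback step: keep what worked
        best_reward = reward
        spread *= 0.99               # and search more tightly around it
```

The real agent replaces the random proposals with a learned graph neural network, which is why it generalizes to new chips instead of starting from scratch each time.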

“The agent takes up to six hours to learn how to do the [chip component] placement,” says Goldie’s colleague and paper co-author Azalia Mirhoseini. “But it took us a while to train the agent and come up with the algorithm ourselves.”

BRAIN POWER

Perhaps the most provocative claim in Google Brain’s paper is that the chips being produced by an AI trained in six hours are superior to those created by the highly skilled humans who usually do the job. Mirhoseini explains that it’s just as much about the process as the chips themselves: “These chips are going to be in the data center and serve many, many users, so if you can make a next-generation chip even one day faster, one that’s more energy-efficient and has more compute [capability], that’s where it makes a huge difference. We’ve made this process much faster, and fully automated.” But there are structural differences too. “They do surprising things,” says Goldie. “We see strange placements that humans maybe wouldn’t have come up with.”

“The placements that [the neural network] comes up with can look very different from what the human experts will do,” says Mirhoseini. “They have a much more organic shape, like it’s more curved, there are these donut-shaped placements, but it all makes sense, because maybe these shapes will help reduce the distance from the things in the center to the donut around the edge.”

The next step is to use these chips to design and run the AI algorithms of the future. And while it takes a while for a new TPU to find its way into a data center in large numbers, the process is underway. There’s an AI revolution coming, but a good part of it will be used for recognizing text in photos or reading our email in order to better serve us advertising. The techno-utopia remains, as ever, tantalizingly close.

Ian Evenden

THE RESULTS ARE BETTER THAN HUMAN-DESIGNED CHIPS

FAR RIGHT: One of Google’s third-generation TPUs.
LEFT: An Intel 80486 CPU showing the different parts etched into the silicon.
RIGHT: Dr Azalia Mirhoseini co-founded and co-leads Google Research’s machine learning team.
