Maximum PC

Supercomputers vs. PCs vs. the Human Brain

GO BACK A DECADE or two, and all the fastest supercomputers were running RISC processors. Lately, though, the Top 500 charts have been filled with systems running x86 chips, often helped along by compute GPUs, such as Nvidia’s Tesla. For several years, the fastest machine on the list was China’s Tianhe-2.

Jarred Walton has been a PC and gaming enthusiast for over 30 years.

It uses a combination of 32,000 12-core Xeon CPUs paired with 48,000 Xeon Phi 3120P co-processors (57 cores each), giving it 3.12 million processing cores.

If that sounds like a lot, the new TaihuLight supercomputer nearly triples that performance, thanks to a custom-designed ShenWei SW26010 processor. We don’t know the manufacturing process, but we do know that each chip has 260 cores, with one chip per node. Combined, that gives TaihuLight 10,649,600 cores, each running at 1.45GHz, and the result is a supercomputer capable of 93 PFLOPS in LINPACK.
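
If you want to sanity-check those core counts, a few lines of Python do the trick (all figures come straight from this column and the photo caption; nothing here is new data):

```python
# Quick arithmetic check of the core counts quoted above.
tianhe2_cores = 32_000 * 12 + 48_000 * 57    # Xeon cores plus Xeon Phi cores
taihulight_cores = 40_960 * 260              # nodes times cores per SW26010 chip
print(f"{tianhe2_cores:,}")                  # 3,120,000 -> "3.12 million processing cores"
print(f"{taihulight_cores:,}")               # 10,649,600 -> matches the quoted figure
```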

That’s an interesting figure, as experts estimate that simulating the human brain will require 50–1,000 PFLOPS. TaihuLight has crossed that lower threshold, and we’re likely to see supercomputers in the 300–500 PFLOPS range by 2020.

How does this compare to Dream Machine 2016? It’s not looking so hot for PCs, with DM16 puttering along at around 0.02 PFLOPS. But just as we can’t directly compare FLOPS and the human brain (with pen and paper, I’d rate about 0.03 FLOPS), comparing supercomputers with PCs that deliver real-time interaction is messy. Can TaihuLight run Crysis? Not really; it’s not designed to handle real-time user input and graphics. But with the right software, it might be able to learn how to play Crysis.
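
To put those numbers side by side, here’s a rough back-of-the-envelope comparison (same figures as above, nothing more precise than that):

```python
# Orders of magnitude, using the figures quoted in this column.
dm16_pflops = 0.02                     # Dream Machine 2016, roughly
taihulight_pflops = 93                 # TaihuLight's LINPACK result
brain_low, brain_high = 50, 1_000      # expert estimates for simulating a human brain

print(round(taihulight_pflops / dm16_pflops))    # ~4,650 Dream Machines per TaihuLight
print(taihulight_pflops >= brain_low)            # True: past the low end of the estimate
print(round(brain_high / taihulight_pflops, 1))  # ~10.8x more needed for the high end
```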

That’s the real difficulty with supercomputers and artificial intelligence: the software. All the processing potential in the world is useless if it’s not put to good use, and good algorithms can mean the difference between minutes and days (or years) when it’s time to solve problems. In the past, supercomputers have used expert systems, built with input from experts in a specific field to solve one particular problem (playing chess or checkers, for example). Now, the big hype is all about deep learning, where convolutional neural networks can go many layers deep, and act more like a human brain than a bunch of hyper-fast FLOPS calculators.
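
To give a flavor of what “convolutional” means, here’s a minimal sketch in plain NumPy (untrained random filters, nothing like a real network): each layer slides a small filter across the previous layer’s output, and stacking those layers is what makes a network “deep.”

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over a single-channel image (valid padding)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)   # the usual nonlinearity between layers

# Stack a few layers to illustrate "going many layers deep".
rng = np.random.default_rng(0)
activation = rng.standard_normal((28, 28))       # a toy 28x28 "image"
for _ in range(3):
    kernel = rng.standard_normal((3, 3)) * 0.1   # random, untrained filter
    activation = relu(conv2d(activation, kernel))
print(activation.shape)   # (22, 22): each 3x3 layer shrinks the map by 2 per side
```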

One great example of this is the world of board games. In 1997, IBM’s Deep Blue beat Garry Kasparov in a six-game chess match, the first time a computer had defeated a reigning world chess champion in a match. It did so mostly by brute force, with Deep Blue searching and evaluating millions of positions to choose each move. It was projected that, due to Go’s far greater complexity, it would be 100 years before a computer could do the same with the Chinese game. Fast forward to 2016, and Google DeepMind’s AlphaGo has done just that. The difference? Software.
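
Before we get to AlphaGo, the brute-force idea is easy to sketch on a toy game. Here’s a minimal minimax search for tic-tac-toe in Python; Deep Blue’s chess search was enormously more sophisticated (custom hardware, pruning, hand-tuned evaluation), but the core idea of exhaustively scoring positions is the same:

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6), (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score the position for the side to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None   # the previous move ended the game
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                            # board full: draw
    best = (-2, None)
    opponent = 'O' if player == 'X' else 'X'
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, opponent)
        if -score > best[0]:                      # good for the opponent is bad for us
            best = (-score, m)
    return best

# Searches a few hundred thousand positions; perfect play from an empty board is a draw.
print(minimax(' ' * 9, 'X'))   # (0, 0)
```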

Instead of brute force, AlphaGo uses deep learning, playing millions of games against itself and creating its own strategies. And the hardware behind AlphaGo is relatively tame compared to the top supercomputers: just 1,202 CPUs and 176 GPUs. Take these algorithms, apply them to other tasks, and imagine the sort of learning that could be done with tens of thousands of processors.
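
And here’s the contrasting idea in miniature: a tabular self-play learner for the same toy game, sketched in Python. AlphaGo’s deep networks and tree search are far beyond this, but the loop is recognizable: play against yourself, then nudge your value estimates toward the results.

```python
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6), (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

value = defaultdict(float)   # value[(board, player)] ~ expected result for player

def self_play_game(epsilon=0.1, alpha=0.05):
    board, player, history = ' ' * 9, 'X', []
    while True:
        moves = [i for i, c in enumerate(board) if c == ' ']
        if random.random() < epsilon:
            m = random.choice(moves)   # explore
        else:                          # exploit what has been learned so far
            m = max(moves, key=lambda i: value[(board[:i] + player + board[i + 1:], player)])
        board = board[:m] + player + board[m + 1:]
        history.append((board, player))
        w = winner(board)
        if w or ' ' not in board:
            for b, p in history:       # nudge every visited position toward the outcome
                result = 0.0 if w is None else (1.0 if w == p else -1.0)
                value[(b, p)] += alpha * (result - value[(b, p)])
            return
        player = 'O' if player == 'X' else 'X'

for _ in range(20_000):   # AlphaGo played millions of games; a toy game needs far fewer
    self_play_game()

# Print X's favorite opening square after self-play (typically the centre or a corner).
openings = {i: value[(' ' * i + 'X' + ' ' * (8 - i), 'X')] for i in range(9)}
print(max(openings, key=openings.get))
```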

Skynet is one fearmongering prospect, sure, but I’m more optimistic. I, for one, welcome our new digital overlords. Living in a virtual reality matrix doesn’t sound so bad. Now, where’s my Vive?

Can TaihuLight run Crysis? No. But with the right software, it might learn how to play it.

China’s TaihuLight has 40,960 nodes, with 260 cores per node.
