Maximum PC

LIFE AFTER MOORE’S LAW

It’s had a very good run, but Moore’s Law is done, dusted, and dead. Jeremy Laird investigates the future of computing in the post-exponential era


Could 10nm be the end of the line for Intel?

FIFTY YEARS IS A LONG TIME for any prediction to hold true. It’s an aeon when it comes to predicting the future of cutting-edge technology. Yet that’s pretty much how long Moore’s Law held together as a predictor of progress in computing power. Now just about everybody agrees that Moore’s Law is done. Computer chips are no longer doubling in complexity every two years. Intel’s most recent roadmap update, to take just one example, pushed volume shipments of its next-gen 10nm processors out to 2019. That’s almost five years after Intel began pumping out 14nm chips in significant volumes. Likewise, Intel’s 14nm node came three years after 22nm. Welcome to the post-Moore’s Law era, where faster computing for less money is no longer an automatic assumption.

That’s a radical change that could threaten progress well beyond conventional computing. Advances in everything from AI and self-driving cars to medicine, biotechnology, and engineering are all predicated, at least in part, on the assumption that available computing power increases not only reliably but exponentially. It’s the latter implication that has been most revolutionary. The exponential increase in computing power for nearly 50 years was unlike anything the world had seen before. And it raises the question of whether we’ll ever see anything like it again.

The simple answer is almost certainly no. The regular cadence of Moore’s Law as it pertains to integrated circuit engineering is over, and there’s no obvious candidate to replace it. The good news, however, is that there is no shortage of candidate technologies that could provide anything from incremental improvements to revolutions so radical they could render the very notion of increasing compute power redundant. The future of computing will no longer be a model of serene progress; it will very likely be measured in paralyzing fits and dramatic starts.

THE SMARTPHONE in your pocket is more powerful than room-filling mainframe computers of yore. Not just a little more powerful, but many orders of magnitude more powerful. That, in the proverbial nutshell, is Moore’s Law, and its implications are as incredible in raw technical terms as they have been transformational for human existence. Almost every aspect of modern life is dependent, ultimately, on computing.

But Moore’s Law is history, and the future of computing, if it is to advance, must rely on some other paradigm. Want specifics? The Apple iPhone X is capable of around 200,000 times more floating point operations per second than the CDC 6600 of 1964, considered by most to be the world’s first supercomputer, and roughly the size of a pickup truck. Admittedly, that’s not an exactly like-for-like comparison, and it would attract numerous qualifications under close inspection, yet it gives an accurate sense of the monumental and exponential implications of Moore’s Law, and the astonishing progress in computing power over the last 50 years.
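For the curious, that ratio is easy to sanity-check with the usual ballpark figures (the CDC 6600’s oft-quoted 3 megaflops peak, and roughly 600 gigaflops for the iPhone X; both are rough estimates, not measurements):

```python
# Back-of-envelope check, using commonly quoted ballpark figures.
cdc_6600_flops = 3e6      # CDC 6600 (1964): roughly 3 megaflops peak
iphone_x_flops = 600e9    # iPhone X (2017): roughly 600 gigaflops, GPU included

print(f"{iphone_x_flops / cdc_6600_flops:,.0f}x")  # 200,000x
```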

Moore’s Law, of course, is the observation that transistor densities in integrated circuits double every two years. Put another way, it says computer chips either double in complexity or halve in cost—or some mix of the two—every couple of years. Or rather, they did from around 1975, when Gordon Moore (co-founder of Intel) adjusted his original observational time frame from doubling every year to doubling every two years, until roughly five years ago, when it became apparent that development had slowed.
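Boiled down, the law is a simple doubling formula. A minimal sketch, taking Intel’s 4004 of 1971 (around 2,300 transistors) as an illustrative starting point:

```python
# Moore's Law as a doubling rule: transistor counts double every two years.
def projected_transistors(years, start=2_300, doubling_period=2):
    return start * 2 ** (years / doubling_period)

# 40 years on from the 4004's ~2,300 transistors:
print(f"{projected_transistors(40):,.0f}")  # ~2.4 billion, the right ballpark for 2011
```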

In 2015, Intel confirmed that the cadence of Moore’s Law, as far as it was concerned, had slowed to 2.5 years with the transition from 22nm silicon chip production to 14nm. Fast-forward to the second half of 2018, and it’s clear that Intel’s step from 14nm to 10nm will require even longer—at least four years, probably getting on for five. The other major players in chip production, including Taiwanese giant TSMC and South Korea’s Samsung, have all suffered their own delays. The upshot is a consensus that the Moore’s Law that pertained for around four decades is no longer.

Not that this is a surprise. The very nature of conventional integrated circuits guarantees the cadence of ever-shrinking transistors can’t go on forever. The approach was sure to bang up against the limitations of matter eventually. Once you’re making transistors from a handful of atoms, you’ve nowhere left to go.

The demise of Moore’s Law isn’t the only challenge facing conventional computing based on integrated circuits. As a happy corollary to increasing chip complexity, the shrinking proportions of transistors have been accompanied by an increase in the rate at which they can be switched on and off, and likewise a reduction in per-transistor power consumption. Combine the two, and you have both increased operating frequencies and reduced energy consumption per unit of compute power. That’s pretty much a free lunch in computing terms, and it’s been just as important for progressing overall performance as raw transistor density.
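That free lunch has a name, Dennard scaling: the dynamic power of switching logic is roughly proportional to capacitance times voltage squared times frequency, so smaller transistors could clock higher while still drawing less power. A toy illustration, with made-up numbers:

```python
# Dynamic switching power: P ~ C * V^2 * f (illustrative values only).
def dynamic_power(c, v, f):
    return c * v ** 2 * f

base = dynamic_power(c=1e-15, v=1.2, f=3e9)
# One ideal shrink by a factor k: C and V fall by k, frequency rises by k.
k = 1.4
shrunk = dynamic_power(c=1e-15 / k, v=1.2 / k, f=3e9 * k)
print(f"{shrunk / base:.2f}x the power, at {k}x the clock")  # ~0.51x
```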

Unfortunately, improvements in operating frequency and energy efficiency have been even shorter-lived than Moore’s Law. In 2004, Intel hit 3.8GHz with its Pentium 4, and talk was of 10GHz computing. Nearly 15 years later, the sustainable clock speeds of its processors have improved by scarcely 1GHz. More recently, current leakage has become an increasing problem as transistors have grown smaller. Small enough, in fact, to find themselves hostage to quantum-level physical phenomena, such as quantum tunneling, which allows individual electrons to effectively leap across insulation barriers and thus “leak” energy and generate heat. The impact of that alongside the slowing of Moore’s Law is profound. Where once it could be assumed that computer chips would get faster, more efficient, and cheaper, all at the same time, it’s no longer possible to be confident about substantial gains by any of those metrics.

But is it that big a problem? After all, when it comes to desktop computing, many argue today’s CPUs are already powerful enough. The extent to which PC processor performance has stagnated in recent years has also had at least as much to do with a lack of competition as the wheels coming off Moore’s Law. Observe the impact AMD’s Ryzen CPUs had on Intel. The latter stuck with four cores for mainstream desktop chips for around a decade, but a year after Ryzen appeared, Intel is launching eight-core models for its mainstream socket, already has 18-core enthusiast chips, and plans to up that to 28 cores in the near future.

Of course, all that involves a rather CPU-centric view of the universe. Other areas of computing remain predicated on, and reliant upon, something at least close to Moore’s Law rumbling on. True desktop computing power in a pocketable device isn’t going to happen without substantial further progress, for instance. Ditto photorealistic computer graphics rendered in real time. However, it’s the technologies that promise the most wide-ranging impact on human life, including AI, robotics, machine learning, and biotechnology, that have the most to lose from the demise of Moore’s Law. The scope and range of those endeavors will be curtailed if advances in compute power stall.

But what can replace Moore’s Law and drive computing power forward? The good news is that numerous candidates exist. Indeed, some industry observers think the death of Moore’s Law is long overdue. In recent decades, the assumption that ever more conventional compute power will become available has arguably made software developers lazy, and stymied research into alternative computing hardware paradigms. Why put in effort and money, when cheaper, faster computer chips are sure to solve the problem? With Moore’s Law no longer in force, the impetus to develop alternatives is far more compelling.

The bad news: There’s probably no single technology, idea, or approach that will directly replace Moore’s Law and the conventional integrated circuit’s incredible capacity for self-improvement. If one thing seems fairly certain, it’s that the future progress of computing will be far less predictable, far less regular. Instead, progress is likely to come in sharp leaps after uneventful lulls.

However, if there is a single technology that offers the biggest theoretical upside, it’s quantum computing. Its raw potential is mind-boggling. Long story short, it’s possible to conceptualize a single quantum computer of remarkable simplicity and efficiency that’s capable of not just matching the combined number-crunching muscle of all existing computers, but also executing essentially as many calculations at once as is practically useful. As Andy Rubin, of Android OS fame, said, “If you have computing that is as powerful as this could be, you might only need one [computer].”

What’s more, quantum computing isn’t a new idea, and its core premise is well established and understood. Yet it remains not just thoroughly exotic, but also controversial. The basics go something like this: Conventional computing operates in the binary realm of zeros and ones, aka bits. A transistor, the basic component in a classical computer, is thus either off or on, and nothing in between. Not so with quantum computing. Thanks to a property known as superposition, which prevails when dealing with very tiny atomic and subatomic particles, such as individual electrons, a quantum computing bit can be not just on and off at the same time, but also in any of a huge array of what you might call hybrid superpositions between on and off. This is a qubit, and it’s the basic building block of quantum computing.
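The math behind that is simple enough to sketch in a few lines. Here’s a minimal simulation of a single qubit in an equal superposition (just the arithmetic, not a model of any real quantum hardware):

```python
import numpy as np

# A qubit is a unit vector of two complex amplitudes, a|0> + b|1>.
# Measurement collapses it to 0 or 1 with probabilities |a|^2 and |b|^2.
state = np.array([1, 1], dtype=complex) / np.sqrt(2)  # equal superposition

probs = np.abs(state) ** 2
print(probs)                               # [0.5 0.5]
print(np.random.choice([0, 1], p=probs))   # one simulated measurement
```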

But the qubit’s incredible, peculiar, and non-binary capacity isn’t the whole story. It’s the way qubits can interact with each other, thanks to another quantum property, that enables the real computational fireworks. Welcome to the weird and wonderful world of quantum entanglement. The concept is tricky for even highly qualified physicists to truly grasp, let alone us mere mortals, but it involves the notion that quantum mechanical properties, such as “spin,” of two or more particles can be inextricably linked, even if separated by great distances. Change the spin of one particle, and the others instantly react, regardless of the distance between them.
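Entanglement, too, can be sketched as arithmetic. Here’s a toy simulation of the simplest entangled pair, a Bell state (again, the math rather than the hardware):

```python
import numpy as np

# Two-qubit Bell state (|00> + |11>) / sqrt(2): the four amplitudes
# correspond to the outcomes 00, 01, 10, and 11.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

probs = np.abs(bell) ** 2
print(dict(zip(["00", "01", "10", "11"], probs)))
# Only 00 and 11 ever occur: measure one qubit, and the other must match.
```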

So, the trick to achieving really powerful quantum computing is to entangle multiple qubits. Quantum-mechanically link, or entangle, two qubits, and you can perform four calculations simultaneously. Link three qubits, and two to the power of three calculations—a total of eight—are possible. Link four, and you can perform 16 calculations simultaneously. Keep on going, and when you hit 300 entangled qubits, you can perform more calculations in parallel than there are atoms in the known universe. That’s a lot of calculations—enough, in theory, to solve pretty much any computational problem you can imagine. In practice? Not so much. At least, not yet. Earlier this year, physicists at the University of Science and Technology of China set a new record by achieving quantum entanglement with 18 photon-based qubits. But progress toward higher qubit counts has been painstakingly slow, and there are no signs of it taking off any time soon.
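The scaling is straightforward exponential arithmetic, which is exactly why the numbers run away so quickly:

```python
# n entangled qubits span 2**n basis states, explored in parallel.
for n in (2, 3, 4, 18, 300):
    print(f"{n:>3} qubits -> {2 ** n:.3e} states")

# The observable universe holds an estimated ~10**80 atoms:
print(2 ** 300 > 10 ** 80)  # True: 2**300 is about 2e90
```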

The problem is that the superpositions of qubits are very fragile. Tiny amounts of heat or magnetic interference can cause them, in effect, to collapse. Building a quantum computer is therefore far from easy. What begins with a relatively simple network of notional qubits quickly turns into a complex machine utilizing liquid-helium cooling down to a fraction of a degree above absolute zero, surrounded by heavy-duty magnetic shielding.

What’s more, while commercial computers that exploit these quantum-level effects are available, they’re not only limited physically by the need for intense cooling and shielding, but also limited in computational scope. The space inside a D-Wave 2X, one of the most commercially successful quantum computers available, is mostly given over to a liquid-helium refrigeration system capable of cooling its qubits down to just a fraction of a degree above absolute zero, while much of the remaining machine is made up of magnetic shielding that protects the qubits from fluctuations in Earth’s magnetic field.

Despite all that technology and innovation, the D-Wave 2X’s computational prowess is restricted to finding the lowest value of complicated functions. Granted, such calculations can be very useful in engineering, which is why Google, NASA, and Lockheed Martin are all reportedly D-Wave clients, but such a machine hardly makes for a promising candidate technology for future pocket computers. Indeed, for a while, there was some controversy over whether D-Wave’s computers really were quantum. That has now been firmly established in the affirmative, but debate remains over whether D-Wave’s technology is actually any faster than a conventional computer, even for the narrow computation of which it is capable. Some even view the whole field of quantum computing as an irrelevance, equivalent to the alchemist’s quest to turn base metal into gold. Like quantum computing, that is indeed possible with today’s technology. But not to a degree that it’s actually useful.
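To give a flavor of what “finding the lowest value of complicated functions” means, here’s a toy classical analog: simulated annealing hunting for the minimum of a bumpy one-dimensional function. (D-Wave’s machines do something conceptually similar with quantum annealing in hardware; this sketch shares only the idea, not the mechanism.)

```python
import math, random

# Toy simulated annealing: find the minimum of a bumpy function by
# wandering, accepting uphill moves less often as the "temperature" falls.
def f(x):
    return x ** 2 + 3 * math.sin(5 * x)   # many local minima

x, temp = random.uniform(-3, 3), 2.0
for _ in range(20_000):
    candidate = x + random.gauss(0, 0.1)
    delta = f(candidate) - f(x)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
    temp *= 0.9995                         # cool gradually
print(f"x ~ {x:.2f}, f(x) ~ {f(x):.2f}")   # near the global minimum around x = -0.3
```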

So, in the short to medium term, quantum computing isn’t going to step in where Moore’s Law left off. Instead, what progress there is will come from a complex array of technologies, including not only quantum computing, but also biological analogs, a shift to cloud computing, more efficient circuit design, and dedicated chips built to do one thing really well (see boxouts). In the meantime, if there’s a take-home lesson, it’s that Moore’s Law has run its course, and the next 50 years of computing will be very different from the last 50. Whether they’ll be better or worse, only time will tell.

[Image captions]
Transistors built from carbon nanotubes could revolutionize computing efficiency.
No more Moore: Intel’s 10nm node is heavily delayed.
D-Wave’s 2X is the real, quantum-computing deal. But is it actually faster than conventional computers?
Today’s smartphones offer orders of magnitude more compute power than the first room-filling supercomputers.
AI, robots, or self-driving cars—the future depends on increasing compute power.
14nm was Intel’s first really problematic production node.
Try rendering Crysis on a CPU and you’ll soon understand the benefits of single-purpose chips.
