LIFE AFTER MOORE’S LAW
It’s had a very good run, but Moore’s Law is done, dusted, and dead. Jeremy Laird investigates the future of computing in the post-exponential era
Could 10nm be the end of the line for Intel?
FIFTY YEARS IS A LONG TIME for any prediction to hold true. It’s an aeon when it comes to predicting the future of cutting-edge technology. Yet that’s pretty much how long Moore’s Law held together as a predictor of progress in computing power. Now, just about everybody agrees that Moore’s Law is done. Computer chips are no longer doubling in complexity every two years. Intel’s most recent roadmap update, to take just one example, pushed volume shipments of its next-gen 10nm processors out to 2019. That’s almost five years after Intel began pumping out 14nm chips in significant volumes. Likewise, Intel’s 14nm node came three years after 22nm. Welcome to the post-Moore’s Law era, where faster computing for less money is no longer an automatic assumption.
That’s a radical change that could threaten progress well beyond conventional computing. Advances in everything from AI and self-driving cars to medicine, biotechnology, and engineering are all predicated, at least in part, on the assumption that available computing power increases not only reliably but exponentially. It’s the latter implication that has been most revolutionary. The exponential increase in computing power for nearly 50 years was unlike anything the world had seen before. And it raises the question of whether we’ll ever see anything like it again.
The simple answer is almost certainly no. The regular cadence of Moore’s Law as it pertains to integrated circuit engineering is over, and there’s no obvious candidate to replace it. The good news, however, is that there is no shortage of candidate technologies that could provide anything from incremental improvements to revolutions so radical they could render the very notion of increasing compute power redundant. The future of computing will no longer be a model of serene progress; it will very likely be measured in paralyzing fits and dramatic starts.
THE SMARTPHONE in your pocket is more powerful than room-filling mainframe computers of yore. Not just a little more powerful, but many orders of magnitude more powerful. That, in the proverbial nutshell, is Moore’s Law, and its implications are as incredible in raw technical terms as they have been transformational for human existence. Almost every aspect of modern life is dependent, ultimately, on computing.
But Moore’s Law is history, and the future of computing, if it is to advance, must rely on some other paradigm. Want specifics? The Apple iPhone X is capable of around 200,000 times more floating point operations per second than the CDC 6600 of 1964, considered by most to be the world’s first supercomputer, and roughly the size of a pickup truck. Admittedly, that’s not exactly a like-for-like comparison, and it would attract numerous qualifications under close inspection, yet it gives a fair sense of the monumental and exponential implications of Moore’s Law, and the astonishing progress in computing power over the last 50 years.
Moore’s Law, of course, is the observation that transistor densities in integrated circuits double every two years. Put another way, it says computer chips either double in complexity or halve in cost—or some mix of the two—every couple of years. Or rather, they did from around 1975, when Gordon Moore (co-founder of Intel) adjusted his original observational time frame down from doubling every year to two years, until roughly five years ago, when it became apparent that development had slowed.
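That doubling cadence compounds faster than intuition suggests. A quick back-of-envelope sketch makes the point; the starting transistor count (roughly that of Intel’s 8080 from the mid-1970s) and the 40-year span are illustrative figures, not claims from this article:

```python
# Back-of-envelope: what "doubling every two years" compounds to.
# Starting figure (~6,000 transistors, roughly an Intel 8080) is illustrative.
start_transistors = 6_000
years = 40            # approximate span over which the classic cadence held
doublings = years // 2

final = start_transistors * 2 ** doublings
print(f"{doublings} doublings -> {final:,} transistors")
```

Twenty doublings multiply the count by about a million, which is how a few thousand transistors in the 1970s became billions on a modern die.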
In 2015, Intel confirmed that the cadence of Moore’s Law, as far as it was concerned, had slowed to 2.5 years with the transition from 22nm silicon chip production to 14nm. Fast-forward to the second half of 2018, and it’s clear that Intel’s step from 14nm to 10nm will require even longer—at least four years, probably getting on for five. The other
major players in chip production, including Taiwanese giant TSMC and South Korea’s Samsung, have all suffered their own delays. The upshot is a consensus that the Moore’s Law that held for around four decades is no more.
Not that this is a surprise. The very nature of conventional integrated circuits guarantees the cadence of ever-shrinking transistors can’t go on forever. The approach was sure to bang up against the limitations of matter eventually. Once you’re making transistors from a handful of atoms, you’ve nowhere left to go.
The demise of Moore’s Law isn’t the only challenge facing conventional computing based on integrated circuits. As a happy corollary to increasing chip complexity, the shrinking proportions of transistors have been accompanied by an increase in the rate at which they can be switched on and off, and likewise a reduction in per-transistor power consumption. Combine the two, and you have both increased operating frequencies and reduced energy consumption per unit of compute power. That’s pretty much a free lunch in computing terms, and it’s been just as important for progressing overall performance as raw transistor density.
Unfortunately, improvements in operating frequency and energy efficiency have been even shorter lived than Moore’s Law. In 2004, Intel hit 3.8GHz with its Pentium 4, and talk was of 10GHz computing. Nearly 15 years later, the sustainable clock speeds of its processors have improved by scarcely 1GHz. More recently, current leakage has become an increasing problem as transistors have grown smaller. Small enough, in fact, to find themselves hostage to quantum-level physical phenomena, such as quantum tunneling, which allows individual electrons to effectively leap across insulation barriers and thus “leak” energy and generate heat. The impact of that alongside the slowing of Moore’s Law is profound. Where once it could be assumed that computer chips would get faster, more efficient, and cheaper, all at the same time, it’s no longer possible to be confident about substantial gains by any of those metrics.
But is it that big a problem? After all, when it comes to desktop computing, many argue today’s CPUs are already powerful enough. The extent to which PC processor performance has stagnated in recent years has also had at least as much to do with a lack of competition as with the wheels coming off Moore’s Law. Observe the impact AMD’s Ryzen CPUs had on Intel. The latter stuck with four cores for mainstream desktop chips for around a decade, but a year after Ryzen appeared, Intel is launching eight-core models for its mainstream socket, has 18-core enthusiast chips already, and plans to up that to 28 cores in the near future.
Of course, all that involves a rather CPU-centric view of the universe. Other areas of computing remain predicated on something at least close to Moore’s Law rumbling on. True desktop computing power in a pocketable device isn’t going to happen without substantial further progress, for instance. Ditto photorealistic computer graphics rendered in real time. However, it’s the technologies that promise the most wide-ranging impact on human life, including AI, robotics, machine learning, and biotechnology, that have most to lose. The scope and range of those endeavors will be curtailed if advances in compute power stall with the demise of Moore’s Law.
But what can replace Moore’s Law and drive computing power forward? The good news is that numerous candidates exist. Indeed, some industry observers think the death of Moore’s Law is long overdue. In recent decades, the assumption that ever more conventional compute power will become available has arguably made
software developers lazy, and stymied research into alternative computing hardware paradigms. Why put in effort and money, when cheaper, faster computer chips are sure to solve the problem? With Moore’s Law no longer, the impetus to develop alternatives is far more compelling.
The bad news: There’s probably no single technology, idea, or approach that will directly replace Moore’s Law and the conventional integrated circuit’s incredible capacity for self-improvement. If one thing seems fairly certain, it’s that the future progress of computing will be far less predictable, far less regular. Instead, progress is likely to come in sharp leaps after uneventful lulls.
However, if there is a single technology that offers the biggest theoretical upside, it’s quantum computing. Its raw potential is mind-boggling. Long story short, it’s possible to conceptualize a single quantum computer of remarkable simplicity and efficiency that’s capable of not just matching the combined number-crunching muscle of all existing computers, but also executing essentially as many calculations at once as is practically useful. As Andy Rubin, of Android OS fame, said, “If you have computing that is as powerful as this could be, you might only need one [computer].”
What’s more, quantum computing isn’t a new idea, and its core premise is well established and understood. Yet it remains not just thoroughly exotic, but also controversial. The basics go something like this: Conventional computing operates in the binary realms of zeros and ones, aka bits. A transistor, the basic component in a classical computer, is thus either off or on, and nothing in between. Not so with quantum computing. Thanks to a property known as superposition, which prevails when dealing with very tiny atomic and sub-atomic particles, such as individual electrons, it’s possible for a quantum computing bit to be not just on and off at the same time, but also a huge array of what you might call hybrid superpositions in between on and off. This is a qubit, and it’s the basic building block of quantum computing.
But the qubit’s incredible, peculiar, and non-binary capacity isn’t the whole story. It’s the way qubits can interact with each other, thanks to another quantum property, that enables the real computational fireworks. Welcome to the weird and wonderful world of quantum entanglement. The concept is tricky for even highly qualified physicists to truly grasp, let alone us mere mortals, but it involves the notion that quantum mechanical properties, such as “spin,” of two or more particles can be inextricably linked, even if separated by great distances. Change the spin of one particle, and others instantly react, regardless of the distance between them.
So, the trick to achieving really powerful quantum computing is to entangle multiple qubits. Quantum-mechanically link, or entangle, two qubits, and you can perform four calculations simultaneously. Link three qubits, and two to the power of three calculations (a total of eight) are possible. Link four, and you can perform 16 calculations simultaneously. Keep on going, and when you hit 300 entangled qubits, you can perform more calculations in parallel than there are atoms in the known universe. That’s a lot of calculations: enough, in theory, to solve pretty much any
computational problem you can imagine. In practice? Not so much. At least, not yet. Earlier this year, physicists at the University of Science and Technology of China set a new record by achieving quantum entanglement with 18 photon-based qubits. But progress toward that number has been painstakingly slow, and there are no signs of it taking off any time soon.
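The scaling described above is simple to check for yourself: n entangled qubits span two to the power of n states. A minimal sketch, using a common order-of-magnitude estimate of 10^80 atoms in the observable universe (an illustrative figure, not one from this article):

```python
# n entangled qubits span 2**n simultaneous states.
atoms_in_universe = 10 ** 80  # common order-of-magnitude estimate, illustrative

for n in (2, 3, 4, 300):
    states = 2 ** n
    print(f"{n} qubits -> 2**{n} = {states} states")

# 300 qubits already exceed the estimated atom count of the observable universe
assert 2 ** 300 > atoms_in_universe
```

The exponent is doing all the work here: each additional qubit doubles the state space, which is why the numbers become astronomical long before you reach 300.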
The problem is that the superpositions of qubits are very fragile. Tiny amounts of heat or magnetic interference can cause them, in effect, to collapse. Building a quantum computer is therefore far from easy. What begins with a relatively simple network of notional qubits quickly turns into a complex machine utilizing liquid-helium cooling down to a fraction of a degree above absolute zero, surrounded by heavy-duty magnetic shielding.
What’s more, while commercial computers that exploit these quantum-level effects are available, they’re not only limited physically by the need for intense cooling and shielding, but also limited in computational scope. The space inside a D-Wave 2X, one of the most commercially successful quantum computers available, is mostly given over to a liquid-helium refrigeration system capable of cooling its qubits down to just a fraction of a degree above absolute zero, while much of the remaining machine is made up of magnetic shielding that protects the qubits from fluctuations in Earth’s magnetic field.
Despite all that technology and innovation, the D-Wave 2X’s computational prowess is restricted to finding the lowest value of complicated functions. Granted, such calculations can be very useful in engineering, which is why Google, NASA, and Lockheed Martin are all reportedly D-Wave clients, but such a machine hardly makes for a promising candidate technology for future pocket computers. Indeed, for a while, there was some controversy over whether D-Wave’s computers really were quantum. That has now been firmly established in the affirmative, but debate remains over whether D-Wave’s technology is actually any faster than a conventional computer, even for the narrow computation of which it is capable. Some even view the whole field of quantum computing as an irrelevance, equivalent to the alchemist’s quest to turn base metal into gold. Like quantum computing, that is indeed possible with today’s technology. But not to a degree that it’s actually useful.
So, in the short to medium term, quantum computing isn’t going to step in where Moore’s Law left off. Instead, what progress there is will come from a complex array of technologies, including not only quantum computing, but also biological analogs, a shift to cloud computing, more efficient circuit design, and dedicated chips built to do one thing really well (see boxouts). In the meantime, if there’s a take-home lesson, it’s that Moore’s Law has run its course, and the next 50 years of computing will be very different from the last 50. Whether they’ll be better or worse, only time will tell.
Transistors built from carbon nanotubes could revolutionize computing efficiency.
No more Moore: Intel’s 10nm node is heavily delayed.
D-Wave’s 2X is the real, quantum-computing deal. But is it actually faster than conventional computers?
Today’s smartphones offer orders of magnitude more compute power than the first room-filling supercomputers.
AI, robots, or self-driving cars—the future depends on increasing compute power.
14nm was Intel’s first really problematic production node.
Try rendering Crysis on a CPU and you’ll soon understand the benefits of single-purpose chips.