LIFE AFTER MOORE’S LAW

It’s had a very good run, but Moore’s Law is done, dusted, and dead. Jeremy Laird investigates the future of computing in the post-exponential era


Could 10nm be the end of the line for Intel?

FIFTY YEARS IS A LONG TIME for any prediction to hold true. It’s an aeon when it comes to predicting the future of cutting-edge technology. Yet that’s pretty much how long Moore’s Law held together as a predictor of progress in computing power. Now just about everybody agrees that Moore’s Law is done. Computer chips are no longer doubling in complexity every two years. Intel’s most recent roadmap update, to take just one example, pushed volume shipments of its next-gen 10nm processors out to 2019. That’s almost five years after Intel began pumping out 14nm chips in significant volumes. Likewise, Intel’s 14nm node came three years after 22nm. Welcome to the post-Moore’s Law era, where faster computing for less money is no longer an automatic assumption.

That’s a radical change that could threaten progress well beyond conventional computing. Advances in everything from AI and self-driving cars to medicine, biotechnology, and engineering are all predicated, at least in part, on the assumption that available computing power increases not only reliably but exponentially. It’s the latter implication that has been most revolutionary. The exponential increase in computing power over nearly 50 years was unlike anything the world had seen before. And it raises the question of whether we’ll ever see anything like it again.

The simple answer is almost certainly no. The regular cadence of Moore’s Law as it pertains to integrated circuit engineering is over, and there’s no single obvious successor. The good news, however, is that there is no shortage of candidate technologies that could provide anything from incremental improvements to revolutions so radical they could render the very notion of increasing compute power redundant. The future of computing will no longer be a model of serene progress; it will very likely be measured in paralyzing fits and dramatic starts.

THE SMARTPHONE in your pocket is more powerful than the room-filling mainframe computers of yore. Not just a little more powerful, but many orders of magnitude more powerful. That, in the proverbial nutshell, is Moore’s Law, and its implications are as incredible in raw technical terms as they have been transformational for human existence. Almost every aspect of modern life is dependent, ultimately, on computing.

But Moore’s Law is history, and the future of computing, if it is to advance, must rely on some other paradigm. Want specifics? The Apple iPhone X is capable of around 200,000 times more floating-point operations per second than the CDC 6600 of 1964, considered by most to be the world’s first supercomputer, and roughly the size of a pickup truck. Admittedly, that’s not exactly a like-for-like comparison, and it would attract numerous qualifications under close inspection, yet it gives an accurate sense of the monumental and exponential implications of Moore’s Law, and the astonishing progress in computing power over the last 50 years.

Moore’s Law, of course, is the observation that transistor densities in integrated circuits double every two years. Put another way, it says computer chips either double in complexity or halve in cost—or some mix of the two—every couple of years. Or rather, they did from around 1975, when Gordon Moore (co-founder of Intel) revised his original observation from a doubling every year to a doubling every two years, until roughly five years ago, when it became apparent that development had slowed.
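
To see how quickly that doubling compounds, here’s a minimal Python sketch of the arithmetic. The 1975 starting figure is a purely hypothetical round number chosen for illustration, not any real chip’s transistor count.

```python
# Minimal sketch of the two-year doubling cadence described above.
# The starting count and years are illustrative assumptions, not product data.

def projected_transistors(start_count: float, start_year: int, year: int,
                          doubling_period: float = 2.0) -> float:
    """Transistor count implied by doubling every `doubling_period` years."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# Hypothetical chip with 10,000 transistors in 1975:
for year in (1975, 1985, 1995, 2005, 2015):
    print(year, f"{projected_transistors(10_000, 1975, year):,.0f}")

# Forty years of doubling every two years is 2**20, roughly a
# million-fold increase over the starting figure.
```

The point isn’t the exact numbers; it’s that a fixed doubling period turns modest chips into billion-transistor designs within a few decades.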

In 2015, Intel confirmed that the cadence of Moore’s Law, as far as it was concerned, had slowed to 2.5 years with the transition from 22nm silicon chip production to 14nm. Fast-forward to the second half of 2018, and it’s clear that Intel’s step from 14nm to 10nm will require even longer—at least four years, probably getting on for five. The other major players in chip production, including Taiwanese giant TSMC and South Korea’s Samsung, have all suffered their own delays. The upshot is a consensus that the Moore’s Law that pertained for around four decades is no more.

Not that this is a surprise. The very nature of conventional integrated circuits guarantees the cadence of ever-shrinking transistors can’t go on forever. The approach was sure to bang up against the limitations of matter eventually. Once you’re making transistors from a handful of atoms, you’ve nowhere left to go.

The demise of Moore’s Law isn’t the only challenge facing conventional computing based on integrated circuits. As a happy corollary to increasing chip complexity, the shrinking proportions of transistors have been accompanied by an increase in the rate at which they can be switched on and off, and likewise a reduction in per-transistor power consumption. Combine the two, and you have both increased operating frequencies and reduced energy consumption per unit of compute power. That’s pretty much a free lunch in computing terms, and it’s been just as important for progressing overall performance as raw transistor density.

Unfortunately, improvements in operating frequency and energy efficiency have been even shorter-lived than Moore’s Law. In 2004, Intel hit 3.8GHz with its Pentium 4, and talk was of 10GHz computing. Nearly 15 years later, the sustainable clock speeds of its processors have improved by scarcely 1GHz. More recently, current leakage has become an increasing problem as transistors have grown smaller. Small enough, in fact, to find themselves hostage to quantum-level physical phenomena, such as quantum tunneling, which allows individual electrons to effectively leap across insulation barriers and thus “leak” energy and generate heat. The impact of that, alongside the slowing of Moore’s Law, is profound. Where once it could be assumed that computer chips would get faster, more efficient, and cheaper, all at the same time, it’s no longer possible to be confident of substantial gains on any of those metrics.

But is it that big a problem? After all, when it comes to desktop computing, many argue today’s CPUs are already powerful enough. The extent to which PC processor performance has stagnated in recent years has also had at least as much to do with a lack of competition as with the wheels coming off Moore’s Law. Observe the impact AMD’s Ryzen CPUs had on Intel. The latter stuck with four cores for mainstream desktop chips for around a decade, but a year after Ryzen appeared, Intel is launching eight-core models for its mainstream socket, already has 18-core enthusiast chips, and plans to up that to 28 cores in the near future.

Of course, all that involves a rather CPU-centric view of the universe. Other areas of computing remain predicated on, and reliant upon, something at least close to Moore’s Law rumbling on. True desktop computing power in a pocketable device isn’t going to happen without substantial further progress, for instance. Ditto photorealistic computer graphics rendered in real time. However, it’s the technologies that promise the most wide-ranging impact on human life, including AI, robotics, machine learning, and biotechnology, that have the most to lose from the demise of Moore’s Law. The scope and range of those endeavors will be curtailed if advances in compute power stall.

But what can replace Moore’s Law and drive computing power forward? The good news is that numerous candidates exist. Indeed, some industry observers think the death of Moore’s Law is long overdue. In recent decades, the assumption that ever more conventional compute power will become available has arguably made software developers lazy, and stymied research into alternative computing hardware paradigms. Why put in effort and money when cheaper, faster computer chips are sure to solve the problem? With Moore’s Law gone, the impetus to develop alternatives is far more compelling.

The bad news: There’s probably no single technology, idea, or approach that will directly replace Moore’s Law and the conventional integrated circuit’s incredible capacity for self-improvement. If one thing seems fairly certain, it’s that the future progress of computing will be far less predictable and far less regular. Instead, progress is likely to come in sharp leaps after uneventful lulls.

However, if there is a single technology that offers the biggest theoretical upside, it’s quantum computing. Its raw potential is mind-boggling. Long story short, it’s possible to conceptualize a single quantum computer of remarkable simplicity and efficiency that’s capable of not just matching the combined number-crunching muscle of all existing computers, but also executing essentially as many calculations at once as is practically useful. As Andy Rubin, of Android OS fame, said, “If you have computing that is as powerful as this could be, you might only need one [computer].”

What’s more, quantum computing isn’t a new idea, and its core premise is well established and understood. Yet it remains not just thoroughly exotic, but also controversial. The basics go something like this: Conventional computing operates in the binary realm of zeros and ones, aka bits. A transistor, the basic component in a classical computer, is thus either off or on, and nothing in between. Not so with quantum computing. Thanks to a property known as superposition, which prevails when dealing with very tiny atomic and subatomic particles, such as individual electrons, it’s possible for a quantum computing bit to be not just on and off at the same time, but also in a huge array of what you might call hybrid superpositions between on and off. This is a qubit, and it’s the basic building block of quantum computing.
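
To make the contrast with an ordinary bit concrete, here’s a toy Python sketch, not code for any real quantum hardware or programming framework, that models a single qubit as a pair of amplitudes. It only resolves to a conventional 0 or 1 when it’s measured.

```python
# Toy model of a single qubit: two complex amplitudes, one for "off" (0) and
# one for "on" (1), whose squared magnitudes sum to one. Purely illustrative.
import math
import random

def measure(alpha: complex, beta: complex) -> int:
    """Collapse the superposition: 0 with probability |alpha|^2, else 1."""
    p0 = abs(alpha) ** 2
    assert math.isclose(p0 + abs(beta) ** 2, 1.0), "amplitudes must be normalized"
    return 0 if random.random() < p0 else 1

# An equal superposition: neither fully off nor fully on until measured.
alpha = beta = 1 / math.sqrt(2)
counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure(alpha, beta)] += 1
print(counts)  # roughly 5,000 zeros and 5,000 ones
```

The toy only captures a single, isolated qubit; the entanglement described next is what turns many qubits into a computational resource.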

But the qubit’s incredible, peculiar, and non-binary capacity isn’t the whole story. It’s the way qubits can interact with each other, thanks to another quantum property, that enables the real computational fireworks. Welcome to the weird and wonderful world of quantum entanglement. The concept is tricky for even highly qualified physicists to truly grasp, let alone us mere mortals, but it involves the notion that quantum mechanical properties, such as “spin,” of two or more particles can be inextricably linked, even if separated by great distances. Change the spin of one particle, and others instantly react, regardless of the distance between them.

So, the trick to achieving really powerful quantum computing is to entangle multiple qubits. Quantum-mechanically link, or entangle, two qubits, and you can perform four calculations simultaneously. Link three qubits, and two to the power of three calculations—a total of eight—are possible. Link four, and you can perform 16 calculations simultaneously. Keep on going, and when you hit 300 entangled qubits, you can perform more calculations in parallel than there are atoms in the known universe. That’s a lot of calculations—enough, in theory, to solve pretty much any computational problem you can imagine. In practice? Not so much. At least, not yet. Earlier this year, physicists at the University of Science and Technology of China set a new record by achieving quantum entanglement with 18 photon-based qubits. But progress toward that number has been painstakingly slow, and there are no signs of it taking off any time soon.
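
The scaling behind those figures is easy to check. A few lines of Python, assuming the commonly cited ballpark of around 10^80 atoms in the observable universe:

```python
# n entangled qubits can represent 2**n states at once, as described above.
for n in (2, 3, 4, 300):
    print(f"{n} qubits -> 2**{n} = {2 ** n} simultaneous states")

# Rough, commonly cited estimate of atoms in the observable universe.
ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80
print(2 ** 300 > ATOMS_IN_OBSERVABLE_UNIVERSE)  # True: 2**300 is about 2 x 10**90
```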

The problem is that the superpositions of qubits are very fragile. Tiny amounts of heat or magnetic interference can cause them, in effect, to collapse. Building a quantum computer is therefore far from easy. What begins as a relatively simple network of notional qubits quickly turns into a complex machine utilizing liquid-helium cooling down to a fraction of a degree above absolute zero, surrounded by heavy-duty magnetic shielding.

What’s more, while commercial computers that exploit these quantum-level effects are available, they’re not only limited physically by the need for intense cooling and shielding, but also limited in computational scope. The space inside a D-Wave 2X, one of the most commercially successful quantum computers available, is mostly given over to a liquid-helium refrigeration system capable of cooling its qubits down to just a fraction of a degree above absolute zero, while much of the remaining machine is made up of magnetic shielding that protects the qubits from fluctuations in Earth’s magnetic field.

Despite all that technology and innovation, the D-Wave 2X’s computational prowess is restricted to finding the lowest value of complicated functions. Granted, such calculations can be very useful in engineering, which is why Google, NASA, and Lockheed Martin are all reportedly D-Wave clients, but such a machine hardly makes for a promising candidate technology for future pocket computers. Indeed, for a while, there was some controversy over whether D-Wave’s computers really were quantum. That has now been firmly established in the affirmative, but debate remains over whether D-Wave’s technology is actually any faster than a conventional computer, even for the narrow computations of which it is capable. Some even view the whole field of quantum computing as an irrelevance, equivalent to the alchemist’s quest to turn base metal into gold. Like quantum computing, that is indeed possible with today’s technology. But not to a degree that’s actually useful.

So, in the short to medium term, quantum computing isn’t going to step in where Moore’s Law left off. Instead, what progress there is will come from a complex array of technologies, including not only quantum computing, but also biological analogs, a shift to cloud computing, more efficient circuit design, and dedicated chips built to do one thing really well (see boxouts). In the meantime, if there’s a take-home lesson, it’s that Moore’s Law has run its course, and the next 50 years of computing will be very different from the last 50. Whether they’ll be better or worse, only time will tell.

Transistors built from carbon nanotubes could revolutionize computing efficiency.
No more Moore: Intel’s 10nm node is heavily delayed.

D-Wave’s 2X is the real quantum-computing deal. But is it actually faster than conventional computers?

Today’s smartphones offer orders of magnitude more compute power than the first room-filling supercomputers.

AI, robots, or self-driving cars—the future depends on increasing compute power.

14nm was Intel’s first really problematic production node.
Try rendering Crysis on a CPU and you’ll soon understand the benefits of single-purpose chips.
