COMPUTER & SUPERCOMPUTER: THE FUTURE OF THE PC

Will we still have personal computers in the future, and if so what forms might they take? Ian Evenden investigates.

Twenty years ago, when PCs were universally beige, and APC’s staff had a lot more hair, we probably would have nodded in agreement if someone had told us that the desktop’s days were numbered. We’d used laptops, we’d heard about Moore’s law, we knew things didn’t get much better than Windows 95. It stood to reason that portable computers would take over, and that 17-inch CRT we could barely carry up the stairs would soon be on its way to the dump. That’s partly correct, as we thank any passing deity for flatscreens, but the desktop PC is still with us, and is really the only way to go if you want no-compromise computing power, be that for gaming, 3D content creation or 4K video editing.

Laptops are fine, and our phones and tablets have come a phenomenally long way, but there’s always something that holds them back, be it insufficient cooling, limited RAM, the drawbacks of mobile processors or, in the case of touchscreen devices, an unintuitive user interface.

So the desktop endures, looking much the same as it did in the days of the 80286, only rotated 90° into tower cases, and with more fans and lights, and a whopping great water block. Will it still be purring away on our desks in another 20 years, though?

One future route is already happening with the advent of cloud computing. Our PCs could become dumb terminals, akin to the thin clients of old, tapping into the huge processing and storage potential of data centres using high-speed wireless connections. They could all become touchscreen devices, small enough to tuck away in our pockets, then unfolding to gigantic size like some Tony Stark creation. They could be implanted into us, like something out of Iain M Banks’s Culture novels or an episode of Black Mirror, connecting us to a central hub, and with a display like that of a smartphone constantly projected into our vision.

Or maybe they’ll just stay the same, doubling in power every two years, and needing a new GPU even sooner. We went looking for answers.

UP CLOSE AND PERSONAL

The term ‘PC’ has evolved from meaning any type of personal computer to specifically those involving x86 processors and running Windows. Elsewhere, x86 + Unix = either Mac or Linux, ARM + Unix = iOS or Android, and ARM + Windows = discontinued. But those ARM chips are what’s got Samsung poised to overtake Intel as the world’s biggest processor manufacturer, and isn’t the smartphone in your pocket as much of a personal computer as the one on your desk?

Many developments in future PC components are extensions of things we’re already familiar with: greater efficiency, using less power, and increased parallelism. A Ryzen 1800X may have eight cores, but a GTX 1080 Ti has 3,584. Yet while AMD’s silicon is general-purpose, Nvidia’s is specialised for graphics computations. Bringing this kind of parallel computing power to the mainstream, via something such as Nvidia’s CUDA, Microsoft’s DirectCompute or other GPGPU programming languages, is a step in the direction of what’s predicted for the immediate future of our PCs. Then there are 3D chips, such as those seen in 3D XPoint or the vertical stacks of AMD’s HBM — these are about more efficient use of space, as well as, in the case of 3D XPoint, making use of a new approach to operating a memory chip. These technologies are here now, even if they’re not widely available. Future technologies will take these innovations, and turn up the volume.
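
To make the serial-versus-parallel distinction concrete, here’s a minimal Python sketch, our own illustration rather than real CUDA or DirectCompute code, using NumPy’s vectorised arithmetic as a stand-in for the thousands of cores a GPU would bring to bear:

```python
# A toy sketch of serial vs data-parallel computation. NumPy stands in
# for the thousands of GPU cores a real CUDA or DirectCompute program
# would target; the numbers are illustrative only.
import numpy as np

data = np.random.rand(3_584)  # one element per CUDA core of a GTX 1080 Ti

# CPU-style: one general-purpose core works through the elements in turn.
serial = [x * 2.0 + 1.0 for x in data]

# GPU-style: the same arithmetic expressed as one operation over the whole
# array, which a GPU would split across its cores simultaneously.
parallel = data * 2.0 + 1.0

assert np.allclose(serial, parallel)
```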

Looking ahead, by 2035, we should have cracked thermodynamically reversible computing — that is, logical operations that can be run backward from their result, because nothing is destroyed in their operation, and they don’t increase entropy. This sounds a bit crazy — and is really only the tip of a whole iceberg of crazy — so we spoke to Professor Robin Hanson of George Mason University in Virginia, who is also a research associate at the Future of Humanity Institute of Oxford University in the UK, in an attempt to understand.

“When we run an engine, entropy is increasing,” he says, prompting an immediate trip to Google. Entropy, it seems, is the amount of energy in a system that’s wasted, unavailable to do work. The professor continues: “But if we want to reduce how much entropy increases, the slower we make the engine go, the closer we can come to what’s called a reversible process, where you could have made it go backward and got back to the original state. That’s also true for computers: Almost all logic gate operations take two bits in and send one bit out, therefore implicitly erasing one bit, and increasing entropy. Typically, we’re erasing far more than one bit per gate operation, but that number has been declining over time.”

This is where the year 2035 comes in — if you plot the number of bits erased per transistor operation on a graph, and continue the line into the future, it’s a mere 18 years until it reaches one. “At that point,” Prof Hanson continues, “we could keep reducing it, but we’ll have to switch to reversible computing, where we have two bits in and two bits out, with nothing erased. Once we switch to reversible gates and reversible computers, then we can continue to reduce the amount of entropy per gate operation, but it will be because we run the gate more slowly. If you take the same gate, but take twice the time to do the gate operation, then it erases half as many bits. So when hardware gets cheaper, and energy gets cheaper, you will spend half of it on having more hardware, and half of it on running things more slowly.”
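
The difference between an ordinary gate and a reversible one is easy to show in code. Here’s a minimal Python sketch, our own textbook illustration rather than anything from Hanson’s research, contrasting AND, which erases a bit, with the reversible CNOT gate, which doesn’t:

```python
# Irreversible vs reversible logic gates, in plain Python.
from itertools import product

def and_gate(a, b):
    # Two bits in, one bit out: inputs (0, 1) and (1, 0) both map to 0,
    # so a bit of information is erased and can't be recovered.
    return a & b

def cnot(a, b):
    # Two bits in, two bits out: b is flipped when a is 1. The gate is
    # its own inverse, so applying it twice restores the original inputs.
    return a, a ^ b

# AND collapses four possible inputs onto two outputs: information lost.
assert len({and_gate(a, b) for a, b in product((0, 1), repeat=2)}) == 2

# CNOT maps the four inputs onto four distinct outputs: nothing erased.
# (The three-bit Toffoli gate works the same way, and is enough to build
# any logic circuit reversibly.)
assert len({cnot(a, b) for a, b in product((0, 1), repeat=2)}) == 4
```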

So what we are looking at is lots of slow, parallel processors, rather than the small groups of screamingly fast cores we see today. But that’s not all: It will change the upgrade cycles, and what we budget for. “At that point, the rate at which technology improves, or hardware costs go down, is half as fast as it has been so far,” says Prof Hanson. “Up until recently, energy hasn’t really been the main cost — that’s been the hardware itself.

“And so, every time the hardware is capable of running twice as fast, the computers are twice as fast. But with reversible computing, every time the hardware gets four times as fast, instead of running the computers four times faster, we will instead have twice as many of them running at half the speed. This means Moore’s law will slow down by a factor of two.”
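
Hanson’s factor-of-two claim is easy to sanity-check. Here is a back-of-the-envelope Python sketch using our own illustrative numbers, not figures from his research: halving a gate’s speed halves the bits it erases, so half of each hardware gain is spent on slowdown and only the square root survives as extra throughput.

```python
import math

# Classic Moore's law cadence: capability doubles every two years.
doubling_time_years = 2.0
years = 20

# Raw hardware gain over 20 years at the classic cadence: ~1,024x.
raw_gain = 2 ** (years / doubling_time_years)

# Under reversible computing, half the gain buys more processors and
# half buys slower, lower-entropy gates, so only sqrt(gain) shows up
# as extra computation: ~32x instead of ~1,024x.
reversible_gain = math.sqrt(raw_gain)

# Equivalently, the exponent halves: the doubling time stretches to
# four years, i.e. Moore's law slows by a factor of two.
effective_doubling_time = doubling_time_years * 2
```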

Moore’s law, which predicts a doubling of the number of transistors on a chip roughly every two years, is already starting to break down in areas such as the speed of gate operations, though the related trend of falling energy use per operation holds strong. Many scientists are looking at ways to improve the rate of speed increase, and decrease the energy usage, through the use of new materials in chip design, and by changing the design of the transistors themselves. IBM researchers in the US are attempting to press nanotechnology, such as carbon nanotubes, into service, while in Switzerland, Big Blue is testing compounds of other naturally occurring elements, known as III-V materials.

BACK TO SCHOOL

Picture the periodic table thumbtacked to the wall of your high-school science classroom. Silicon sits in the fourth column, with columns III and V to either side. Many elements from both sides, when combined, form very stable chemical compounds that are semiconductors, like silicon: in their pure state they barely conduct electricity, but they can be ‘doped’ with another element to allow current to flow, and in a controllable way. A lot of this is still exploratory science, but you may already own a piece of technology based on this idea — the laser diodes inside Blu-ray drives are made from gallium nitride — and hopefully more should come to fruition before 2035.

“The speed at which you can turn on a MOSFET is inherently limited,” says Dr Kirsten Moselund from IBM Research. This is the breakdown in Moore’s law mentioned above — the MOSFET, or metal oxide semiconductor field effect transistor, is the type of transistor most commonly used in silicon chips, and its field-effect principle was first patented back in 1925. “It doesn’t matter what you make it out of, it’s a physical limitation,” she continues. “As we scale down our devices, we also want to scale down voltages, but it’s very hard to scale below 60mV [per tenfold change in current] — you start to get into trouble. There are lots of people looking into this, because beyond all the technical difficulties of scaling, it’s something physical. And scaling down the voltages is probably the most important parameter for energy efficiency.”
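
That 60mV figure is the room-temperature subthreshold limit of a conventional MOSFET (our gloss, though the number is standard device physics): the gate voltage must swing at least this much for every tenfold change in channel current, a floor set by thermodynamics rather than by manufacturing.

```latex
% Subthreshold swing limit of a MOSFET at room temperature (T = 300 K):
% the minimum gate-voltage change needed for a tenfold current change.
\[
  S \;=\; \frac{k_B T}{q}\,\ln 10
    \;\approx\; 25.9\,\mathrm{mV} \times 2.30
    \;\approx\; 60\,\mathrm{mV\ per\ decade}
\]
```

Because a Tunnel FET injects carriers by quantum tunneling rather than by thermal activation, it isn’t bound by this limit, which is exactly the loophole Moselund’s team is chasing.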

The search for a way around this has led Dr Moselund and her team to Tunnel FETs, a type of transistor that exploits the ability of electrons to tunnel through a barrier if that barrier is thin enough. This is quantum mechanics in action, making use of a strange property of electrons — that they behave as waves as well as particles.
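
How thin does the barrier have to be? The standard quantum-mechanical estimate, our addition rather than a figure from IBM, says the tunneling probability falls off exponentially with barrier width:

```latex
% WKB estimate of the probability that an electron of energy E tunnels
% through a barrier of height V_0 > E and width d.
\[
  T \;\approx\; e^{-2 \kappa d},
  \qquad
  \kappa \;=\; \frac{\sqrt{2 m \,(V_0 - E)}}{\hbar}
\]
```

That exponential dependence on d is why tunneling only becomes useful, or troublesome, once device features shrink to a few nanometres.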

“The Tunnel FET looks like a MOSFET, and you can build a lot of the same circuitry with them,” says Dr Moselund. “But as it operates on a different principle, in theory you don’t have the same limitations.” Indeed, a Tunnel FET can operate on only about half the power of a MOSFET — in simulations, at least. These transistors are made from, you guessed it, those III-V materials — indium arsenide and gallium antimonide are common choices. “III-Vs have a lot of really nice benefits,” Moselund continues. “There are lots of them, and they have different properties, but what’s generally good about them is many have very high electron mobilities [how quickly an electron, and therefore a current, can move through them], so you can trade speed for lower power, and many are optically active, so you can make lasers out of them, which you can’t do with silicon.”

Lasers are obviously great, but modern electronic devices have another problem — they leak. This is why they get hot and drain their batteries when you’re not using them. The gates in the transistors don’t shut completely, allowing small amounts of power to trickle through like a leaky faucet. Tunnel FETs and III-V materials have a greater ability to turn themselves off all the way, decreasing leakage and power loss.

The new materials don’t completely replace existing ones; they’re integrated into the silicon in a way that boosts its electron mobility, to increase performance at 7nm and smaller, meaning existing manufacturing processes can still be used. IBM, GlobalFoundries, and Samsung debuted a silicon wafer etched with a 5nm process earlier this year. The III-V materials are grown as crystals on a silicon substrate using a process known as ‘epitaxy’, which deposits material layer by layer, forming structures such as nanowires and junctions, and even stacking them on top of one another. You can also mix up the recipe — what Moselund calls ‘tuning’ — blending, say, 50% arsenic with 20% gallium and 30% indium, with a specific use in mind.
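
That recipe can be read as lattice-site fractions. A quick Python sketch, our own arithmetic rather than anything from IBM: III-V crystals pair one group-III atom with each group-V atom, so 50% arsenic fills all the group-V sites while indium and gallium share the group-III sites.

```python
# The 'tuning' recipe quoted above, as atomic fractions.
recipe = {"As": 0.50, "Ga": 0.20, "In": 0.30}

# Arsenic (group V) occupies half the lattice sites; indium and gallium
# split the other half in a 30:20 ratio.
indium_share = recipe["In"] / (recipe["In"] + recipe["Ga"])    # 0.6
gallium_share = 1 - indium_share                               # 0.4

# In the usual alloy notation, that blend is In(0.6)Ga(0.4)As.
print(f"In{indium_share:.1f}Ga{gallium_share:.1f}As")
```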

It’s not just the structure of microchips that will change; the way computers are put together and treated is also in for a revolution. Professor Hanson believes that computers will one day be able to simulate, and go far beyond the capabilities of, the human brain. He calls these emulations ‘Ems’, and his concept is not a little thought-provoking: When computing is advanced enough, it will out-compete humanity.

“A natural result is that humans will have to retire,” says Hanson. “They’re just not competitive. They could still work, they just can’t earn much money that way. Collectively, humans get very rich very fast; that is, they own almost all the capital in this world, and if the economy doubles almost every month, human wealth doubles every month. For individual humans who don’t own any wealth, that zero keeps doubling to zero, and those people are at risk of starving unless they acquire some insurance, assets or sharing arrangements. But collectively humans get very rich very fast.”

EM POWER

“The Ems, however, do not get rich,” he says. “Their population quickly expands and wages stay at subsistence levels, but they are mostly OK with that — subsistence wages have been the usual case in human history. They’re earning enough to pay the power bill, the cooling bill, the hardware rental, the communication line bill... which will presumably come in one big package.”

This is treading rather closer to philosophy and economic theory than we’re used to, so in an attempt to drag it back to more familiar terms of reference, we ask Hanson whether he thinks there’s a future for the desktop PC. And he does. Kind of.

“A lot of people will continue to do office work at desks — it’s kind of comfortable to sit in a chair,” he says. “And they will want something around them to act as the interface to the computer they work with. Now, whether the computer is just sitting on the wall, or in their hands, or on the desk, that’s much harder to say. They could have a box next to them or a server down the hall — it hardly matters from the point of view of them interacting with the computer. I like to use the same computer at home and in the office, so I prefer a laptop, but the incentive to use a laptop will lessen the more reliable cloud services get.”

Hanson goes on to imagine cities that glow red hot because they’re occupied by the hardware and cooling needed to run Ems, with humans banished to more habitable parts of the globe, but the implication is clear — in the near term, PCs will continue to get more powerful, integrating new materials and structures into their designs. But as the cloud gets more important and reliable, and communication links get faster, a transition to something like today’s supercomputer model could occur, with virtual computers held in the cloud, and eventually virtual workers there, too. It’s safe for now, but the days of the desktop PC could ultimately be numbered. Sorry.

A Tunnel FET can operate on about half the power of a traditional MOSFET transistor.

Nvidia’s Xavier SoC uses 512 GPU cores and eight CPU cores to be the brain of an autonomous car.

Also from Nvidia, the Tesla GPU introduces additional parallelism for AI use, and more.
