Inside an Intel Incubator

How to turn 7.2 billion transistors into a chip

Bloomberg Businessweek (Europe) · Story: Max Chafkin & Ian King · Photographs: Justin Fantl

Before entering the cleanroom in D1D, as Intel calls its 17 million-cubic-foot microprocessor factory in Hillsboro, Oregon, it’s a good idea to carefully wash your hands and face. You should probably also empty your bladder. There are no bathrooms in the cleanroom. Makeup, perfume, and cosmetics are forbidden. Writing instruments are allowed, as long as they’re special sterile pens; paper, which sheds microscopic particles, is absolutely banned. If you want to write on something, you’ll have to use what is known in the industry as “high-performance documentation material,” a paperlike product that doesn’t release fibers. After you put on a hairnet, your next stop is the gowning station, inside a pressurized room that sits between the outside world and the cleanroom itself. A hard breeze, sent by a cleaning system that takes up the equivalent of four and a half football fields, hits you as you walk in, removing stray matter—dust, lint, dog hairs, bacteria. You put on pre-gown gloves, then a white bodysuit with a hood and surgical-style mouth cover, followed by a second pair of gloves, a second pair of shoe covers, and safety glasses. None of these measures is for your safety; they protect the chips from you.

The air in the cleanroom is the purest you’ve ever breathed. It’s Class 10 purity, meaning that for every cubic foot of air there can be no more than 10 particles larger than half a micron, which is about the size of a small bacterium. In an exceptionally clean hospital operating room, there can be as many as 10,000 bacteria-size particles without creating any special risk of infection. In the outside world, there are about 3 million.

The cleanroom is nearly silent except for the low hum of the “tools,” as Intel calls them, which look like giant copy machines and cost as much as $50 million each. They sit on steel pedestals that are attached to the building’s frame, so that no vibrations—from other tools, for instance, or from your footfalls—will affect the chips. You step softly even so. Some of these tools are so precise they can be controlled to within half a nanometer, the width of two silicon atoms.

It’s surprisingly dark, too. For decades, Intel’s cleanrooms have been lit like darkrooms, bathed in a deep, low yellow. “That’s an anachronism,” says Mark Bohr, a small, serious man who has spent his entire 38-year career making chips, and who’s now Intel’s top manufacturing scientist. “Nobody’s had the courage to change it.”

Chips are made by creating tiny patterns on a polished 12-inch silicon disk, in part by using a process called photolithography and depositing superthin layers of materials on top. These wafers are kept in sealed, microwave-oven-size pods called FOUPs (front-opening unified pods) that are carried around by robots—hundreds of robots, actually—running on tracks overhead, taking the wafers to various tools. The air inside a FOUP is Class 1, meaning it probably contains no particles at all. Periodically, the wafer is washed using a form of water so pure it isn’t found in nature. It’s so pure it’s lethal: if you drank enough of it, it would pull essential minerals out of your cells and kill you.

Over the next three months—three times as long as it takes Boeing to manufacture a single Dreamliner—these wafers will be transformed into microprocessors. They’ll make their way through more than 2,000 steps of lithography, etching, material application, and more etching. Each will then be chopped up into a hundred or so thumbnail-size “dies,” each of which will be packaged in a ceramic enclosure.

If everything functions properly, none of the 100,000 or so people who work at Intel will ever touch them. The endpoint of this mechanized miracle: the Intel Xeon E5 v4, the company’s latest server chip and the engine of the internet.

Intel rarely talks about how it creates a new chip. When Bloomberg Businessweek visited the Hillsboro fab in May, we were given the most extensive tour of the factory since President Obama visited in 2011. The reticence is understandable, considering that the development and manufacture of a new microprocessor is one of the biggest, riskiest bets in business. Simply building a fab capable of producing a chip like the E5 costs at least $8.5 billion, according to Gartner, and that doesn’t include the costs of research and development ($2 billion-plus) or of designing the circuit layout (more than $300 million). Even modest “excursions”—Intel’s euphemism for screw-ups—can add hundreds of millions of dollars in expense. The whole process can take five years or more. “If you need short-term gratification, don’t be a chip designer,” says Pat Gelsinger, chief executive of VMware and a longtime Intel executive who most recently served as the company’s chief technology officer. “There are very few things like it.”

A top-of-the-line E5 is the size of a postage stamp, retails for $4,115, and uses about 60 percent more energy per year than a large Whirlpool refrigerator. You use them whenever you search Google, hail an Uber, or let your kids stream Episode 3 of Unbreakable Kimmy Schmidt in your car. These feats of computer science are often attributed to the rise of the smartphone, but the hard work is being done on thousands of servers. And pretty much all of those servers run on Intel chips.

Intel, based in Santa Clara, Calif., created the first microprocessor in 1971 and, under the leadership of Andy Grove, became a household name in the 1990s, selling the chips that ran most personal computers. But PC sales have fallen over the past five years with the rise of smartphones, and Intel was slow to develop lower-power chips suited for those devices. The company recently announced layoffs of 11 percent of its workforce in an effort, as CEO Brian Krzanich puts it, to “reinvent ourselves.”

Intel is still the world’s largest chipmaker, and it sells 99 percent of the chips that go into servers, according to research firm IDC. Last year its data center group had revenue of about $16 billion, nearly half of which was profit. This dominance is the result of competitors’ failings and Intel’s willingness to spend whatever it must to ensure large, predictable improvements to its products, every single year. “Our customers expect that they will get a 20 percent increase in performance at the same price that they paid last year,” says Diane Bryant, an Intel executive vice president and general manager of the company’s data center business. “That’s our mantra.”

In PCs and phones, this strategy has its limits: Consumers simply don’t care that much about speed and efficiency beyond a certain point. But in servers, where data centers run by such companies as Amazon and Microsoft compete for the right to handle data for the Netflixes and Ubers of the world, performance is paramount. The electricity needed to run and cool servers is by far the biggest expense at the average server farm. If Intel can deliver more computing power for the same amount of electricity, data center owners will upgrade again and again.

There’s a lot riding on that “if.” Each year, Intel’s executives essentially bet the company on the notion that they can keep pushing the limits of circuits, electronics, and silicon atoms, spending billions long before they turn a profit. Eventually chips will go the way of incandescent lightbulbs, passenger jets, and pretty much every other invention as it ages; the pace of improvement will slow dramatically. “There will be a point where silicon technology gets like that, but it’s not in the next couple of decades,” Krzanich says confidently. “Our job is to push that point to the very last minute.”

Microprocessors are everywhere. They’re in your TV, car, Wi-Fi router, and, if they’re new enough, your refrigerator and thermostat. Internet-connected lightbulbs and some running shoes have chips. Even if you don’t think of them that way, these devices are in a sense computers, which means they’re made of transistors.

A transistor is a switch. But instead of requiring a finger to turn it on or off, it uses small electrical pulses—3 billion per second in the case of a powerful computer. What can you do with a switch? Well, you can use it to store exactly one bit of information. On or off, yes or no, 0 or 1—these are examples of data that can be conveyed in a single bit, which is, believe it or not, a technical term. (There are 8 bits in a byte, 8 billion in a gigabyte.) The earliest computers stored bits in punch cards—hole or no hole?—but that was limiting, because if you want to do anything cool, you need a lot of bits. For instance, if you want your computer to store the words “God, this stuff is complicated,” it would need 8 bits for every character, or 240 transistors. Another thing you can do with a switch is math. String seven switches together in just the right order, and you can add two small numbers; string 29,000 of them, and you have the chip that powered the original IBM PC in 1981; pack 7.2 billion on an E5, and you can predict global weather patterns, sequence a human genome, and identify oil and gas deposits under the ocean floor.
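The storage arithmetic is easy to check for yourself: one 8-bit byte per character, one switch per bit, so the 30-character sentence needs 240 switches. A quick sketch in Python:

```python
# One byte (8 bits) per character; one transistor can hold one bit.
sentence = "God, this stuff is complicated"

bits_needed = len(sentence) * 8  # 8 bits for every character
print(bits_needed)  # -> 240, the figure quoted in the article

# The actual on/off pattern, as a string of 0s and 1s:
pattern = "".join(f"{ord(ch):08b}" for ch in sentence)
assert len(pattern) == bits_needed
print(pattern[:8])  # -> 01000111, the 8 bits that encode the letter "G"
```

The same counting explains the byte/gigabyte aside: a gigabyte is 8 billion of these one-bit switches’ worth of storage.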

Every three years or so, Intel shrinks the dimensions of its transistors by about 30 percent. It went from 32-nanometer production in 2009 to 22nm in 2011 to 14nm in late 2014, the state of the art. Each of those jumps to smaller switches means chip designers can cram about twice as many into the same area. This phenomenon is known as Moore’s Law, and it has, for half a century, ensured that the chip you buy three years from now will be at least twice as good as the one you buy today.
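The 30 percent shrink and the doubling are the same arithmetic viewed two ways: scale both dimensions of a transistor to about 0.7 of their old size and its footprint drops to roughly half, so about twice as many fit in the same area. A back-of-the-envelope check using the node sizes in the article:

```python
# Moore's Law arithmetic: a ~30 percent linear shrink roughly halves
# the area per transistor, doubling how many fit in the same space.
nodes_nm = [32, 22, 14]  # Intel's process nodes cited in the article

for old, new in zip(nodes_nm, nodes_nm[1:]):
    linear_shrink = new / old        # ~0.7 each generation
    density_gain = (old / new) ** 2  # transistors per unit area
    print(f"{old}nm -> {new}nm: {linear_shrink:.0%} of old size, "
          f"{density_gain:.1f}x density")
```

Both generation-to-generation jumps come out slightly better than 2x, which is why the article can promise "at least twice as good."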

The latest Xeon chips take advantage of research that began in the 1990s, when Bohr’s team in Oregon began trying to deal with quantum tunneling, or the tendency of electrons to jump through very small transistors, even when they’re switched off. It was the latest front in Intel’s ongoing war with physics. It had been conventional wisdom that once silicon transistors shrank below 65nm, they’d stop working properly. Bohr’s solution, unveiled in 2007, was to coat parts of the transistor with hafnium, a silvery metal that doesn’t occur in pure form in nature, and then, starting in 2011, to build transistors into little towers known as fin-shaped field effect transistors, or FinFETs. “Our first FinFET, instead of being narrow and straight, it was more of a trapezoid,” Bohr says with a hint of disappointment—trapezoidal fins take up more room than rectangular ones. “These are thinner and straighter,” he says proudly, gesturing at a recent photograph, taken with an electron microscope, that shows two stock-straight black shadows resting eerily on a grayish base. The images look like dental X-rays. Intel people call them “baby pictures.”

Shrinking the transistors is only part of the challenge. Another is managing an ever more complex array of interconnects, the crisscrossing filaments that link the transistors to one another. The Xeon features 13 layers of copper wires, some thinner than a single virus, made by etching tiny lines into an insulating glass and then depositing metal in the slots. Whereas transistors have tended to get more efficient as they’ve shrunk, smaller wires by their nature don’t. The smaller they are, the less current they carry.
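Why smaller wires inherently carry less current follows from basic resistance: R = ρL/A, so a wire's resistance grows as its cross-sectional area shrinks. Halve both the width and the height of a wire and its resistance per unit length quadruples. A rough illustration (the dimensions below are hypothetical, chosen only to show the scaling, not Intel's actual wire geometry):

```python
# Resistance of a wire: R = rho * L / A, with A the cross-sectional area.
# Halving both width and height cuts A by 4x, so R rises 4x.
RHO_COPPER = 1.68e-8  # resistivity of copper, ohm-meters

def wire_resistance(length_m, width_m, height_m, rho=RHO_COPPER):
    """Resistance in ohms of a rectangular copper wire."""
    return rho * length_m / (width_m * height_m)

# Hypothetical interconnect dimensions, for illustration only:
r_big = wire_resistance(1e-6, 100e-9, 100e-9)  # 100nm x 100nm cross section
r_small = wire_resistance(1e-6, 50e-9, 50e-9)  # 50nm x 50nm cross section
print(r_small / r_big)  # -> 4.0: half the width, four times the resistance
```

This is the scaling headache the article alludes to: transistors improve as they shrink, but plain copper wires get strictly worse.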

The man in charge of the Xeon E5’s wiring is Kevin Fischer, a midlevel Intel engineer who sat down in his Oregon lab in early 2009 with a simple goal: fix the conductivity of two of the most densely packed layers of wires, known as Metal 4 and Metal 6. Fischer, 45, who has a Ph.D. in electrical engineering from the University of Wisconsin at Madison, started the way Intel researchers usually do, by scouring the academic literature. Intel already used copper, one of the most conductive metals, so he decided to focus on improving the insulators, or dielectrics, which tend to slow down the current moving through the wires. One option would be to use new insulators that are spongier and thus create less drag. But Fischer suggested replacing the glass with nothing at all. “Air is the ultimate dielectric,” he says, as if stunned by the elegance of his solution. The idea worked. Metal layers 4 and 6 now move signals 10 percent faster.
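Fischer's air gaps work because signal delay in an interconnect scales with the product of the wire's resistance and the capacitance through the surrounding insulator, and that capacitance is proportional to the insulator's dielectric constant k. Air has the lowest k of any practical material, which is what "the ultimate dielectric" means. A simplified sketch (the dielectric constants are standard textbook values; the proportional-delay model is a first-order approximation, not Intel's internal figures):

```python
# First-order model: wire delay ~ R * C, and C is proportional to the
# dielectric constant k of the insulator between neighboring wires.
K_SILICON_DIOXIDE = 3.9  # conventional glass (SiO2) insulator
K_AIR = 1.0              # air, "the ultimate dielectric"

def relative_delay(k_dielectric, k_reference=K_SILICON_DIOXIDE):
    # Resistance is unchanged, so delay scales directly with k.
    return k_dielectric / k_reference

print(f"all-air gap vs. glass: {relative_delay(K_AIR):.2f}x delay")
# In practice air gaps replace only part of the insulator, which is why
# the measured improvement on Metal 4 and Metal 6 was ~10 percent,
# far short of this idealized ceiling.
```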

Chip design is mostly a layout problem. “It’s kind of like designing a city,” says Mooly Eden, a retired Intel engineer who ran the company’s PC division. But the urban-planning analogy may undersell the difficulty. A chip designer must somehow fit the equivalent of the world’s population into 1 square inch—and arrange everything in such a way that the computer has access to each individual transistor 3 billion times per second.

The building blocks of a chip are memory controllers, cache, input/output circuits, and, most important of all, cores. On the Pentium III chip you owned in the late 1990s, the core and the chip were more or less one and the same, and chips generally got better by increasing the clock rate—the number of times per second the computer can switch its transistors on and off. A decade ago, clock rates maxed out at about 4 gigahertz, or 4 billion pulses per second. If chips were to cycle any faster, the silicon transistors would overheat and malfunction. The chip industry’s answer was to start adding cores, essentially little chips within the chip, which can run simultaneously, like multiple outboard motors on a speedboat. The plan for the new E5 called for up to 22 of them, six more than the previous version; the chip would be designed at Intel’s development center in Haifa, Israel.
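Adding cores is a less straightforward win than raising the clock rate, because total speedup from parallelism is capped by whatever fraction of a workload can't be split across cores. The standard way to express that cap is Amdahl's Law (my gloss, not a framing used in the article). A quick sketch, with a hypothetical 95-percent-parallel workload:

```python
# Amdahl's Law: with fraction p of a workload parallelizable across
# n cores, speedup = 1 / ((1 - p) + p / n). The serial remainder
# (1 - p) bounds the gain no matter how many cores you add.
def amdahl_speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (1, 4, 16, 22):
    # Assume 95 percent of the work parallelizes cleanly (hypothetical).
    print(f"{cores:2d} cores: {amdahl_speedup(0.95, cores):5.2f}x speedup")
```

Even at the E5's 22 cores, a 5 percent serial remainder holds the speedup to under 11x, which is why server workloads that parallelize well, like handling many independent requests, are the ones that reward core counts this high.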

Another way to make a chip faster is to add special circuits that do only one thing, but do it extremely quickly. Roughly 25 percent of the E5’s circuits are specialized for, among other tasks, compressing video and encrypting data. There are other special circuits on the E5, but Intel can’t talk about those because they’re created for its largest customers, the so-called Super 7: Google, Amazon, Facebook, Microsoft, Baidu, Alibaba, and Tencent. Those companies buy—and often assemble for themselves—Xeon-powered servers by the hundreds of thousands. If you buy an off-the-shelf Xeon server from Dell or HP, the Xeon inside will contain technology that’s off-limits to you. “We’ll integrate [a cloud customer’s] unique feature into the product, as long as it doesn’t make the die so much bigger that it becomes a cost burden for everyone else,” says Bryant. “When we ship it to Customer A, he’ll see it. Customer B has no idea that feature is there.”

It takes a year for Intel’s architects—the most senior designers, who work closely with customers as well as researchers in Oregon—to produce a spec, a several-thousand-page document that explains the chip’s functions in extreme detail. It takes an additional year and a half to translate the spec into a kind of software code composed of basic logic instructions such as AND, OR, and NOT, and then translate that into a schematic showing the individual circuits. The final and most labor-intensive part of this process, mask design, involves figuring out how to cram the circuits into a physical layout. The layout is eventually transferred onto masks, the stencils used to burn tiny patterns on the silicon wafer and ultimately make a chip. For the E5, mask designers based in Bangalore, India, and Fort Collins, Colo., used a computer-aided design program to draw polygons to represent each transistor, or copied in previously drawn circuit designs from a sort of digital library. “You have to have the ability to visualize what you’re working on in 3D,” says Corrina Mellinger, a veteran Intel mask designer.
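The step from spec to logic instructions can be imagined in miniature. Stringing AND, OR, and NOT operations together in the right order yields a one-bit "full adder," the gate network behind the earlier claim that a handful of switches can add two small numbers; chain enough of them and you can add numbers of any size. A toy model (Python standing in for the hardware description languages actually used):

```python
# The three basic logic instructions a spec gets translated into:
def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

def XOR(a, b):  # exclusive-or, itself composed from AND/OR/NOT
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def full_adder(a, b, carry_in):
    """Add three one-bit inputs; return (sum_bit, carry_out)."""
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add(x_bits, y_bits):
    """Ripple-carry adder: chain full adders, least significant bit first."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

print(add([1, 1], [1, 0]))  # 3 + 1, LSB first -> [0, 0, 1], i.e., 4
```

A real schematic replaces each of these gates with a few transistors, and the mask designers' polygons are the physical outlines of those transistors.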

Unlike most of the technical jobs at Intel, mask design doesn’t require an advanced degree in engineering. The work is learned as a trade; Mellinger took a single class in chip layout at a community college after joining Intel in 1989 as an administrative assistant. The final few weeks of a mask design are always the most intense, as designers continually update their work to accommodate last-minute additions to the layout. “It never fits at first,” says Patricia Kummrow, an Intel VP and manager of the Fort Collins design team. The best mask designers can look at the polygons and instantaneously see how to shrink the design by rerouting circuits onto different layers. “It’s like you’ve finished a puzzle, and now you come and tell me I need to add 10 more pieces,” says Mellinger. “I’m like, ‘OK, let me see what kind of magic I can work.’ ”

Intel’s chip designers are committed rationalists. Logic is literally what they do, every day. But if you get them talking about their work, they tend to fall back on language that borders on mystical. They use the word “magic” a lot.

Gelsinger, the former CTO, says he found God a few months after starting at Intel in 1979. “I’ve always thought they went hand in hand,” he says, referring to semiconductor design and faith. Maria Lines, an Intel product manager, becomes emotional when she reflects on the past few years of her career. “The product that I was on several generations ago was about 2 billion transistors, and now the product I’m on today has 10 billion transistors,” she says. “That’s like, astounding. It’s incredible. It’s almost as magical as having a baby.”

The moment of birth of a chip is known as first silicon. For the E5, first silicon happened in 2014. A team in Bangalore sent a 7.5-gigabyte file containing the full design to Intel’s mask shop in Santa Clara. The masks, 6-by-6-inch quartz plates that feature slightly blown-up versions of the transistors to be printed on each chip, were shipped the following week to an Intel fab near Phoenix that is an exact copy of the Oregon facility, and the machines began their slow, exacting work.

After all of the round-the-clock scrambling, designers spent most of 2015 waiting for new prototypes to test. Each “rev,” or revision, takes three months or so to make. “It’s tedious,” says Stephen Smith, an Intel vice president and general manager of the data center engineering group. This, for all the intricacy of the circuits, is what makes microchip development among the highest-stakes bets in all of business. If you have more than a few excursions by the time you get to first silicon, there will be long delays and lost revenue. And with every generation of ever smaller transistors, the stakes get higher. Krzanich notes that it takes twice as long to fab a chip today as it did 10 years ago. “Making something smaller is a problem of physics, and there are always ways to solve that,” he says. “The trick is, can you deliver that part at half the cost?”

The last step in the manufacturing process happens at assembly plants in Malaysia, China, and Vietnam. There, diamond saws cut the finished wafers into squares, which are then packaged and tested. In fall 2015, Intel shipped more than 100,000 chips, gratis, to the Super 7 and other big customers. Last-minute tweaks were made to the software that ships with each chip, and Intel spent six weeks or so doing final tests. Full manufacturing of the new E5 didn’t begin until earlier this year, in Arizona and at another identical fab in Leixlip, Ireland. Over the next 12 months, Intel will sell millions of them.

If customers are lucky, they’ll probably never see those chips, much less consider how they were made. But if you opened up a new server, you’d eventually find a healthy chip, hot to the touch and sealed in ceramic packaging that bears a blue Intel logo. If you looked inside the housing, you’d find the 13 layers of interconnects, which to the naked eye look like nothing more than a dull metal plate. Many layers below would be the silicon, shimmering in blues and oranges and purples—a tiny, teeming maze of circuits that somehow makes our whole world work. It’s beautiful, you might think.

Bohr, Intel’s lead manufacturing researcher, sometimes thinks the same thing. But as a scientist, he understands that what he sees aren’t really colors—they’re just light, reflected and refracted by the designs he and his colleagues have imprinted on the silicon. The individual transistors themselves are smaller than any wavelength of visible light. “When you get dimensions that small, color has no meaning,” he says, and then excuses himself.

He’s late for a meeting to discuss Intel’s 5-nanometer chips, two generations from the current E5. Five nanometers is regarded by many in the chip business as the point after which it won’t be possible to scale down further, when Moore’s Law will finally fail. Intel hopes to use something called extreme ultraviolet light, a new technology that the industry has yet to harness effectively, to help get there. Beyond 5nm there will be new materials—some think that carbon nanotubes will replace silicon transistors—and perhaps entirely new technologies, such as neuromorphic computing (circuits designed to mimic the human brain) and quantum computing (individual atomic particles in lieu of transistors).

“We’re narrowing down the options—a lot of wild and crazy ideas,” Bohr says. “Some of them just won’t work out.” But, he adds with utter certainty, one or two will.

A 12-inch wafer will be chopped up into 122 Xeon E5 chips. They sell for as much as $4,115 apiece. Each E5 has as many as 7.2 billion transistors. The chip in the original IBM PC had 29,000. Just building a factory capable of making these wafers costs at least $8.5 billion.

It takes about three months to manufacture a single E5 chip. Making an E5 involves some 2,000 steps of etching and depositing materials, sometimes in layers as thin as a single atom.

An unprocessed silicon wafer costs about $300. It’ll be worth more than $300,000 when the fab is finished with it.

A Google self-driving car might have three server chips on board; a single Google search might use thousands of them.

Under late CEO Andy Grove, Intel created the “copy exactly” philosophy, which means all fabs are identical.

A human red blood cell is 7,000 nanometers across. A virus is 100nm. Intel’s fabs work on a 14nm scale.

According to Gartner, a chip design needs to generate $3 billion over its first two years to be economically viable.

It takes five years to make a new server chip—and just three years for that chip to become obsolete.
