Maximum PC

NUCLEAR REACTION!

How the bomb led to the rise of supercomputers


As anyone who’s ever tried to work out a restaurant bill, including drinks, taxes, and tip, already knows, some math is difficult. Expand that by several orders of magnitude, and suddenly you’re simulating the effects of a nuclear bomb, or protein folding, or calculating how many oil rigs to send up the Bering Strait before winter, and your needs go beyond mere computers. You need a supercomputer.

Established in the 1960s, supercomputers initially relied on vector processors before evolving into the massively parallel machines we see today in the form of Japan’s Fugaku (7,630,848 ARM processor cores producing 442 petaflops) and IBM’s Summit (202,752 POWER9 CPU cores, plus 27,648 Nvidia Tesla V100 GPUs, producing 200 petaflops).

But how did we get to these monsters? And what are we using them for? The answers to that used to lie in physics, especially the explodey kind that can level a city. More recently, however, things like organic chemistry and climate modeling have taken precedence. The computers themselves are on a knife-edge, as the last drops of performance are squeezed out of traditional architectures and materials, and the search begins for new ones.

This, then, is the story of the supercomputer, and its contribution to human civilization.

DEFINE SUPER

What exactly is a supercomputer? Apple tried to market its G4 line as ‘personal supercomputers’ at around the turn of the millennium, but there’s more to it than merely having multiple cores (although that certainly helps). Supercomputers are defined as being large, expensive, and with performance that hugely outstrips the mainstream.

Apple’s claim starts to make more sense when you compare the 20 gigaflops of performance reached by the hottest, most expensive, dual-processor, GPU-equipped Power Mac G4 to the four gigaflops of the average early-2000s Pentium 4. For context, Control Data’s CDC Cyber supercomputer ran at 16 gigaflops in 1981, a figure reached by ARMv8 chips in today’s high-end cell phones.

Before supercomputers there were simply computers, though some of them were definitely super. After World War II, many countries found ways to automate code-breaking and other intensive mathematical tasks, such as those involved in building nuclear weapons. So let’s begin in 1945, with the ENIAC.

This programmable mass of valves and relays was designed to compute artillery trajectories, and it could do a calculation in 30 seconds that would take a human 20 hours. Its first test run, however, was commandeered by John von Neumann on behalf of the Los Alamos laboratory and consisted of calculations for producing a hydrogen bomb. ENIAC was programmed by rewiring plugboards and setting switches, and took its input and delivered its output on punch cards; a single Los Alamos run used a million cards.

ENIAC was upgraded throughout its life, and when finally switched off in 1956 (having run continuously since 1947, pausing only to replace the tubes that blew approximately every two days) it contained 18,000 vacuum tubes, 7,200 crystal diodes, 1,500 relays, 70,000 resistors, 10,000 capacitors, and around five million joints, all of them soldered by hand. It weighed 27 tons and took up 1,800 sq ft, while sucking down 150kW of power. Its computational cycle took 200 microseconds to complete, during which time it could write a number to a register, read a number from a register, or add/subtract two numbers. Multiplication took the number of digits plus four cycles, so multiplying 10-digit numbers took 14 cycles, or 357 per second.
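That last figure is a quick back-of-the-envelope result from the numbers above: a 200-microsecond cycle means 5,000 cycles per second, and each 10-digit multiplication consumes 14 of them.

\[
\frac{1\ \text{s}}{200\ \mu\text{s}} = 5{,}000\ \text{cycles/s}, \qquad
\frac{5{,}000\ \text{cycles/s}}{14\ \text{cycles per multiply}} \approx 357\ \text{multiplications/s}
\]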

Early computers owed much to the design of the ENIAC and the British Colossus. Breaking enemy codes was still a high priority, as was finding ever more efficient ways to blow things up with both high explosives and pieces of uranium. It’s around the early 1960s, though, that things such as processors and memory became recognizable. Take the UNIVAC LARC, or Livermore Advanced Research Computer, a dual-CPU design delivered in 1960 to help make nuclear bombs, and the fastest computer in the world until 1961. The LARC weighed 52 tons and could add two numbers in four microseconds.

TECH EXPLOSIONS

There was a burst of computer development in the early 1950s. IBM had been in the game since WWII, its Harvard Mark I electromechanical machine coming online in 1944, with one of its first programs, again run by von Neumann, to aid the Manhattan Project. In 1961, Big Blue would release the IBM 7030, known as Stretch, the first transistorized supercomputer and the fastest computer in the world until 1964 (a customized version of Stretch, known as Harvest, was used by the NSA for cryptanalysis from 1962 until 1976, when mechanical parts in its magnetic tape system wore out).

Von Neumann was behind another computer, sometimes named for him, at the Institute for Advanced Study in Princeton, which was in operation until 1958. This computer was the basis for a new generation, including the IBM 701, the ILLIAC I at the University of Illinois, and Los Alamos’ alarmingly named MANIAC I, which became the first computer to beat a human at a chess-like game (played on a 6x6 board with no bishops, to suit the limitations of the machine).

The ILLIAC line would become highly influential, with ILLIAC II completed before Stretch and the open-source nature of its design leading to suspicions of ‘borrowing’. Certainly, the two computers are early examples of a pipelined design, and feature heavy use of transistors instead of vacuum tubes. When faculty member Donald B Gillies programmed ILLIAC II to search for Mersenne prime numbers, it found three new ones.

The first massively parallel computer design was the ILLIAC IV (the ILLIAC III, meanwhile, was designed to detect nuclear particles in bubble chambers, and was destroyed in a fire). It was originally meant to have 256 floating-point units and four CPUs, running at a billion operations a second, but due to budget constraints only one CPU and 64 FPUs were completed in 1972. Even so, this quarter-computer still managed 50 Mflops, making it the fastest in the world. ILLIAC IV was also the first networked supercomputer, being connected to the ARPAnet in 1975, a year before the Cray-1.

THE GENIUS OF CRAY

There’s a name that shook the world of supercomputing. The whole story would be nothing without the influence of Seymour Cray, who joined Engineering Research Associates in 1951 to build code-breaking machines, left in 1957 to co-found Control Data, and went on to found Cray Research, Inc (now part of HPE) in 1972. Cray was gifted, some might say eccentric (he dug a tunnel under his home, and attributed his successes to the advice of ‘elves’ who visited him there), and was given to spending hours in deep concentration to solve a problem. The reason early Cray machines are circular or C-shaped, for example, is so that every electrical interconnect can be the same length, meaning an electrical signal always takes the same amount of time to travel down each one.

In his book The Supermen: Seymour Cray and the Technical Wizards Behind the Supercomputer, author Charles J Murray relates this anecdote: “After a rare speech at the National Center for Atmospheric Research in Boulder, Colorado, in 1976, programmers in the audience had suddenly fallen silent when Cray offered to answer questions. He stood there for several minutes, waiting for their queries, but none came. When he left, the head of NCAR’s computing division chided the programmers: ‘Why didn’t someone raise a hand?’ After a few moments, one programmer replied, ‘How do you talk to God?’”

John Levesque, the Director of Cray’s Supercomputer Center of Excellence based at Los Alamos National Laboratory (home of the Manhattan Project and much US nuclear research since), remembers him: “He was very shy. It was extremely difficult to get him to give a talk. But when he did, it was outstanding. When I met him, all I did was shake his hand. He didn’t say anything.”

Levesque has worked on all the greats of early supercomputing and remains in the sector today. He began his career working on the ILLIAC IV: “The UK had a machine called the DAP [International Computers Limited’s Distributed Array Processor, the first commercially available parallel computer, delivered to its first customer in 1979] at the same time as the ILLIAC, and similar to it, but it didn’t have the support that the ILLIAC did. I know there are a lot of people who felt it was a complete failure, because it never went into production, and they only really developed a quarter of the machine.”

This wasn’t Levesque’s first brush with supercomputers, however. “I started working at Sandia National Laboratories in 1968. And I was working for the Underground Physics Department, and then I went to work at the Air Force weapons lab in Albuquerque for three years. Then I worked for a government contractor in 1972 in southern California called R&D Associates, and while I was there, I got a contract from DARPA [Defense Advanced Research Projects Agency, the branch of the DoD interested in new materials and ideas] to monitor ILLIAC IV code development efforts.

“In 1976, Cray gave a computer to Los Alamos—it was serial number one—with the intent of convincing them that the machine would be extremely good for their applications. Because of the experience that my team had on the ILLIAC IV (by this point we probably had a team of five or six people), Los Alamos hired us to port and optimize their principal application to that Cray, and so in 1977 we started working on it. We had a cross-compiler but, initially, there was no vectorizing compiler, so we were using what’s known as Cray vector primitives to load registers, perform operations, and store results. When Cray came out with a vectorizing compiler, we wrote Fortran do-loops that could be optimized and used vector instructions on the Cray-1.”

“Los Alamos was primarily using CDC 7600s [from Control Data], which were also designed by Seymour Cray,” Levesque says. Peaking at 36 Mflops, the 7600 had a clock cycle of 27.5ns, for a speed of 36.4MHz on its strange 60-bit processor, and its base configuration sold for $5 million in 1967. It wasn’t Cray’s first successful design: the Control Data 6000 series had outperformed IBM’s Stretch by a factor of three, and the 6600 was the fastest computer in the world from 1964 to 1969, when it lost the crown to the 7600.

The last of his Control Data designs was the 8600, essentially four 7600s welded together and running at a faster cycle speed of 8ns; it was never released, and problems with its design and budget prompted Cray to leave the company in 1972.

He didn’t go far, setting up Cray Research in the same Wisconsin town as Control Data. The Cray-1, announced in 1975, was 5.5 tons of C-shaped genius running at 80MHz, with a cycle speed of 12.5ns and its busiest wiring on the faster, inside edge of the C (where interconnects could be shorter). It was slower on paper than the brute-force power of the 8600, but made up for it with cunning design, full 64-bit processing, and limited parallelism, for a peak output of 160 Mflops. The National Center for Atmospheric Research (NCAR) estimates it was 4.5 times faster than the 7600.

Levesque says Cray ‘gave’ the machine to Los Alamos, but there was a bidding war between the New Mexico lab and its rival, the Lawrence Livermore National Laboratory in California. Los Alamos won and received the machine for a six-month trial. NCAR got one in 1977 (which it used until 1989), and the NSA had one for code-breaking, possibly even before Los Alamos; a machine with serial number zero ended up at the British Atomic Weapons Establishment. In total, 80 Cray-1 machines were sold, for up to $8 million each.

VECTOR PROCESSING

The key to the Cray-1’s success was its combination of all-round high performance with vector processing. Traditional scientific code was written as loops in Fortran, where the operands (the objects of a mathematical operation) were processed one at a time. Vector processing opened up the abilities of parallel computers by carrying out the same operation on multiple pairs of operands at once, making it much more efficient. The trick was to convert the Fortran loops to vectors, which could be done automatically.
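To make the idea concrete, here is a minimal Fortran sketch (our own illustration, not code from the article): every trip through this do-loop is independent of every other, so a vectorizing compiler such as Cray’s CFT could load whole slices of B and C into vector registers, do the multiply-adds in one pass, and store the results back into A.

      SUBROUTINE SAXPYV(A, B, C, S, N)
C     Hypothetical example of a cleanly vectorizable loop: iteration I
C     never reads or writes data used by any other iteration, so the
C     compiler is free to replace the scalar arithmetic with vector
C     instructions operating on many elements at once.
      INTEGER N, I
      REAL A(N), B(N), C(N), S
      DO 10 I = 1, N
         A(I) = S * B(I) + C(I)
   10 CONTINUE
      RETURN
      END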

“The whole idea is that the user is writing in Fortran,” says Levesque. “And so the compiler has to identify where it can use an array operation. And the logical place is in a Fortran do-loop. The compiler has to determine if all of the operations are independent of one another. The main thing is that each add, or multiply, or multiply-add, needs to be able to be done for the full extent of the do-loop. So the compiler determines that it can do that, and it generates vector instructions. There are things like NCAR’s code, which had ambiguous subscripts and was extremely difficult to vectorize because of the way they wrote their loops. Cray told them, ‘You have to rewrite your loops’. And NCAR said, ‘We don’t have enough manpower to rewrite our code’. And so, Cray came up with the very first compiler directive, which is a comment line to any other compiler. It was CDIR$ IVDEP, and it stood for ‘ignore vector dependencies’. Once this directive was placed in front of the loops, the compiler generated vector code and it ran fast. NCAR bought a Cray.”
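Here is a hedged sketch of what that looks like in practice; the loop below is our invention, not NCAR’s code. An indirectly indexed subscript such as IX(I) is exactly the kind of ‘ambiguous subscript’ a compiler cannot analyze: it has no way of proving that two iterations never touch the same element of A, so it plays safe and leaves the loop scalar. The directive line, which begins with a C and so reads as a comment to any other compiler, tells it to assume the iterations are independent and generate vector code anyway.

      SUBROUTINE SCATAD(A, B, IX, N)
C     Hypothetical example of an 'ambiguous subscript': the compiler
C     cannot prove that A(IX(I)) and A(IX(J)) are different elements
C     when I and J differ, so on its own it refuses to vectorize.
      INTEGER N, I, IX(N)
      REAL A(N), B(N)
CDIR$ IVDEP
      DO 20 I = 1, N
C        The directive above is the programmer's promise that the
C        values in IX are all distinct, so the update is safe to
C        carry out with vector instructions.
         A(IX(I)) = A(IX(I)) + B(I)
   20 CONTINUE
      RETURN
      END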

“Then, since we had experience on the Cray, we got numerous contracts to help people with moving their code to the Cray. We struck oil in the early ‘80s. At that time, there was a company by the name of ARCO, which has since been bought by BP. And there was a fella who gave a talk at a conference who said that his solution technique could not be vectorized. And so I got the code, vectorized it, and showed him, and ARCO ended up giving us a contract to port all of their reservoir simulators to the Cray.

“It was kinda interesting because one time, the guy called me up and said, ‘It’s very important that you have this one code running fast’. And I asked why, and he said, ‘Oh, we have to figure out how many oil drilling rigs to move up through the Bering Strait before it freezes over’. And we were successful. We even gave Cray training courses on the Cray.”

COOLING COMPONENTS

The Cray-1 generated a lot of heat, so its cooling system was almost as lovingly designed as its integrated circuits. Circuit boards were placed back to back, with a sheet of copper in between. This spread heat to its edges, where it met stainless steel pipes containing liquid Freon, which carried the heat away to a cooling system mounted under the C-shaped main unit.

There were two updated versions of the Cray-1, the 1S and 1M. These had larger memories, faster cycle times, the addition of MOS RAM, and even solid-state storage. The Cray itself was supervised by a second computer, a Data General Supernova or Eclipse (the models changed through the years), which fed it its operating system (Cray OS at first, later a version of UNIX) at boot time and could act as a front-end. Further Cray machines, developed by a different team under designer Steve Chen (“Outgoing and personable,” according to Levesque), were released, each taking its turn as the fastest machine in the world, but it wasn’t until the Cray-2 in 1985 that Cray himself returned to the top.

The first Cray design with multiple CPUs (four custom vector processors), the Cray-2 used novel 3D wiring techniques and a ‘waterfall’ cooler that’s practically a work of art, but the new design had trouble beating 1982’s Cray X-MP, developed from the Cray-1, and its successor the Y-MP. Sales were poor. The Cray-3, meant to be 12 times as powerful as the Cray-2, saw Seymour Cray and his company part ways once again, with Cray Research continuing to work on the Cray C90 (a development of Y-MP tech that ran at 244MHz/4.1ns in 1991) and a spin-off, the Cray Computer Corporation, taking the Cray-3 tech and its single customer, the Lawrence Livermore Laboratory, with it. The laboratory would later cancel its order. Cray lent the sole Cray-3 built to NCAR as a demonstrator, but bankruptcy followed.

This didn’t stop Cray, whose Cray-4 scaled from four to 64 processors, each running at 1GHz. A 16-processor system came with 8GB of memory, provided 32 Gflops, and cost $11 million. Nobody was buying. The company stopped work in 1994, and Cray died following a car crash in 1996, aged 71. “It seems impossible to exaggerate the effect he had on the industry,” said Joel Birnbaum, former CTO of Hewlett Packard, in tribute to Cray. “Many of the things that high-performance computers now do routinely were at the farthest edge of credibility when Seymour envisioned them.”

His company would pass through a number of hands, including those of Silicon Graphics, Sun Microsystems, and Tera (which renamed itself Cray, Inc after the acquisition). Hewlett Packard Enterprise (HPE) acquired the company for $1.3 billion in 2019, and today is building the LUMI supercomputer in Finland, with a theoretical maximum performance of 550 petaflops, slotting into the top five fastest computers in the world.

ONE HORSE RACE?

While Cray was dominating, others hadn’t been sitting around. The Connection Machine, which grew out of research at MIT into alternative computer architectures that strayed from the orthodoxy put together by von Neumann, started with 1985’s CM-1. This had up to 65,536 individual processors, each extremely simple and processing one bit at a time, and a striking visual design for its casing that led to many programs being written just to blink its many LEDs. The CM range ended up, in the form of the CM-5, on top of the world computer speed rankings in 1993, with a 1,024-processor machine putting out 131.0 Gflops. One even appears in the Jurassic Park movie (though it’s a Cray in the novel). Also topping the rankings in 1993 was Intel, cramming up to 4,000 i860 RISC processors into its Paragon for up to 143.4 Gflops. Fujitsu’s Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994 with a peak speed of 170 Gflops.

IBM, in particular, had been releasing mainframes and minicomputers all through the 1950s to the ‘80s, as well as its PC line from the 1981 launch of the 5150 through the XT, AT, and more. A lot of what are thought of as IBM brands are actually made by Lenovo, the company to which IBM sold its PC business in 2005 and, coincidentally, the maker of 184 of the top 500 supercomputers. Mainframes, being suited to bulk data processing, are not the same as supercomputers, which tend to concentrate on one extremely complex task at a time.

Blue Gene, and its predecessor the Scalable POWERparallel (SP) series, changed all that in the 1990s. An example of cluster computing, computers that work together so closely they can be considered a single unit, they mark the emergence of IBM’s Power architecture (known to Mac owners as PowerPC G3, G4, and G5, beloved of Xbox 360 and Nintendo Wii gamers, and also trundling about on Mars in the Curiosity and Perseverance rovers).

ASCI

Nuclear weapons rear their head again at around this time, with the creation of the Accelerated Strategic Computing Initiative (ASCI), a supercomputing program to extend the lifetime of the US’s aging stockpile of nukes by simulating the way a nuclear weapon will react under different conditions. Essentially, if we leave these things in the cupboard for 50 more years, will they still go bang if we want them to?

“A big change in computing came around in the early 1990s, and that was due to the Nuclear Test Ban Treaty,” says Jim Sexton, an IBM Fellow and Director of Data Centric Systems at the company. “In the ‘90s, it became clear that computing would allow them to manage the stockpile of nuclear weapons with simulations rather than having to go and actually explode the bombs. When you have a mission of that order, and that complexity, people get very focused on developing computing systems to support your work. Up until then, the way people had been designing computers was just playing around, trying different things.”

“ASCI kicked off,” Sexton adds, “and has had a sequence of supercomputers developed ever since. There were a number of players in the early days: IBM was providing some of the computers, Cray was providing some of the computers, a couple of other names too, and what quickly happened is that there was an insatiable demand for computing power. And there was a limit on how much power you could deliver to a laboratory to run a computer. We’re up to 40 or 50 megawatts today, the biggest ones in the world are our own national labs in the US.”

This hunger for power has led to a new consideration in supercomputer design: efficiency. Born out of a five-year plan to build a massively parallel computer to address protein folding and other biomolecular phenomena, IBM’s Blue Gene project had a secondary aim of exploring new ideas in parallel computing architecture, the problem with supercomputers being that new ones are hard to simulate on existing hardware. In the early 2000s, there was a bit of a plateau in computer speeds. NEC’s Earth Simulator had been at the top of the rankings from March 2002 until November 2004, its 35.86 teraflops pushing out ASCI White’s 7.226 teraflops, which had been top for a year previously.

“Blue Gene was designed to be power-efficient,” says Sexton of a machine that forced its way to the top with 70.72 teraflops before losing the title to another IBM machine, Roadrunner at Los Alamos, in June 2008. “We were building systems with 100,000 CPU cores in the early 2000s, and that was unheard of at the time, to be able to get that many computer cores to work coherently together.

“And then towards the end of that time, what emerged was GPU acceleration. It turns out that GPUs are actually quite power-efficient, and a significant strain on the design of computers is your ability to manage power. A lot of systems design is about how to get more and more computing power into a system, but staying within a given power. The Blue Gene architecture used incredibly simple, lightweight, low-power cores, but by the time we got to around 2014 or 2015, it became clear we would not be able to continue to improve performance with that technology.”

MODERN DAY

Trading clock speed for lower power consumption, Blue Gene/L (originally ‘Blue Light’) puts two PowerPC 440 cores running at 700MHz, with floating-point accelerators, in each node, with 1,024 nodes in each 19in rack and up to 64 racks (65,536 nodes). A lightweight Linux OS further pares back the overhead. Blue Gene/P developed this further, its PowerPC 450 cores running at 850MHz and with twice the chip-to-chip bandwidth of the L model, but with four cores per node and 4,096 cores per rack.

Blue Gene/Q dispenses with PowerPC, using instead IBM’s A2 open-source architecture. This means 18-core chips with four threads per core and a speed of 1.6GHz. Each rack contains 1,024 compute nodes, 16,384 user cores (16 per chip are available to users, the 17th runs the OS, and the 18th is either a redundant spare or there to increase manufacturing yields), and 16TB of RAM.
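Those per-rack numbers hang together as simple arithmetic: with 16 user cores per node, 1,024 nodes gives 16,384 user cores, and 16TB of RAM spread across 1,024 nodes works out to 16GB per node.

\[
1{,}024 \times 16 = 16{,}384\ \text{user cores}, \qquad
\frac{16\ \text{TB}}{1{,}024\ \text{nodes}} = 16\ \text{GB per node}
\]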

That lineage also spawned Deep Blue, the computer that beat Grandmaster Garry Kasparov at chess in 1997. With its roots reaching back to 1985, and earlier attempts to teach a computer to play chess, Deep Blue is closer to an ASIC (Application Specific Integrated Circuit) design, using specialized chess chips alongside IBM POWER2 processors to brute-force solutions to chess problems rather than displaying modern artificial intelligence.

“We are at the limit of what it is physically possible to program and manage,” Sexton continues, “so then we did a switch to work with Nvidia and built the fastest computers in the US national labs: Summit and Sierra. They have tightly coupled CPUs and GPUs, and are at 200 petaflops today.”

Planned machines will produce more than an exaflop of computing power, comparable to that put out by Folding@home during the great push to discover vulnerabilities in the proteins of the SARS-CoV-2 virus. “We’re at a disruptive time in computing,” says Sexton. “We’re running out of technology, and yet the demands of computing continue to grow. The ability for AI to do things, to get you more access to knowledge, requires more and more compute. If you think of a self-driving car, something a lot of people talk about, getting a computer to drive a car actually takes quite a lot, so you’re seeing huge amounts of compute everywhere.

“The history from 1995 to today and into the next few years has been driven primarily by the mission activities that have supported nuclear weapons,” he continues, “and all the science that goes around understanding materials, understanding climate, understanding weather, understanding biological systems, so we’re now at the point where you can actually do serious simulation. Every five years, we were seeing a tenfold increase in peak performance. We’re now at an exaflop, so we’ve gone three orders of magnitude, in terms of performance, in 15 years. It’s absolutely amazing that that’s possible, but hard to see how it’s going to continue.

“On the other hand, the computing that we do is changing drastically now. It’s less and less about trying to focus on understanding basic physics and more and more about AI and data analytics. It’s been a very interesting ride, but I don’t think the future of computing is going to be around nuclear weapons. I think we’ve reached the limits of traditional computing, but we have all these new capabilities coming too.”

FUGAKU KING

This may explain how Fugaku, a system made of an ungodly number of ARM-based CPU cores and no GPUs, was able to overtake Summit in June 2020 to take the top spot on the big list of really fast computers. Competition between nations for the prestige of hosting the world’s greatest supercomputer may still be a factor, but we may have to look elsewhere, such as distributed computing systems, to catch the really big numbers in future.

Or we could change the way we design our computers. A resurgence in analog computing, maybe? “We as humans, we don’t really work as precise ones and zeros,” says Sexton. “Our brain computes analog, and it does a pretty good job. When you look at a lot of the domain out there, do I need a computer of ones and zeroes to drive a car? Would it be better to go to an interesting, analog future?

“An analog computer deals with a range of values,” he continues. “It doesn’t give you a precise binary value for something. And then there’s a different way of calculating, and you do that by having different materials analyzing the different programs or occasional model. In-memory computing is another one. More and more, challenging computing is moving away from the old von Neumann model, where you have your data in memory and move it to a computer to compute. We’re looking for ways around that, because a significant amount of the power in the computer is going into moving data.”

Whatever happens, as quantum computers approach from left-field, or new materials other than silicon become more common in microchip manufacture, we may never leave the age of the supercomputer behind. They may just fade into the background of our society, giant machines that help when great scientific problems need solving and spend the rest of their time predicting the weather. But that’s another story.

[Image] On July 25, 1946, the US military carried out a nuclear weapon test at Bikini Atoll, Micronesia.
[Image] The CDC 6600 from 1964 is often considered to be the first supercomputer.
[Image] Parts of the Harvard Mark I computer on display. Made by IBM and proposed in 1937; John von Neumann ran the first program on it in 1944 under the Manhattan Project.
[Image] The ENIAC at the Ballistic Research Laboratory, Pennsylvania, circa 1950.
[Image] Fathers of the atomic bomb: Robert Oppenheimer (left) and John von Neumann at the October 1952 dedication of the computer built for the Institute for Advanced Study.
[Image] John Levesque, head of Cray’s Supercomputer Center at Los Alamos National Laboratory, the home of US nuclear research.
[Image] The Cray-1 supercomputer on display at the Computer Museum of America, Roswell, Georgia.
[Image] Boards from the ILLIAC IV computer, state of the art in 1966.
[Image] The Cray-2’s internal wiring. And you thought your PC cables were a mess!
[Image] Right: Seymour Cray, oddly looking like a cardboard cutout, with his Cray-1 computer.
[Image] Left: Jim Sexton, an IBM Fellow and Director of Data Centric Systems.
[Image] A Cray-2 (left) and its cooling system (right) on display at the Computer History Museum.
[Image] A Cray-2 logic module, showing the tight packing of components.
[Image] The Cray XC40, also known as Hazel Hen, at the High Performance Computing Center of the University of Stuttgart. It began operation in 2015.
[Image] Above: One node of the Fugaku supercomputer.
[Image] Left: Part of IBM’s Summit supercomputer.
[Image] The CM-1 at the Computer Museum of America, with its plethora of red LEDs.
[Image] Blue Gene/P at the Argonne National Laboratory in Lemont, Illinois.
