The Mercury News

Tiny chips causing big headaches globally

Data centers have had surprise outages because of faulty computer chips

By John Markoff

Imagine for a moment that the millions of computer chips inside the servers that power the largest data centers in the world had rare, almost undetectable flaws. And the only way to find the flaws was to throw those chips at giant computing problems that would have been unthinkable just a decade ago.

As the tiny switches in computer chips have shrunk to the width of a few atoms, the reliability of chips has become another worry for the people who run the biggest networks in the world. Companies like Amazon, Facebook, Twitter and many other sites have experienced surprising outages over the last year.

The outages have had several causes, like programming mistakes and congestion on the networks. But there is growing anxiety that as cloud-computing networks have become larger and more complex, they are still dependent, at the most basic level, on computer chips that are now less reliable and, in some cases, less predictable.

In the past year, researchers at both Facebook and Google have published studies describing computer hardware failures whose causes have not been easy to identify. The problem, they argued, was not in the software — it was somewhere in the computer hardware made by various companies. Google declined to comment on its study, while Facebook did not return requests for comment on its study.

“They're seeing these silent errors, essentially coming from the underlying hardware,” said Subhasish Mitra, a Stanford University electrical engineer who specializes in testing computer hardware. Increasingly, Mitra said, people believe that manufacturing defects are tied to these so-called silent errors that cannot be easily caught.

Researchers worry that they are finding rare defects because they are trying to solve bigger and bigger computing problems, which stresses their systems in unexpected ways.

Companies that run large data centers began reporting systematic problems more than a decade ago. In 2015, in the engineering publication IEEE Spectrum, a group of computer scientists who study hardware reliability at the University of Toronto reported that each year as many as 4% of Google's millions of computers had encountered errors that couldn't be detected and that caused them to shut down unexpectedly.

In a microprocessor that has billions of transistors — or a computer memory board composed of trillions of the tiny switches that can each store a 1 or 0 — even the smallest error can disrupt systems that now routinely perform billions of calculations each second.
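To make that scale concrete, consider what a single flipped bit can do to a stored number. The short Python sketch below is an illustration added here, not taken from the researchers' work: it flips one bit in the 64-bit representation of a floating-point value, and depending on which bit is hit, the result is either almost indistinguishable from the original or wrong by hundreds of orders of magnitude.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Return `value` with a single bit of its 64-bit representation flipped."""
    (as_int,) = struct.unpack("<Q", struct.pack("<d", value))
    (flipped,) = struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))
    return flipped

balance = 1000.0
print(flip_bit(balance, 0))    # low mantissa bit: off only in the 13th decimal place
print(flip_bit(balance, 62))   # high exponent bit: wrong by roughly 300 orders of magnitude
```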

At the beginning of the semiconductor era, engineers worried about the possibility of cosmic rays occasionally flipping a single transistor and changing the outcome of a computation. Now they are worried that the switches themselves are increasingly becoming less reliable. The Facebook researchers even argue that the switches are becoming more prone to wearing out and that the life span of computer memories or processors may be shorter than previously believed.

There is growing evidence that the problem is worsening with each new generation of chips. A report published in 2020 by chip maker Advanced Micro Devices found that the most advanced computer memory chips at the time were approximately 5.5 times less reliable than the previous generation. AMD did not respond to requests for comment on the report.

Tracking down these errors is challenging, said David Ditzel, a veteran hardware engineer who is the chairman and founder of Esperanto Technologies, a maker of a new type of processor designed for artificial intelligence applications in Mountain View, California. He said his company's new chip, which is just reaching the market, had 1,000 processors made from 28 billion transistors.

He likens the chip to an apartment building that would span the surface of the entire United States. Using Ditzel's metaphor, Mitra said that finding new errors was a little like searching for a single running faucet in one apartment in that building, which malfunctions only when a bedroom light is on and the apartment door is open.

Until now, computer designers have tried to deal with hardware flaws by adding special circuits to chips that correct errors. The circuits automatically detect and correct bad data. It was once considered an exceedingly rare problem. But several years ago, Google production teams began to report errors that were maddeningly difficult to diagnose. Calculation errors would happen intermittently and were difficult to reproduce, according to their report.
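The idea behind those error-correcting circuits can be sketched in a few lines of code. The example below is a textbook Hamming(7,4) code, a simplified stand-in for the more elaborate schemes real memory controllers use: a few extra parity bits are stored alongside the data, and any single flipped bit can be located and silently repaired.

```python
def hamming74_encode(nibble: int) -> int:
    """Encode 4 data bits into a 7-bit Hamming codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]          # data bits d0..d3
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # Codeword bit positions 1..7: p1 p2 d0 p3 d1 d2 d3
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    return sum(b << i for i, b in enumerate(bits))

def hamming74_decode(codeword: int) -> int:
    """Correct a single flipped bit, if any, and return the original 4 data bits."""
    bits = [(codeword >> i) & 1 for i in range(7)]      # positions 1..7 -> indices 0..6
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 + (s2 << 1) + (s3 << 2)               # position of the bad bit, 0 = clean
    if syndrome:
        bits[syndrome - 1] ^= 1                          # repair it in place
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)

data = 0b1011
stored = hamming74_encode(data)
corrupted = stored ^ (1 << 5)                            # one stored bit gets flipped
assert hamming74_decode(corrupted) == data               # the error is silently corrected
```

Codes like this protect stored data; the silent errors described here come from cores computing wrong answers in the first place, which the built-in circuits do not catch.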

A team of researchers attempted to track down the problem, and last year they published their findings. They concluded that the company's vast data centers, composed of computer systems based upon millions of processor “cores,” were experiencing new errors that were probably a combination of a couple of factors: smaller transistors that were nearing physical limits and inadequate testing.

In their paper “Cores That Don't Count,” the Google researchers noted that the problem was challenging enough that they had already dedicated the equivalent of several decades of engineering time to solving it.

Modern processor chips are made up of dozens of processor cores, calculating engines that make it possible to break up tasks and solve them in parallel. The researchers found that a tiny subset of the cores produced inaccurate results infrequently and only under certain conditions. They described the behavior as sporadic. In some cases, the cores would produce errors only when computing speed or temperature was altered.
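The screening technique this implies is simple in outline, even if it is hard at fleet scale: pin a known, deterministic computation to one core at a time and flag any core whose answers ever disagree with a trusted reference. The Python sketch below is a rough illustration of that idea, not Google's actual tool; a real screener would also sweep clock speed and temperature, since that is when many of these cores misbehave.

```python
import hashlib
import os

def reference_workload(seed: bytes, rounds: int = 100_000) -> bytes:
    """A deterministic computation; every correct run must produce the same digest."""
    digest = seed
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest

def screen_core(core_id: int, expected: bytes, repetitions: int = 50) -> bool:
    """Pin this process to one core and compare its results against a trusted answer."""
    os.sched_setaffinity(0, {core_id})    # Linux-only: restrict execution to this core
    return all(reference_workload(b"seed") == expected for _ in range(repetitions))

expected = reference_workload(b"seed")    # ideally computed on hardware already trusted
suspects = [c for c in range(os.cpu_count()) if not screen_core(c, expected)]
print("cores producing silent errors:", suspects)
```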

Increasing complexity in processor design was one important cause of failure, according to Google. But the engineers also said that smaller transistors, three-dimensional chips and new designs that create errors only in certain cases all contributed to the problem.

In a similar paper released last year, a group of Facebook researchers noted that some processors would pass manufacturers' tests but then begin exhibiting failures once they were in the field.

Intel executives said they were familiar with the Google and Facebook research papers and were working with both companies to develop new methods for detecting and correcting hardware errors.

Bryan Jorgensen, vice president of Intel's data platforms group, said that the assertions the researchers made were correct and that “the challenge that they are making to the industry is the right place to go.”

He said that Intel recently started a project to help create standard, open-source software for data center operators. The software would make it possible for them to find and correct hardware errors that were not being detected by the built-in circuits in chips.

The challenge was underscored last year, when several of Intel's customers quietly issued warnings about undetected errors created by their systems. Lenovo, the world's largest maker of personal computers, informed its customers that design changes in several generations of Intel's Xeon processors meant that the chips might generate a larger number of uncorrectable errors than earlier Intel microprocessors.

Intel has not spoken publicly about the issue, but Jorgensen acknowledged the problem and said that it had now been corrected. The company has since changed its design.

Computer engineers are divided over how to respond to the challenge. One widespread response is demand for new kinds of software that proactively watch for hardware errors and make it possible for system operators to remove hardware when it begins to degrade. That has created an opportunity for new startups offering software that monitors the health of the underlying chips in data centers.
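A minimal sketch of what such monitoring software can look like, assuming a Linux host that exposes the kernel's EDAC error counters (this is a generic illustration, not any particular vendor's product, and the threshold is an invented policy): poll the corrected-error counts and flag the machine for removal once they start climbing.

```python
import glob
import time

CE_THRESHOLD = 100        # invented policy: corrected errors allowed per polling interval

def read_corrected_errors() -> int:
    """Sum corrected-error counts across all memory controllers exposed by EDAC."""
    total = 0
    for path in glob.glob("/sys/devices/system/edac/mc/mc*/ce_count"):
        with open(path) as f:
            total += int(f.read())
    return total

def monitor(interval_s: int = 3600) -> None:
    previous = read_corrected_errors()
    while True:
        time.sleep(interval_s)
        current = read_corrected_errors()
        if current - previous > CE_THRESHOLD:
            # In a real fleet this would drain workloads off the host and schedule
            # it for replacement rather than just printing a warning.
            print("degrading memory detected; flag host for removal")
        previous = current
```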

One such operation is TidalScale, a company in Los Gatos, California, that makes specialized software for companies trying to minimize hardware outages. Its chief executive, Gary Smerdon, suggested that TidalScale and others faced an imposing challenge.

“It will be a little bit like changing an engine while an airplane is still flying,” he said.

Photo: Large data centers, like this Facebook one in Prineville, Ore., have experienced outages that may be partly the result of chip errors. (Leah Nash, The New York Times archives)
