Pittsburgh Post-Gazette

Error bars

It’s silly to compare the art of predicting elections with the science of climate change, writes FAYE FLAM

- Faye Flam, a former staff writer for Science magazine, is a Bloomberg View columnist (fflam1@bloomberg.net).

A new argument has started to crop up in debates over climate change. It goes like this: Science couldn’t predict the outcome of the last election, or the bumps in the economy, so why should we believe scientists when they try to predict the future of Earth’s climate?

For example, a recent New York Times column — the first from Bret Stephens (published in Monday’s Post-Gazette) — starts with a cautionary tale about the failure of data analytics to guide Team Clinton to victory in 2016, then segues into a discussion of climate-change skepticism. Given the “inherent uncertainties of data,” Mr. Stephens argues, doubters have a right to distrust “overweening scientism.” He writes:

“We live in a world in which data convey authority. But authority has a way of descending to certitude, and certitude begets hubris. From Robert McNamara to Lehman Brothers to Stronger Together, cautionary tales abound.”

But to put this in context, science makes all kinds of predictions that do hold up. Consider last year’s finding of gravitational waves: Scientists reported that they’d detected ripples in space-time generated by a collision of two black holes some 1.3 billion light years away. The invisible waves were predicted by Einstein’s theory of general relativity a century ago.

Even if someone later finds this individual claim was in error, it’s part of a body of knowledge. If physics weren’t on reasonably good footing, we wouldn’t be walking around with devices that talk to satellites to pinpoint our locations. If not for a general trust in physics, airlines would have to drag people kicking and screaming to get them on their planes.

Why, then, can some areas of science predict invisible space-time ripples, but others can’t predict elections? I’ve been talking to scientists, philosophers and historians about this question for months. There are, it turns out, some common characteristics of scientific pursuits that make good predictions.

One is the tradition scientists in some fields have of submitting to peer review, and of making their procedures transparent so other people can reproduce their results. This creates an interconnected body of knowledge. Great science combines great minds. Einstein himself wavered over whether his theory predicted the existence of gravitational waves. Other scientists realized that it did, and they dreamed up a creative way to detect them.

Fields of science with good track records for prediction often work by discerning patterns and insights that explain the world. The better the insights, the better the predictions — on subjects ranging from eclipses to chemical reactions to the behavior of ants to the existence of black holes.

In contrast, many data-driven algorithms developed by private companies and used to, say, predict election results, are opaque. They aren’t peer-reviewed. Their claims aren’t subject to replication. They don’t reveal insights or explanations that others can test.

Established fields of science also gain predictive power by requiring scientists to quantify their uncertainties. For some, this isn’t just good practice but part of the very definition of science. When scientists graph their measurements, they draw vertical lines — error bars — which indicate how inherently imprecise their measurement systems are.

There are good cautionary tales about failure to use error bars. One comes from forensic science — the use of fingerprints, hair analysis and the like to solve crimes. A group of scientists looking into forensics for a recent government report concluded that it shouldn’t be considered a science at all, because people are doing such a poor job of calculating error bars. Expert witnesses mislead juries with statements about “matches” when all they have are probabilities.

So it’s important to look closely at climate science and make sure scientists are not making the same mistakes. And investigations by the National Academy of Sciences and others don’t reveal the kinds of problems that plague forensics.

Climate science grew out of physics and chemistry — disciplines with explicit rules for dealing with uncertainty. The first climate model came from the calculations of Swedish chemist Svante Arrhenius in 1896. The basic principles behind his model have been tested in laboratory experiments and used to predict temperatures on Venus and Mars.

Earth is more complex than its neighbors because it’s covered in water. Atmospheric temperatures affect the state of the water — ice, liquid or vapor — which in turn affects the temperature. But that’s OK — scientists are allowed to deal in complex phenomena as long as they do a good job of calculating their uncertainties.

Individual scientists make mistakes, like everyone else, but if you really want a cautionary tale that’s relevant to climate change, it should involve a whole field misleading the public and being used to make harmful policy. It’s hard to find a better example than the now-discredited belief that dietary fat is killing people. As journalist Gary Taubes described it in Science in 2001, and later in The New York Times Magazine, the idea appealed politically to those on the left upset by consumption, cruelty to animals and the environmental toll of raising animals for meat.

As Mr. Taubes tells it, scientists were in disagreement and lacked the kind of long-range health data they needed to understand the effects of dietary fat. The National Academy of Sciences investigated and was blasted for failing to endorse the anti-fat belief. Back in the labs, scientists were coming across evidence that different fats had different physiological effects, some quite beneficial. But demand was growing for a simple recommendation. “Once politicians, the press, and the public had decided dietary fat policy,” Mr. Taubes wrote, “the science was left to catch up.”

If there is a lesson to be learned from the fat debacle, it’s that the press and policymakers shouldn’t get ahead of scientific consensus. Scientists do make mistakes, but scientific methods in many fields guard against unwarranted certainty. (Science can make some predictions with near-certainty — the Aug. 21 solar eclipse will certainly happen.) And of course, there is a consensus on climate change. Scientists shouldn’t be trusted blindly, but stubborn distrust in the face of evidence defeats the purpose.

