MIT’s unscientific, catastrophic climate forecast
When we drive on a long bridge over a river or fly in a passenger aircraft, we expect the bridge and the plane to have been designed and built in ways that are consistent with proven scientific principles. Should we expect similar standards to apply to forecasts that are intended to help policymakers make important decisions that will affect people’s jobs and even their lives? Of course we should. Such standards exist. But are they being followed?
The Financial Post asked us to look at a report last month from the Massachusetts Institute of Technology (MIT) Joint Program on the Science and Policy of Global Change, titled “Probabilistic Forecast for 21st Century Climate based on uncertainties in emissions (without policy) and climate parameters.”
The MIT report authors predicted that, without massive government action, global warming could be twice as severe as previously forecast, and more severe than the official projections of the United Nations’ Intergovernmental Panel on Climate Change (IPCC). The MIT authors said their report is based in part on 400 runs of a computer model of the global climate and economic activity.
While the MIT group espouses lofty-sounding objectives to provide leadership with “independent policy analysis and public education in global environmental change,” we found their procedures inconsistent with important forecasting principles. The MIT modellers properly applied no more than 30% of the relevant forecasting principles and violated 49 of them. For an important problem such as this, we do not think it is defensible to violate a single principle.
For example, MIT forecasters should have shrunk forecasts of change in the face of uncertainty about predictions of the explanatory variables; in this case the variables postulated to influence temperatures. More generally, they should also have been conservative in this situation of high uncertainty and instability. They were not.
We recognize that judgement is required in rating forecasting procedures. Evidence for our principles, however, is in the form of findings from scientific experiments comparing reasonable alternative methods, and accepted practice (see link below).
So what’s really wrong with the MIT report? The phrase “global environmental change” provides a clue. The group’s objective implicitly rejects the possibility of no or unimportant change or, despite mention of uncertainties, the possibility of unpredictable change. People who do research on forecasting know that a forecast of “no change” can be hard, if not impossible, to beat in many circumstances. A forecast of no change does not mean that one should necessarily expect things not to vary. Such a forecast can be appropriate even when a great deal of change is possible but the direction, extent or duration is uncertain.
When one looks at long series of Earth’s temperatures, one finds that they have gone up and down irregularly, over long and short periods, on all time scales from years to millennia. Moreover, science has not been able to tell us why. There is much uncertainty about past climate changes and about the strength and even direction of causal relationships. To wit, do warming temperatures result in more carbon dioxide in the atmosphere or is it the other way round — or maybe a bit of both? Does warming of the atmosphere result in negative or positive feedback from clouds? There are many more such questions without answers. All this strongly suggests that a no-change forecast is the appropriate benchmark long-term forecast.
With Dr. Willie Soon of the Harvard-Smithsonian Center for Astrophysics, we found that simply predicting that global mean temperatures will not change results in quite small forecast errors. In our validation study, which covered the period 1851 to 2007, we compared the no-change forecast with the IPCC projection that temperatures will climb at a rate of 0.03C per year, testing both against what actually happened after 1850. The errors from the IPCC projection were 12 times larger than those from the no-change benchmark. Consider the accuracy of the no-change model: on average, the 50-year-ahead forecasts differed by only 0.24C from the global mean temperature as measured by the Hadley Centre in the U.K.
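The validation logic described above can be sketched in a few lines of code. The sketch below is purely illustrative — it uses a synthetic temperature series (a slow oscillation plus noise) rather than the actual Hadley Centre record, and the function name `forecast_errors` is our own invention — but it shows the mechanics of rolling a no-change forecast and a fixed 0.03C-per-year trend forecast through a series and comparing their errors at a given horizon.

```python
import math
import random

# Synthetic annual "temperature" series standing in for a real record such as
# the Hadley Centre data: a slow 60-year oscillation plus random noise.
random.seed(42)
years = list(range(1851, 2008))
temps = [0.3 * math.sin(2 * math.pi * (y - 1851) / 60) + random.gauss(0, 0.1)
         for y in years]

TREND = 0.03  # C per year, the IPCC rate cited in the text

def forecast_errors(horizon):
    """Mean absolute error at the given horizon for two forecasting methods:
    a no-change forecast and a fixed linear-trend forecast."""
    no_change_errs, trend_errs = [], []
    for i in range(len(temps) - horizon):
        actual = temps[i + horizon]
        # No-change forecast: the future equals the last observed value.
        no_change_errs.append(abs(actual - temps[i]))
        # Trend forecast: the last observed value plus 0.03C per year.
        trend_errs.append(abs(actual - (temps[i] + TREND * horizon)))
    return (sum(no_change_errs) / len(no_change_errs),
            sum(trend_errs) / len(trend_errs))

for h in (10, 50):
    nc, tr = forecast_errors(h)
    print(f"{h}-year horizon: no-change MAE = {nc:.2f} C, trend MAE = {tr:.2f} C")
```

On a series without a persistent trend, the trend forecast's error grows with the horizon while the no-change forecast's error stays bounded by the series' natural variation — which is the essence of the benchmark comparison described above. The actual figures in our study come from the real temperature record, not from this toy series.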
Based on our analysis, we expect the annual global mean temperature for every year for the rest of the 21st Century to be within plus-or-minus 0.5C of the 2008 mean.
The MIT approach to forecasting is in substance the same as the approach adopted by the IPCC. Our forecasting audit of the IPCC approach, and its conclusion, therefore applies to the MIT forecasting effort as well: the forecasting procedures were not valid, and there is no reason for policymakers to take the forecasts seriously. It also leads to the conclusion that the MIT forecast errors will be even larger than the IPCC’s.
Policymakers and the public should be made aware that the forecasts from the MIT modellers, as well as those used by the IPCC, are merely the opinions of some scientists and computer modellers. It is not proper to claim that these are truly scientific forecasts.
Dr. Kesten C. Green is a senior research fellow of the Business and Economic Forecasting Unit at Monash University in Australia. Dr. J. Scott Armstrong is Professor of Marketing at The Wharton School, University of Pennsylvania. Armstrong and Green are co-directors of the public service Web site forecastingprinciples.com sponsored by the International Institute of Forecasters.