Prospect

Value judgements


The distinction between risk and true (radical) uncertainty is not clear cut (“Striding into the unknown”, August). Bill Emmott argues that risks are things that we can calculate, while true uncertainty is something we cannot calculate at all.

However, between these extremes there are many cases in which probabilities can still be estimated statistically. The resulting estimates are sometimes wrong: as the former trader and risk expert Nassim Nicholas Taleb pointed out, inappropriate assumptions led to the chances of a financial crash being underestimated in 2008. On the other hand, such estimates are often useful: models of our chaotic weather system are used to produce probabilistic forecasts. This process is not without problems (different models produce different probabilities), but these forecasts, especially short-term ones, have become increasingly useful.

Probabilities can also be estimated using judgement. The Good Judgement Project, run by the academics Philip Tetlock and Barbara Mellers, deals with people’s abilities to make probabilistic forecasts for geopolitical events (such as “shots will be fired between China and Taiwan this year”). This work showed that a few people with certain psychological characteristics (“super-forecasters”) can do this well.

In their book on radical uncertainty, John Kay and Mervyn King reject statistical and judgemental probability estimation. They argue that, instead, we should use scenario planning to build up resilience against all reasonable eventualities. But developing resilience to all reasonable eventualities is much more expensive than preparing just for the most likely ones. Both the statistical and judgemental approaches are likely to continue to be used, with the choice between them depending on their relative costs and benefits in given circumstances.

Nigel Harvey, professor of judgement and decision research, UCL
