Value judgements
The distinction between risk and true (radical) uncertainty is not clear cut (“Striding into the unknown”, August). Bill Emmott argues that risks are things that we can calculate while true uncertainty is something we cannot calculate at all.
However, between these extremes, there are many cases in which probabilities can still be estimated statistically. The resulting estimates are sometimes wrong: as former trader and risk expert Nassim Nicholas Taleb pointed out, inappropriate assumptions led to the chances of a financial crash being underestimated in 2008. On the other hand, estimates may often be useful: models of our chaotic weather system are used to produce probabilistic forecasts. This process is not without problems—different models produce different probabilities—but these forecasts, especially short-term ones, have become increasingly useful.
Probabilities can also be estimated using judgement. The Good Judgement Project, run by academics Philip Tetlock and Barbara Mellers, studied people’s ability to make probabilistic forecasts of geopolitical events (such as “shots will be fired between China and Taiwan this year”). This work showed that a few people with certain psychological characteristics (“super-forecasters”) can do this well.
In their book on radical uncertainty, John Kay and Mervyn King reject both statistical and judgemental probability estimation. They argue that we should instead use scenario planning to build up resilience against all reasonable eventualities. But developing resilience to all reasonable eventualities is much more expensive than preparing just for the most likely ones. Both the statistical and judgemental approaches are therefore likely to remain in use, with the choice between them depending on their relative costs and benefits in given circumstances.
Nigel Harvey, professor of judgement and decision research, UCL