Scientific modelling is valuable – but remember the limitations
The lessons to be learned from the coronavirus pandemic are so numerous they will keep scholars busy for decades to come. Chief among them is that modelling is valuable, but that uncritical reliance on its findings can lead you badly astray.
A recent model from Oxford University assessed how well different outbreak scenarios fitted the rise in Covid-19 deaths in the UK and Italy. The most extreme UK scenario assumed only a fraction of people were at risk of serious illness and estimated that, as of last week, 68% of the population had been exposed to the virus. The study, which has not been published or peer reviewed, unleashed a flurry of headlines declaring that coronavirus may have infected half the people in Britain. That’s 34 million people.
But as infectious disease modellers and public health experts, including the Oxford team themselves, have pointed out, the model used assumptions because there is no hard data. No one knows what fraction of the public is at risk of serious illness. The study merely demonstrates how wildly different scenarios can produce the same tragic pattern of deaths – and emphasises that we urgently need serological testing for antibodies against the virus, to discover which world we are in.
Paul Klenerman, one of the Oxford researchers, called the 68% figure “the most extreme” result and explained that “there is another extreme which is that only a tiny proportion have been exposed”. He added that the true figure – which is unknown – was “likely somewhere in between”. In other words, the number of people infected in Britain is either very large, very small, or middling. This may sound unhelpful, but it is precisely the point. “We need much more data about who has been exposed to inform policy,” Klenerman said.
The modelling from Imperial College that underpinned the government’s belief that the nation could ride out the epidemic by letting the infection sweep through, creating “herd immunity” on the way, was more troubling. The model, based on 13-year-old code for a long-feared influenza pandemic, assumed the demand for intensive care units would be the same for both infections. Data from China soon showed this was dangerously wrong, but the model was only updated when more data poured out of Italy, where ICUs were swiftly overwhelmed and deaths shot up.
It wasn’t the only shortcoming of the Imperial model. It did not consider the impact of widespread, rapid testing, or of contact tracing and isolation, which can be used in the early stages of an epidemic, or under lockdown conditions, to keep infections low enough that the virus should not rebound when restrictions are lifted.
It is not a question of whether models are flawed, but of in what ways they are flawed. That does not make them useless: models can be enormously valuable if their shortcomings are appreciated. But, as with other sources of information, they should never be relied on alone.