The Sunday Telegraph
Modelling behind lockdown was an unreliable, buggy mess, claim experts
Predictions that 500,000 could die in the UK unless extreme measures were taken are impossible to replicate, say scientific teams
THE Covid-19 modelling that sent Britain into lockdown, shutting the economy and leaving millions out of work, has been criticised by experts.
Prof Neil Ferguson’s Imperial College computer coding was derided as “totally unreliable” by leading figures, who warned it was “something you wouldn’t stake your life on”.
The model, credited with forcing the Government to U-turn and introduce a nationwide lockdown, is a “buggy mess, which looks more like a bowl of angel hair pasta than a finely tuned piece of programming”, said David Richards, the co-founder of British data technology company WANdisco.
“In our commercial reality, we would fire anyone for developing code like this and any business that relied on it to produce software for sale would likely go bust.”
The comments are likely to reignite a row over whether the UK was right to go into lockdown, with conflicting models suggesting people may have already acquired substantial herd immunity and Covid-19 may have hit Britain earlier than first thought.
Scientists have also been split on the fatality rate of Covid-19, which has resulted in vastly different models.
Up until now, significant weight has been attached to Imperial’s model, which placed the fatality rate higher than others and predicted 510,000 in the UK could die without a lockdown.
It was said to have prompted a dramatic change in government policy, causing businesses, schools and restaurants to be shut immediately in March. The Bank of England has predicted that the economy could take a year to return to normal, after its worst recession in more than three centuries.
The Imperial model works by using code to simulate transport links, population size, social networks and healthcare provisions to predict how coronavirus would spread. However, questions have emerged over whether the model is accurate, after researchers released its code, which in its original form was “thousands of lines” developed over more than 13 years.
In its initial form the code was unreadable, developers claimed, with some parts looking “like they were machine translated from Fortran”, an old coding language, according to John Carmack, a US developer, who helped clean the code before it was published.
Yet, the problems appear to go much deeper than messy coding. Many have claimed that it is almost impossible to reproduce the same results from the same data, using the same code.
Scientists from the University of Edinburgh said they got different results when they used different machines, and in some cases even when using the same machines. “There appears to be a bug in either the creation or re-use of the network file. If we attempt two completely identical runs, only varying in that the second should use the network file produced by the first, the results are quite different,” the Edinburgh researchers wrote in an issue on the model’s GitHub repository. After a discussion with a developer on GitHub, a fix was provided.
It is said to be one of a number of bugs discovered within the system. Developers on GitHub said that the model was “stochastic” (random), and that “multiple runs with different seeds should be undertaken to see average behaviour”.
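The distinction at issue here can be illustrated with a toy simulation. A stochastic model is expected to vary between runs, but fixing the random seed should make any single run exactly repeatable; the Edinburgh complaint was that identical seeded runs still diverged. The sketch below is purely illustrative (the function, parameters and growth rule are invented for this example, and bear no relation to the Imperial code itself):

```python
import random

def toy_epidemic_run(n_steps, seed=None):
    """Toy stochastic simulation: each step adds a random number of
    new cases. Illustrative only -- not the Imperial model."""
    rng = random.Random(seed)  # a seeded, private random stream
    cases = 1
    for _ in range(n_steps):
        cases += rng.randint(0, cases)  # random growth each step
    return cases

# Unseeded runs draw fresh randomness and will generally differ:
a = toy_epidemic_run(20)
b = toy_epidemic_run(20)

# Seeded runs must be bit-for-bit reproducible -- this is the
# "same results given the same initial set of parameters" test:
c = toy_epidemic_run(20, seed=42)
d = toy_epidemic_run(20, seed=42)
assert c == d
```

Averaging many runs with different seeds, as the GitHub developers suggested, is standard practice for stochastic models; the reproducibility complaint concerns seeded runs failing the `c == d` check above.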
It has prompted questions from specialists, who say “models must be capable of passing the basic scientific test of producing the same results given the same initial set of parameters ... otherwise, there is simply no way of knowing whether they will be reliable.” It comes amid a wider debate over whether the Government should have relied more heavily on numerous models before making policy decisions.
Writing for the Telegraph online, Prof Sir Nigel Shadbolt, the principal of Jesus College, Oxford, and chairman of the Open Data Institute – which he co-founded with World Wide Web inventor Sir Tim Berners-Lee – said: “Having a diverse variety of models, particularly those that enable policymakers to explore predictions under different assumptions, and with different interventions, is incredibly powerful.”
Like the Imperial code, a rival model by Prof Sunetra Gupta at the University of Oxford works on a so-called “SIR approach”, in which the population is divided into those who are susceptible, infected and recovered. However, while Prof Gupta assumed that 0.1 per cent of infected people would die, Prof Ferguson worked on 0.9 per cent. That led to a dramatic reversal in government policy from attempting to build “herd immunity” to a full-on lockdown.
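The effect of that single assumption can be seen in a minimal SIR sketch. Below, a deterministic SIR model (Euler integration) is run once and the death toll read off for each fatality rate; the reproduction number, recovery rate and UK population used here are assumptions for illustration, not the published Imperial or Oxford figures beyond the two fatality rates quoted above:

```python
def sir_deaths(population, r0, ifr, gamma=1/7, days=365, dt=0.1):
    """Minimal deterministic SIR sketch (Euler integration).
    r0 and gamma (recovery rate, here ~7-day infection) are
    illustrative assumptions, not either group's published values."""
    beta = r0 * gamma            # transmission rate
    s, i, r = population - 1.0, 1.0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / population * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return ifr * r               # deaths = fatality rate x ever-infected

uk = 66_000_000
low  = sir_deaths(uk, r0=2.4, ifr=0.001)  # 0.1 per cent assumption
high = sir_deaths(uk, r0=2.4, ifr=0.009)  # 0.9 per cent assumption
# With identical dynamics, the projected toll scales directly with
# the fatality rate: "high" is nine times "low".
```

Because everything else is held equal, the nine-fold gap between the two projections comes entirely from the fatality-rate assumption, which is why that single parameter mattered so much to policy.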
Concerns over Prof Ferguson’s model have been raised, with Dr Konstantin Boudnik, the VP of architecture at WANdisco, saying his track record did not inspire confidence. In the early 2000s, Prof Ferguson’s models incorrectly predicted up to 136,000 mad cow disease deaths, 200 million from bird flu and 65,000 from swine flu.
“The facts from the early 2000s are just yet another confirmation that their modelling approach was flawed to the core,” said Dr Boudnik. “We don’t know for sure if the same model/code was used, but we clearly see their methodology wasn’t rigorous then and surely hasn’t improved now.”
A spokesman for Imperial’s Covid-19 team said: “The Government has never relied on a single disease model to inform decision-making. As has been repeatedly stated, decision-making around lockdown was based on a consensus view of the scientific evidence, including several modelling studies by different academic groups.
“Multiple groups using different models concluded that the pandemic would overwhelm the NHS and cause unacceptably high mortality in the absence of extreme social distancing measures. Within the Imperial team, we use several models of differing levels of complexity, all of which produce consistent results. We are working with ... legitimate academic groups and technology companies to develop, test and further document the simulation code referred to. However, we reject the partisan reviews of a few clearly ideologically motivated commentators.
“Epidemiology is not a branch of computer science and the conclusions around lockdown rely not on any mathematical model but on the scientific consensus that Covid-19 is highly transmissible with an infection fatality ratio exceeding 0.5 per cent in the UK.”