The Guardian Australia

Has living through a pandemic made us all better at maths?

- David Sumpter

When Boris Johnson addressed the nation to announce new coronavirus restrictions last month, he talked about how the virus would “spread again in an exponential way” and warned us that the “iron laws of geometric progression [shout] at us from the graphs”.

My first reaction, as an applied mathematician, was to smile to myself at his careless use of mathematical ideas. Disease spread is nearly always exponential: it is just another way of saying that the virus multiplies over time. So, it is not the exponential nature of the growth itself that has changed, but the multiplication constant (the R number) that has increased. The term “geometric progression” implies that the virus spreads at evenly spaced, discrete intervals, rather than continuously, at any time of the day.
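
To make the distinction concrete, here is a minimal sketch in Python, using made-up numbers rather than real case data: a geometric progression multiplies the count at discrete steps, while exponential growth does the same thing continuously, and with a matched growth rate the two coincide at every whole step.

```python
import math

initial_cases = 100   # hypothetical starting point, for illustration only
R = 1.4               # hypothetical multiplication constant per step

# Geometric progression: multiply by R once per discrete generation.
geometric = [initial_cases * R**k for k in range(6)]

# Exponential growth: continuous rate r = ln(R), defined at any time t,
# evaluated here at the whole steps so it can be compared with the above.
r = math.log(R)
exponential = [initial_cases * math.exp(r * t) for t in range(6)]

for k, (g, e) in enumerate(zip(geometric, exponential)):
    print(f"step {k}: geometric {g:7.1f}   exponential {e:7.1f}")
```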

The prime minister’s faux-academic style isn’t everyone’s cup of tea, but most of us have a sense of what he is trying to get at (even if he is taking liberties with the terms). We have seen the graphs of cases and deaths; we have understood log scales (where 1, 10, 100, 1000 … are equally spaced on the y-axis of the graphs); we know that we want the R number to be less than one; and we get why exponential growth leads to sudden outbreaks.

Our collective mathematical knowledge has increased greatly during the pandemic. Days before the new restrictions were announced, talkRadio’s Julia Hartley-Brewer took Matt Hancock to task over whether he understood the implications of a false positive rate of 1% for tests (that is, when a person who doesn’t have the disease tests positive because of an error in the test).

Hartley-Brewer argued that if the FPR (yes, we are even using initialisms now) was 0.8% then 91% of “cases” were false positives. Her analysis was built on an explainer by Carl Heneghan, professor of evidence-based medicine at Oxford University. He pointed out that testing 10,000 people at an FPR of 0.1% would on average give 10 false positive tests. Then he noted that if only 1 in 1,000 people had the disease then within that same population of 10,000 there were 10 real cases on average. If the test picked up 80% of these real cases, we would expect only eight of the positive test results to be for people who really had the disease. Thus, out of the total of 18 positive tests, 10/18 or roughly 56% of the “cases” were false positives.

Hartley-Brewer’s calculation follows the same logic: testing 10,000 people at an FPR of 0.8% would, on average, give 80 false positive tests. And if there are eight true positive tests then 80/(80+8), or 91%, of the reported “cases” would be false positives. Exactly as she claimed.
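
For anyone who wants to check the arithmetic, here is the same back-of-envelope calculation as a short Python sketch. It uses the illustrative figures above – 10,000 people tested, 1 in 1,000 of them infected, a test that catches 80% of real cases – and reproduces both the 56% and the 91% figures.

```python
def false_positive_share(fpr, tested=10_000, prevalence=1 / 1_000, sensitivity=0.8):
    infected = tested * prevalence                # expected real cases: 10
    true_positives = infected * sensitivity       # real cases the test catches: 8
    false_positives = (tested - infected) * fpr   # healthy people who test positive anyway
    return false_positives / (false_positives + true_positives)

print(f"FPR 0.1%: {false_positive_share(0.001):.0%} of positives are false")  # roughly 56%
print(f"FPR 0.8%: {false_positive_share(0.008):.0%} of positives are false")  # roughly 91%
```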

When I hear these arguments playing out in public discourse, I get goosebumps. Not because Heneghan is necessarily correct in his conclusions, but because it’s the type of intellectual approach we need more of. The daily news is starting to sound like one of my university research group meetings. Models are built, data is collected and assumptions are challenged. Yes, it gets heated; we don’t all agree, and all but one of us end up being proved wrong. But the debate is passionate and scientific.

The mathematical equation used to explain false positives was discovered by the Reverend Thomas Bayes in the mid-18th century. It was first applied by Richard Price, a friend of Bayes, to argue for the plausibility of religious miracles. Price attacked an argument by the philosopher David Hume, who had argued that when something miraculous has occurred, like the resurrection of Christ, we should consider all the occasions similar events had not occurred, ie all the times people did not come back from the dead, as evidence against accepting the possibility of the miracle. In modern terminology, he was arguing that miracles were best explained as false positives: witnesses were mistaken when they saw someone come back to life.

In Price’s counterargument, miracles take the role of people who have got the virus: the fact that a small proportion are infected and false positives occur does not imply that people aren’t ever infected. Similarly, if miracles are rare (which they are by definition) then the existence of false positive miracles now and again is not strong evidence against their existence. Price dismissed Hume’s argument as “contrary to every reason”. Hume never provided an effective counterargument.

Such arguments have their limitations, but they illustrate that equations can sharpen the thinking of even the greatest philosophers, and talkshow hosts. In fact, Bayes’ rule can provide better judgment in most things. For instance, imagine you are an experienced traveller, having flown 100 times before. But on this flight the plane starts to rattle and shake in a way you have never experienced before. Should you be worried?

What you need to do is think about the baseline rate of plane crashes (something like one in 10m) then think about the fact this is “only” your worst ever flight – one out of 100 earlier flights. The probability that you are experiencing a true positive (a shaky ride ending in a crash) is then roughly 100/10,000,000 or 0.001%. You are very probably not going to die.
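
Written out as a rough Bayes’ rule sketch, with the same illustrative figures and the deliberately generous assumption that every crash would be preceded by shaking this bad:

```python
p_crash = 1 / 10_000_000     # baseline chance that any given flight crashes
p_shaking = 1 / 100          # your worst flight out of 100, so roughly 1 in 100
p_shaking_given_crash = 1.0  # generous assumption: every crash involves shaking like this

# Bayes' rule: P(crash | shaking) = P(shaking | crash) * P(crash) / P(shaking)
p_crash_given_shaking = p_shaking_given_crash * p_crash / p_shaking
print(f"{p_crash_given_shaking:.3%}")  # 0.001%, about 1 in 100,000
```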

The same reasoning can be used to help us to judge our friends less harshly. For example, if a longstanding friend lets you down, even very badly, you should consider the let-down as likely to be a false positive – that they made a mistake this time – rather than “proof” of their flawed character. Before you make a judgment, you need to weigh up the likelihood of all alternative explanations.

Johnson’s “iron law of geometric progression” is an example of a different and equally important equation: the influencer equation (also known as the less catchy “stationary distribution of a Markov chain”). It is used by Google and Instagram to look for the most influential webpages and people on their networks. They first use webcrawlers – automated bots that hop in discrete-time jumps from one person to another in the network – to collect data on our social connections. The influencer equation then allows these companies’ engineers to measure the rate at which information spreads between us. It is the continuous-time version of this same equation that allows epidemiologists to measure, through physical connections, how a virus spreads through the population.
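
As a toy illustration only – the three-page network and its numbers below are invented, not anything Google or Instagram actually uses – the influencer equation can be approximated by letting a virtual hopper wander the network until the share of time it spends on each page settles down:

```python
# Each row of this invented transition matrix says where a hop from that page goes.
transition = [
    [0.0, 0.5, 0.5],  # from page A, half the hops go to B and half to C
    [0.9, 0.0, 0.1],  # from page B, most hops go back to A
    [0.8, 0.2, 0.0],  # from page C
]

distribution = [1 / 3, 1 / 3, 1 / 3]  # start the hopper anywhere with equal chance
for _ in range(100):                  # repeated hops settle to the stationary distribution
    distribution = [
        sum(distribution[i] * transition[i][j] for i in range(3))
        for j in range(3)
    ]

for page, share in zip("ABC", distribution):
    print(f"page {page}: {share:.2f}")  # long-run share of time spent on each page
```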

This new openness to mathematical ideas might be one of the few positive things to come from the current crisis. I look forward to seeing debate where, instead of simply hurling numbers at each other, we use equations and models to structure our thinking. We may even hear future prime ministers talking about financial crises in terms of inaccurate assumptions in their market equation, or admitting that artificial productivity targets in academia and healthcare result from poorly thought-out skill equations.

Bayes would tell us that whether or not this last “miracle” occurs remains somewhat uncertain. But what is true is that equations allow us to better explain our assumptions and reasoning, even in the most heated of debates. And, if we want to, we can use them to create a better world.

‘Julia Hartley-Brewer challenged the health secretary, Matt Hancock, over whether he understood the implications of a coronavirus false positive rate.’ Photograph: David Mirzoeff/PA
