Breast cancer scare recalls the value of collecting and evaluating evidence
It is hard to know which is more frustrating: last week’s announcement that over the past nine years 450,000 British women were accidentally not invited for breast cancer screening; or the widespread indifference of a howling media to the evidence that such screening is of doubtful benefit anyway.
Mammograms lengthen the lives of some women and shorten the lives of others: they allow the early detection and treatment of tumours, but they also produce many false positives, leading to the unnecessary and risky treatment of tumours that would never have caused any problems. The best evidence we have, after weighing up several high-quality clinical trials, is that the harms and the benefits are finely balanced.
When UK women are offered breast screening, they are sent a leaflet explaining the advantages and the risks so that they can make an informed choice. That choice should not have been denied to them by an administrative blunder. Still, we should be grateful that the error did not disrupt cervical cancer screening instead, which has been shown to save lives.
We should draw two lessons from the affair, beyond the obvious, which is that British institutions need to get a grip. The first lesson is that it pays to collect the best evidence that we can. The second is that having the best evidence is seldom enough.
Still, the evidence is a start. The world is full of sensible-seeming ideas that disappoint — along with some odd-seeming ideas that turn out to work.
Among the latter is the idea that lemon juice prevents and cures scurvy, a disease so debilitating that ships could lose half their crews. In 1747, James Lind, a Scottish doctor, conducted one of the most celebrated early clinical trials proving the efficacy of lemon juice. This is not what common sense might have suggested. The mechanism was obscure: a chemical in lemons — later dubbed “vitamin C” — makes the difference between life and death in tiny doses.
Randomised trials have become entrenched in medicine as the obvious way to assess what works. As, just as importantly, have reviews that systematically assemble, evaluate and summarise all the available trials in one place. This did not happen easily, since few doctors enjoy being embarrassed by an unexpected trial result.
Such trials have also become an important way to assess ideas in education, criminal justice and economic development. Their use is far more patchy and more controversial but is still yielding dividends.
A new book, Randomistas, by Andrew Leigh, an Australian economist turned politician, gives plenty of examples. One — notorious in geek circles — is Scared Straight, a programme designed to deter juvenile offenders by taking them to prison to be bullied by terrifying inmates. Scared Straight was so fashionable in the late 1970s that a documentary film about the policy won an Oscar; randomised trials showed it to be counterproductive.
That is often the way. Three decades ago the sociologist Peter Rossi quipped that the more rigorously a social programme evaluation was designed, the more likely it was to show a net benefit of zero. Unfortunately, Rossi may well have been right, but showing which ideas do not work is one of the most important roles of high-quality trials.
And not every idea fails. A randomised trial of police protocols for domestic violence in Minneapolis in 1981 demonstrated that the police needed to be tougher on domestic abusers, arresting them rather than having a quiet word, if they wanted to prevent future assaults.
Randomised trials of cash transfers to entrepreneurs in developing countries have shown excellent results, including a trial in which some Nigerian entrepreneurs with high-quality business plans were randomly chosen to receive $50,000 to realise their ideas.
This research is useless, however, if the people making the decisions are not aware of it. The academic’s cliché, “more research is needed”, is not necessarily true. Often all the necessary research has been done, but it has not been assembled and systematically reviewed. Or — as in the case of breast screening — it has been systematically reviewed, but not enough people have noticed.
Lind’s trial of lemon juice is instructive here. As early as 1601, James Lancaster of the East India Company had demonstrated that lemon juice was proof against scurvy. It took two centuries for the Royal Navy to make it part of sailors’ rations.
Yet as voyages grew shorter, and still lacking a convincing theory for why lemon juice vanquished scurvy, we simply forgot. In 1911, 300 years after Lancaster’s demonstration, Robert Scott’s expedition to the South Pole — including a Royal Navy surgeon — did not know how to prevent scurvy. They suffered grievously as a result.
Knowledge can be gained; it can also be ignored, or forgotten.