Worming a way to the hard facts
Questions over varying approaches to scientific study, writes Tim Harford
IT WAS one of the most influential economics studies to have been published in the past 20 years, with a simple title: Worms. Now, its findings are being questioned in an exchange that somehow manages to be encouraging and frustrating all at once. Development economics is growing up, and getting acne.
The authors of Worms, economists Edward Miguel and Michael Kremer, studied a deworming project in an area of western Kenya in which parasitic intestinal worms were a serious problem in 1998. They concluded three things from the randomised trial. First, deworming treatments produced not just health benefits but educational ones, because healthier children were able to attend school. Second, the treatments were cracking value for money. Third, there were useful spillovers: when a school was treated for worms, infection rates in nearby schools also fell.
The study was influential in two very different ways. Activists campaigned for wider use of deworming treatments. Development economists drew a separate lesson: that running randomised trials was an excellent way to figure out what worked.
In this, they were following in the footsteps of epidemiologists. Yet it is the epidemiologists who are now asking the awkward questions. Alexander Aiken and three colleagues from the London School of Hygiene and Tropical Medicine have just published a pair of articles in the International Journal of Epidemiology that examine the worms experiment, and find it wanting.
Their first article follows the original methodology and uncovers some errors, one of which calls into question the claim that deworming produces spillover benefits. The second article uses epidemiological methods rather than the statistical techniques preferred by economists. It raises the concern that the central findings may be a fluke.
Everyone agrees that there were some errors in the original paper, but on the key questions there is little common ground. Miguel and Kremer defend their findings, arguing that the epidemiologists have gone through statistical contortions to make the results disappear. Yet epidemiologists are uneasy. The Cochrane Collaboration, an independent network of health researchers, has published a review of the deworming evidence, concluding that many deworming studies offer only weak evidence of benefits.
What explains this difference of views? Partly this is a clash of academic best practices.
Consider the treatment of spillover effects. To Miguel and Kremer, these were the whole point of the cluster study. Aiken, however, says that an epidemiologist thinks of such effects as “contamination” — an undesirable source of statistical noise. Miguel believes this may explain the disagreement.
The epidemiologists fret about the statistical headaches the spillovers cause, while the economists are enthused by the prospect that these spillovers will help improve childhood health. Another cultural difference is this: epidemiologists have been able to run trials but, with big money sometimes at stake, they have had to defend the integrity of the trials against bias.
Economists, by contrast, are used to having to make the best of noisier data. Consider a century-old intervention, when John D Rockefeller funded a programme of hookworm eradication in the US. A few years ago, economist Hoyt Bleakley teased apart census data from the early 20th century to show that this programme had led to big gains in schooling and in income. To an economist, that is clever work. To an epidemiologist, it is of limited scientific value.
My sympathies lie with the economists. I suspect that the effects that Miguel and Kremer found are quite real, even if their methods do not quite match the customs of epidemiologists. But the bigger question is why so large a policy push needs to be based on a handful of clinical trials.
(c) 2015 The Financial Times Limited