CAN WE TRUST SCIENTISTS IN A POST-TRUTH WORLD?
IN THESE TESTING TIMES, WE NEED OUR SCIENCE TO BE SOLID
It’s the big question being asked around the world. In these post-truth, fake news, alternative-fact times, who can we trust? Most people are pretty sure of one thing: it’s not politicians or the media. For years, they’ve been at the bottom of surveys of trustworthiness. Amazingly, a recent global poll revealed that what little trust they once enjoyed has now plunged to the lowest level ever recorded.
Fortunately, those same polls also highlight the existence of the ultimate source of reliable insight: science. Not surprisingly, the current crisis of trust has prompted high-minded academics to pen pieces insisting it’s time we all put our trust in the methods of science.
What’s striking about these calls to arms is their naivety. While science has an impressive track record of debunking misconceptions, blunders and plain lies, it doesn’t follow that we should therefore put our complete trust in scientists. For that assumes scientists can be trusted to know what they’re doing. And sadly, that’s just not the case. Too many researchers seem to think that hard data alone is the hallmark of reliable science. Yet hard data from badly designed studies is quite capable of giving compelling support for claims that are just plain wrong.
For example, imagine there’s a new idea for reducing juvenile crime: take the worst offenders to a tough jail to see what awaits them if they don’t mend their ways. To test the idea, we can simply check to see if the visits trigger a fall in re-arrest rates among those taking part.
Chances are the data will show the idea works – but that doesn’t mean it actually does. That’s because of an effect called ‘regression to the mean’, which rears its head whenever extreme cases are singled out for study.
Those young offenders were chosen to take part precisely because they were arrested an extreme number of times. But that’s partly the result of chance: they just ran out of luck too often. Once they’ve had their prison visit, their spate of bad luck is likely to ‘regress’ back to a more average rate. As a result, they’ll evade re-arrest – and thus appear to have mended their ways, when in reality they haven’t.
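The effect is easy to see in a simulation. The sketch below (the arrest rates, group sizes and selection threshold are all invented for illustration) gives each youth a fixed underlying arrest rate, adds year-to-year luck, selects the ‘worst offenders’ on one year’s arrests, and then simply measures the same group again – with no intervention whatsoever:

```python
import math
import random

random.seed(1)

def poisson(lam):
    # Knuth's method: draw a Poisson-distributed count (stdlib only)
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        p *= random.random()
        k += 1
    return k - 1

# Each youth has a fixed underlying arrest rate; the arrests observed
# in any one year are that rate plus luck (Poisson noise).
rates = [random.uniform(0.5, 3.0) for _ in range(20_000)]
year1 = [poisson(r) for r in rates]
year2 = [poisson(r) for r in rates]  # no intervention at all

# Pick the 'worst offenders' on the basis of year-1 arrests alone
worst = [i for i in range(len(rates)) if year1[i] >= 6]

mean_before = sum(year1[i] for i in worst) / len(worst)
mean_after = sum(year2[i] for i in worst) / len(worst)

print(f"Worst offenders, year 1: {mean_before:.2f} arrests on average")
print(f"Same group,      year 2: {mean_after:.2f} arrests on average")
```

The group’s arrest count falls sharply in year two even though nothing was done to them: selecting on an extreme score guarantees you catch people at an unlucky peak, and their next measurement drifts back towards their true rate. Any intervention applied in between would wrongly get the credit.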
This isn’t some esoteric possibility either. For decades a scheme called Scared Straight was used in the US following claims it dramatically cut re-offending rates. It’s now clear that the apparently rock-solid evidence was anything but. When the idea was tested using studies designed to cope with regression to the mean, the benefit vanished. Indeed, a major review of the evidence published in 2013 showed it was actually worse than useless, and increased offending rates.
Over the years, regression to the mean has fooled researchers in fields from medicine and business to psychology and finance. Which wouldn’t be so bad, except the phenomenon has been known about since Victorian times.
And that’s one of the striking things about these traps. Warnings about them have been circulating for years, seemingly with little effect. That’s because many – perhaps even most – working scientists have a surprisingly poor understanding of how to avoid the many pitfalls in turning data into reliable insights.
To be fair, a lot of scientists recognise this. A recent poll in the journal Nature ranked ‘better understanding of statistics’ top among factors that would lead to more reliable science.
There has never been a greater need for trustworthy evidence on issues that affect us. The scientific process is without question the best way to gather such evidence. But those claiming to use its techniques need to up their game if they are to justify our trust in them.
“TOO MANY RESEARCHERS SEEM TO THINK
THAT HARD DATA ALONE IS THE HALLMARK OF RELIABLE SCIENCE”
Robert Matthews is a visiting professor in science at Aston University, Birmingham