National Post (National Edition)
The dangers of peer pressure
Whether the topic is suicide prevention in Canada, bullying in Norway or climate change around the world, we’re routinely assured that government policies are “evidence-based.” Science itself guides our footsteps.
But what does this actually mean? In recent decades we’ve been encouraged to equate academic research with accuracy and reliability. A 2015 report prepared for the U.S. National Science Foundation, however, restates the obvious: A scientific finding “cannot be regarded as an empirical fact” until it has been “independently verified” by third-party researchers who’ve followed the same steps and achieved similar results. Findings that haven’t yet been reproduced in this manner are, in scientific terms, merely tentative.
But this sort of due diligence almost never happens. What would seem to be a logical first step in establishing evidence-based policy is routinely skipped over. No systematic reproduction of research takes place before its conclusions begin shaping government policies.
When making decisions that affect human lives, bureaucrats and politicians have long substituted the judgment of people who work in academic publishing for proper scientific verification. Journals have a vetting process called “peer review.” Research findings that pass peer review, so the reasoning goes, are good enough for prime time.
But peer review is a tool invented by publishers for their own purposes. It helps them sift through hundreds of manuscripts a month. It helps them identify intriguing findings that will burnish their own prestige. Since most published research is never subjected to independent verification, journals exist in a universe that threatens them with no meaningful penalties should their peer-review process be faulty.
It often is. Referees at prestigious journals have given the green light to research that was later found to be wholly fraudulent. Eugenie Samuel Reich’s 2010 book, Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World, reports that the elite journal Nature published seven papers based on data faked by a hotshot young physicist named Jan Hendrik Schön. Nor do referees typically check raw data for accuracy or the computer code for errors. Peer review doesn’t guarantee that proper statistical analyses were employed, or that lab equipment was used properly.
Anyone can start an academic journal; there are an estimated 25,000 of them now. Journals are at liberty to define peer review however they please. No minimum standards apply, and no enforcement mechanisms exist.
In 2013, Irene Hames, a spokesperson for the U.K.-based Committee on Publication Ethics, told a Times Higher Education reporter that peer review should not be regarded as a guarantee that a paper is sound. The evidence bears her out. In one famous experiment, a dozen papers that had already been published were resubmitted to the very journals that had accepted them. The resubmission was detected in three instances, but the remaining nine papers underwent review by two referees each. Of those 18 referees, 16 (89 per cent) recommended rejection, and they didn’t cite lack of originality as a concern. Instead, the manuscripts were rejected “primarily for reasons of methodology and statistical treatment.”
Only one of the nine papers was deemed worthy of seeing the light of day the second time it was examined by reviewers at the same journal.
Small wonder that Richard Smith, a former editor of the British Medical Journal, describes peer review as a roulette wheel, a lottery, and a black box. A great deal of effort has been expended, he says, trying to demonstrate that peer review improves scientific rigour. But the evidence just isn’t there.
In 2011, Richard Horton, editor of The Lancet, told a U.K. parliamentary committee that “Those who make big claims for peer review need to face up to this disturbing absence of evidence.”
The Internet is currently full of stories about science’s reproducibility crisis. In a 2015 editorial, Horton publicly declared what many others have also been admitting: “much of the scientific literature, perhaps half, may simply be untrue.” If peer review were an effective quality control mechanism, such a statement would be unthinkable.
In a world in which published research is as likely to be wrong as it is to be right, beware the claim that the latest government policy is evidence-based. Unless the research in question has been painstakingly reproduced, we have no way of knowing if it’s reliable or risible.
Consider the calorie counts on food labels. They don’t capture the costs of digestion, which are lower for processed foods. The method used to measure caloric content is something called the Atwater system, developed in the 19th century.
By burning samples of food, one can measure the number of calories from the heat released; this is how food manufacturers determine calorie content. But our digestive systems extract energy from foods differently, even when two foods carry the same calorie count. By this measure, a 28-gram serving of almonds is labelled at about 170 calories, but its real energy content is around 129 calories, roughly 24 per cent less than labelled. Harvard nutrition scientist Rachel Carmody reported that such differences could be as high as 50 per cent. In other words, calorie labelling is a very crude way to measure how our bodies use the energy in foods, making government labelling all but useless. More information isn’t always better information.
This is one area where public health policy lags far behind the science of nutrition and behavioural economics. If governments want to improve our eating habits and reduce weight, calorie labelling isn’t the scientific way to do it.