National Post (National Edition)

The dangers of peer pressure

- DONNA LAFRAMBOISE Donna Laframboise is the author of the 2016 report, Peer Review: Why skepticism is essential. Patrick Luciani is senior fellow at the Atlantic Institute for Market Studies.

Whether the topic is suicide prevention in Canada, bullying in Norway or climate change around the world, we’re routinely assured that government policies are “evidence-based.” Science itself guides our footsteps.

But what does this actually mean? In recent decades we’ve been encouraged to equate academic research with accuracy and reliability. A 2015 report prepared for the U.S. National Science Foundation, however, restates the obvious: A scientific finding “cannot be regarded as an empirical fact” until it has been “independently verified” by third-party researchers who’ve followed the same steps and achieved similar results. Findings that haven’t yet been reproduced in this manner are, in scientific terms, merely tentative.

But this sort of due diligence almost never happens. What would seem to be a logical first step in establishing evidence-based policy is routinely skipped over. No systematic reproduction of research takes place before its conclusions begin shaping government policies.

When making decisions that affect human lives, bureaucrats and politicians have long substituted the judgment of people who work in academic publishing for proper scientific verification. Journals have a vetting process called “peer review.” Research findings that pass peer review, so the reasoning goes, are good enough for prime time.

But peer review is a tool invented by publishers for their own purposes. It helps them sift through hundreds of manuscripts a month. It helps them identify intriguing findings that will burnish their own prestige. Since most published research is never subjected to independent verification, journals face no meaningful penalty should their peer-review process prove faulty.

It often is. Referees at prestigious journals have given the green light to research that was later found to be wholly fraudulent. Eugenie Samuel Reich’s 2010 book, Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World, reports that the elite journal Nature published seven papers based on data faked by a hotshot young physicist named Jan Hendrik Schön. Nor do referees normally examine the underlying data for accuracy or the computer code for errors. Peer review doesn’t guarantee that proper statistical analyses were employed, or that lab equipment was used properly.

Anyone can start an academic journal; there are an estimated 25,000 of them now. Journals are at liberty to define peer review however they please. No minimum standards apply, and no enforcement mechanisms exist.

In 2013, Irene Hames, a spokesperson for the U.K.-based Committee on Publication Ethics, made essentially that point to a Times Higher Education reporter.

The looseness shows in practice. In one famous experiment, researchers resubmitted a dozen already-published papers, under fictitious names, to the prestigious journals that had first printed them. The resubmission was detected in three instances, but the remaining nine papers underwent review by two referees each. The 16 referees (89 per cent) who recommended rejection didn’t cite lack of originality as a concern. Instead, the manuscripts were rejected “primarily for reasons of methodology and statistical treatment.”

Only one of the nine papers was deemed worthy of seeing the light of day the second time it was examined by reviewers at the same journal.

Small wonder that Richard Smith, a former editor of the British Medical Journal, describes peer review as a roulette wheel, a lottery, and a black box. A great deal of effort has been expended, he says, trying to demonstrate that peer review improves scientific rigour. But the evidence just isn’t there.

In 2011, Richard Horton, editor of The Lancet, told a U.K. parliamentary committee that “Those who make big claims for peer review need to face up to this disturbing absence of evidence.”

The Internet is currently full of stories about science’s reproducibility crisis. In a 2015 editorial, Horton publicly declared what many others have also been admitting: “much of the scientific literature, perhaps half, may simply be untrue.” If peer review were an effective quality control mechanism, such a statement would be unthinkable.

In a world in which published research is as likely to be wrong as it is to be right, beware the claim that the latest government policy is evidence-based. Unless the research in question has been painstakingly reproduced, we have no way of knowing if it’s reliable or risible.

Consider calorie counts. Food labels don’t capture the costs of digestion, which are lower for processed foods. The method used to measure caloric content, known as the Atwater system, was developed in the 19th century.

By burning samples of food, one can measure the number of calories from the heat released. This is how food manufacturers measure calorie content. But our digestive systems extract energy from foods differently, even when two foods have the same number of calories. By this method, a 28-gram serving of almonds is labelled at about 170 calories, but the energy our bodies actually extract is around 129 calories, considerably less. Nutrition scientist Rachel Carmody of Harvard has reported that such calorie differences can be as high as 50 per cent. In other words, calorie labelling is a very crude way to measure how our bodies use the energy in foods, making government labelling all but useless. More information isn’t always better information.
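The almond figures above can be put in percentage terms with simple arithmetic (a quick sketch using only the numbers in the article; the variable names are mine):

```python
# Figures cited in the article: a 28-gram serving of almonds
labelled_kcal = 170.0  # calories per the Atwater-based label
absorbed_kcal = 129.0  # calories the body actually extracts

# Share of the labelled figure that the body never uses
overstatement = (labelled_kcal - absorbed_kcal) / labelled_kcal * 100
print(f"Label overstates usable energy by about {overstatement:.0f} per cent")
```

For almonds, roughly a quarter of the labelled calories never reach the body; Carmody’s research suggests the gap can reach 50 per cent for some foods.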

This is one area where public health policy lags far behind the science of nutrition and behavioural economics. If governments want to improve our eating habits and help us lose weight, crude calorie labels aren’t the scientific way to do it.

