Bangkok Post

Reliability a big issue for studies


A few years ago, two researchers took the 50 most-used ingredients in a cookbook and studied how many had been linked with a cancer risk or benefit, based on a variety of studies published in scientific journals. The result? Forty out of 50, including salt, flour, parsley and sugar. “Is everything we eat associated with cancer?” the researchers wondered in a 2013 article based on their findings.

Their investigation touched on a known but persistent problem in the research world: too few studies have large enough samples to support generalised conclusions. But pressure on researchers, competition between journals and the media’s insatiable appetite for new studies announcing revolutionary breakthroughs have meant that such articles continue to be published.

“The majority of papers that get published, even in serious journals, are pretty sloppy,” said John Ioannidis, professor of medicine at Stanford University, who specialises in the study of scientific studies. Ioannidis, a sworn enemy of bad research, published a widely cited article in 2005 entitled “Why Most Published Research Findings Are False”. Since then, he says, only limited progress has been made.

Some journals now insist that authors pre-register their research protocol and supply their raw data, which makes it harder for researchers to manipulate findings in order to reach a certain conclusion. It also allows others to verify or replicate their studies.

Insufficient reliability is a common problem in scientific research. In a large 2015 test, only a third of 100 studies published in three top psychology journals could be successfully replicated. Medicine, epidemiology, population science and nutritional studies fare no better, Ioannidis said, when attempts are made to replicate them. “Across biomedical science and beyond, scientists do not get trained sufficiently on statistics and on methodology,” Ioannidis said.

Too many studies are based solely on a few individuals, making it difficult to draw wider conclusions because the samplings have so little hope of being representative. “Diet is one of the most horrible areas of biomedical investigation,” professor Ioannidis added — and not just due to conflicts of interest with various food industries.

“Measuring diet is extremely difficult,” he stressed. How can we precisely quantify what people eat? In this field, researchers often go on wild searches for correlations within huge databases, without so much as a starting hypothesis. Even when the methodology is sound, with the gold standard being a study where participants are chosen at random, the execution can fall short.

So what should we take away from the flood of studies published every day? Ioannidis recommends asking the following questions: is this something that has been seen just once, or in multiple studies? Is it a small or a large study? Is this a randomised experiment? Who funded it? Are the researchers transparent?

The solution lies in the collective tightening of standards by all players in the research world, not just journals but also universities and public funding agencies. But these institutions all operate in competitive environments.

“The incentives for everyone in the system are pointed in the wrong direction,” said Ivan Oransky, co-founder of Retraction Watch, which covers the withdrawal of scientific articles. “We try to encourage a culture, an atmosphere where you are rewarded for being transparent.”

Because the media help fuel the problem, Oransky says, news outlets need to better explain the uncertainties inherent in scientific research and to resist sensationalism.

“We’re talking mostly about the endless terrible studies on coffee, chocolate and red wine,” he said. “Why are we still writing about those? We have to stop with that.”
