Los Angeles Times

The misuse of research


A staple in college psychology courses is the story of polio and “spongy tar.” It seems that decades ago, researchers noticed that there were higher polio rates during times when tar was spongier in children’s playgrounds. Based on that, they mistakenly concluded that spongy tar causes polio and raised an alarm; in response, some schools went so far as to dig up their tar playgrounds — before scientists realized that the tar and the polio were both symptoms of something else: hotter temperatures. Polio tended to be a summertime problem, and tar softens in hot weather.

The lesson: Correlation does not imply causality. Just because two things are happening at the same time doesn’t mean that one led to the other.
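The spongy-tar trap can be made concrete with a toy simulation. In this sketch (invented numbers, not real epidemiological data), a hidden variable — temperature — drives both tar softness and polio incidence, so the two end up strongly correlated even though neither causes the other:

```python
import random

random.seed(0)

# Hypothetical illustration: temperature is the confounder that drives
# both "tar softness" and "polio rate"; the two never influence each other.
temps = [random.uniform(10, 35) for _ in range(1000)]    # daily temperature (C)
tar = [t + random.gauss(0, 2) for t in temps]            # tar softness tracks heat
polio = [t + random.gauss(0, 2) for t in temps]          # polio rate tracks heat

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Tar and polio look tightly linked, purely because both track temperature.
print(round(corr(tar, polio), 2))
```

Running it shows a correlation well above zero between tar and polio, despite no causal link in the code; controlling for temperature would make the association vanish.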

In this era of more robust and carefully vetted science, when policy decisions supposedly spring from research rather than hunches, the spongy-tar theory would probably not get much traction. Even so, studies aren’t always what they seem — or what they’re made out to be. Sometimes they’re misanalyzed, and the wrong message is gleaned from them. Sometimes they can’t be reproduced by subsequent studies, or the results aren’t as clear-cut as the first studies suggested. (Vitamin E, anyone?) At times there are unintended flaws or biases in the study design, and at other times the findings are misrepresented or overblown.

One of the most recent reminders of this came from a massive study of studies. As reported in August in the journal Science, a group of scientists attempted to reproduce 100 psychology studies, all of which had been published in leading journals. Their finding: More than half the time, the results of those studies couldn’t be replicated. Many were considered seminal studies frequently cited in other research.

The conclusion of the Reproducibility Project caused some snickering among those who had thought the “soft sciences” of psychology and sociology were sketchy fields to start with. But even research in the hard sciences, including medicine, often has similar problems. A 2012 study found that when medical research finds a “very large effect,” follow-up studies usually find a much smaller effect — or none at all.

Science is essential to our daily functioning and to our ability to understand the universe, nature and ourselves. Its benefits are almost unfathomable, especially when scientists build a body of multiple studies that support and round each other out. That’s how people learned about the horrific effects of smoking, and were given warnings about climate change long before the glaciers began to disappear. But scientific studies have their limitations; problems arise when the results are mishandled by the scientific community or when politicians and advocacy groups seize on studies that back their own beliefs without waiting for more research. We are confronted by mounting piles of studies about standardized testing, green coffee-bean extract, paleo diets, vitamin use by older men and sexual assault on campus. And in too many cases, before the findings have been confirmed by other studies, they become the basis for approving new drugs, or they set off new diet fads or lead to new policies.

A report that a nutritional supplement “may” lead to weight loss or reduced risk of Alzheimer’s disease is enough to send many people running to the store, even when the findings are preliminary and call for more research.

Various factors add to the problem. Researchers, attempting to attract grants and have their work accepted for publication, focus their research on sexy subjects that they hope will yield novel, dramatic findings, says John P.A. Ioannidis, a Stanford University epidemiologist. They pursue the aspects of their research that in preliminary stages show provocative results — and, conversely, when they find little or no effect, they tend to abandon those avenues of research, even though those results can be important, too. Journals also are looking for hot new topics, and university press offices tend to hype their faculty’s research in an attempt to draw new funding, which is in ever-tighter supply. The media, to fill their websites, have at times published press releases verbatim, or failed to report on the limitations in the research. Sometimes there are unintended biases in studies, or the researchers examined too small a group or set of questions. And occasionally, the pressure to be a research star leads to outright fraud.

What’s more, there isn’t much grant money around for trying to follow up on the research published by others. No one wants to fund replication studies, Ioannidis said.

Consider the case of Robert Martinson, a sociologist who in 1974 published an attention-grabbing essay based on a survey of 231 studies of prison rehabilitation programs. Though his role in the survey was relatively minor compared with those of his two colleagues, Martinson was the one who became famous for his assertion in the follow-up essay that almost no rehabilitation programs work. That conclusion went beyond the underlying research, which found that although the specific programs studied hadn’t worked, the techniques behind them might still be sound.

Martinson thought his essay, which came to be known as the “Nothing Works” paper, would lead to less-harsh sentencing, by making the public realize that prison wasn’t the best place to try to rehabilitate convicts. Instead, to his horror, it became the justification for wiping out prison rehabilitation programs.

Scientific research has transformed the world for the better, and has the potential to do far more. But policymakers, academics and educators must make certain that scientific caution isn’t lost in the race for the newest, hottest finding. Schools should be teaching students how to think critically about research findings that are presented to them. And politicians and regulators should resist the temptation to turn each new bit of research into policy without widespread scientific consensus that the matter at hand has been proven.
