Albuquerque Journal

SCIENTIFIC TRANSPARENCY

Research community recalibrating requirements, promoting greater data sharing.

BY JOEL ACHENBACH / THE WASHINGTON POST

Diederik Stapel, a professor of social psychology in the Netherlands, had been a rock-star scientist — regularly appearing on television and publishing in top journals. Among his striking discoveries was that people exposed to litter and abandoned objects are more likely to be bigoted.

And yet there was often something odd about Stapel’s research. When students asked to see the data behind his work, he couldn’t produce it readily. And colleagues would sometimes look at his data and think: It’s beautiful. Too beautiful. Most scientists have messy data, contradictory data, incomplete data, ambiguous data. This data was too good to be true.

In late 2011, Stapel admitted that he’d been fabricating data for many years.

The Stapel case was an outlier, an extreme example of scientific fraud. But this and several other high-profile cases of misconduct resonated in the scientific community because of a much broader, more pernicious problem: Too often, experimental results can’t be reproduced.

That doesn’t mean the results are fraudulent or even wrong. But in science, a result is supposed to be verifiable by a subsequent experiment. An irreproducible result is inherently squishy.

And so there’s a movement afoot, and building momentum rapidly. Roughly four centuries after the invention of the scientific method, the leaders of the scientific community are recalibrating their requirements, pushing for the sharing of data and greater experimental transparency.

Top-tier journals, such as Science and Nature, have announced new guidelines for the research they publish.

“We need to go back to basics,” said Ritu Dhand, the editorial director of the Nature group of journals. “We need to train our students over what is okay and what is not okay, and not assume that they know.”

The pharmaceutical companies are part of this movement. Big Pharma has massive amounts of money at stake and wants to see more rigorous pre-clinical results from outside laboratories. The academic laboratories act as lead generators for companies that make drugs and put them into clinical trials. Too often these leads turn out to be dead ends.

Some pharmaceutical companies are willing to share data with each other, a major change in policy in a competitive business.

“It’s really been amazing the last 18 months, the movement of more and more companies getting in line with the philosophy of enhanced data sharing,” says Jeff Helterbrand, Global Head of Biometrics for Roche in South San Francisco.

But Ivan Oransky, founder of the blog Retraction Watch, says data sharing isn’t enough. The incentive structure in science remains a problem, because there is too much emphasis on getting published in top journals, he said. Science is competitive, funding is hard to get, tenure harder, and so even an honest researcher may wind up stretching the data to fit a publishable conclusion.

“Everything in science is based on publishing a peer-reviewed paper in a high-ranking journal. Absolutely everything,” Oransky said. “You want to get a grant, you want to get promoted, you want to get tenure. That’s how you do it. That’s the currency of the realm.”

Higher standards

Reproducibility is a core scientific principle. A result that can’t be reproduced is not necessarily erroneous: Perhaps there were simply variables in the experiment that no one detected or accounted for. Still, science sets high standards for itself, and if experimental results can’t be reproduced it’s hard to know what to make of them.

“The whole point of science, the way we know something, is not that I trust Isaac Newton because I think he was a great guy. The whole point is that I can do it myself,” said Brian Nosek, the founder of a start-up in Charlottesville, Va., called the Center for Open Science. “Show me the data, show me the process, show me the method, and then if I want to, I can reproduce it.”

The reproducibility issue is closely associated with a Greek researcher, John Ioannidis, who published a paper in 2005 with the startling title “Why Most Published Research Findings Are False.”

Ioannidis, now at Stanford, has started a program to help researchers improve the reliability of their experiments. He said the surge of interest in reproducibility was in part a reflection of the explosive growth of science around the world. The Internet is a factor, too: It’s easier for researchers to see what everyone else is doing.

“We have far more papers, far more scientists working on them, and far more opportunity to see these kinds of errors, and for the errors to be consequential,” Ioannidis said.

Torturing data

Errors can potentially emerge from a practice called “data dredging”: When an initial hypothesis doesn’t pan out, the researcher will scan the data for something that looks like a story. The researcher will see a bump in the data and think it’s significant, but the next researcher to come along won’t see it — because the bump was a statistical fluke.

“There’s an aphorism: ‘If you torture the data long enough, they will confess.’ You can always get the data to produce something that is publishable,” says the Center for Open Science’s Nosek, who is a University of Virginia professor of psychology.
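The arithmetic behind that aphorism is easy to demonstrate. Below is a minimal sketch in Python; the subject and variable counts are illustrative, not drawn from any study mentioned in this article. Every measurement in it is pure noise, yet scanning enough of them at the conventional p < 0.05 cutoff reliably turns up a publishable-looking bump.

```python
# Minimal data-dredging sketch: all "measurements" are pure noise, but
# testing 100 of them against a random outcome at p < 0.05 will, on
# average, flag about 5 as "significant" purely by chance.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects, n_variables = 50, 100          # illustrative sizes

outcome = rng.normal(size=n_subjects)                      # the random "result"
predictors = rng.normal(size=(n_variables, n_subjects))    # 100 noise variables

false_positives = [
    i for i, x in enumerate(predictors)
    if pearsonr(x, outcome)[1] < 0.05      # conventional significance cutoff
]

print(f"{len(false_positives)} of {n_variables} noise variables look 'significant'")
```

Preregistration, discussed below, blocks exactly this maneuver: the researcher must name the one test that counts before seeing the data.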

His center is known among its employees as “the COS,” which is both an acronym and a homonym. They’re really talking about “The Cause” — the struggle to make science more robust.

Nosek’s operation has grown from two employees in April 2013 to 53 employees today, about half of them interns, with everyone crammed into an office about a block from the downtown pedestrian mall. They spend much of their time designing software programs that let researchers share their data.

So far about 7,000 people are using that service, and the center has received commitments for $14 million in grants, with partners that include the National Science Foundation and the National Institutes of Health, Nosek said.

Another COS initiative will help researchers register their experiments in advance, telling the world exactly what they plan to do, what questions they will ask. This would avoid the data-dredging maneuver in which researchers who are disappointed go on a deep dive for something publishable.

Nosek and other reformers talk about “publication bias.” Positive results get reported, negative results ignored. Someone reading a journal article may never know of all the similar experiments that came to naught.
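The distortion this produces can be seen in a simple simulation. The hedged sketch below (illustrative numbers, same Python as above) runs a thousand small studies of one modest true effect and “publishes” only the positive, significant ones; the published average then overstates the truth.

```python
# Publication-bias sketch: many small studies of the same modest effect,
# but only positive results with p < 0.05 get "published". The average
# published effect ends up well above the true one.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(1)
true_effect, n_per_study, n_studies = 0.2, 20, 1000   # illustrative values

published = []
for _ in range(n_studies):
    sample = rng.normal(loc=true_effect, scale=1.0, size=n_per_study)
    _, p = ttest_1samp(sample, popmean=0.0)
    if p < 0.05 and sample.mean() > 0:    # negative results go in the file drawer
        published.append(sample.mean())

print(f"true effect: {true_effect}")
print(f"published average: {np.mean(published):.2f} "
      f"({len(published)} of {n_studies} studies reported)")
```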

There’s a natural tendency to tidy up the experiment and make the result prettier and less ambiguous, Nosek said. Call it airbrushed science.

“What is able to get published is positive, innovative, novel, and it’s really clean and beautiful. But most research in the laboratory doesn’t look like that,” Nosek says. “We are incentivized to make our research more beautiful than it is.”

Overboard reformers?

Scientific errors get a lot of publicity, but these embarrassing cases often demonstrate science at its self-correcting best.

Consider “cold fusion”: In 1989, two scientists claimed to have achieved nuclear fusion at room temperature, previously considered impossible. It was a bombshell announcement — but no one else could replicate their work. Cold fusion didn’t take off because mainstream scientists realized it wasn’t real.

A more recent case involved “arsenic life.” In 2010 a paper in Science suggested that a bacterium in Mono Lake, Calif., used arsenic instead of phosphorus in its genetic code and represented a new form of life. Rosemary Redfield, a scientist, cast doubt on the conclusion, and other researchers couldn’t replicate the finding. The consensus is that it was a misinterpretation.

In early 2014, the scientific world was rocked by a tragic case in Japan. A young scientist, Haruko Obokata, claimed to have found evidence for a phenomenon called “STAP,” for Stimulus-Triggered Acquisition of Pluripotency — a way to manipulate ordinary cells to turn them into stem cells capable of growing into a variety of tissues.

But no one else could reproduce the experiment. An investigation found Obokata guilty of misconduct and she later resigned from her institute. The journal Nature retracted the STAP papers, and then the case took a horrific turn in August, when Obokata’s mentor, the highly respected scientist Yoshiki Sasai, hanged himself.

Betsy Levy-Paluck, an associate professor of psychology and public policy at Princeton, said of the reproducibility movement, “I think it’s the future.” But there has been controversy at the laboratory level: Some researchers have complained that the reformers are going overboard.

“There are worries about there being witch hunts,” Levy-Paluck said. She said it’s frightening to think about someone discovering your mistake after publication.

Some veteran scientists have sounded a cautious note when discussing the reproducibility surge.

“Look, science is complicated, because the world is complicated,” says Eric Lander, head of the Broad Institute at MIT and co-chair of President Barack Obama’s Council of Advisors on Science and Technology.

Lander, who played a leading role in the decoding of the human genome, says the irreproducibility problem is caused in part by the many variables that go into any experiment. To take one simple example: During his research on the genome he and his colleagues discovered that experiments were influenced by the humidity in the lab. They had to control for that. He said the genomics community has also tightened the standard for a “significant” result, precisely to overcome the problem of statistical flukes being mistaken for discoveries.
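The arithmetic behind that tightened standard is straightforward. Here is a back-of-the-envelope sketch in Python; the one-million-variant count is a round illustrative figure, while 5×10⁻⁸ is the “genome-wide significance” threshold the field settled on.

```python
# Why genomics tightened its cutoff: at p < 0.05, a scan of ~1,000,000
# genetic variants would yield tens of thousands of statistical flukes.
# Splitting the 0.05 error budget across every test (a Bonferroni
# correction) gives the familiar 5e-8 genome-wide threshold.
alpha, n_tests = 0.05, 1_000_000   # n_tests is an illustrative round number

expected_flukes = alpha * n_tests   # ~50,000 false positives at p < 0.05
corrected = alpha / n_tests         # 5e-8 per-test threshold

print(f"expected flukes at p < 0.05: {expected_flukes:,.0f}")
print(f"corrected per-test threshold: {corrected:.0e}")
```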

BILL O’LEARY/THE WASHINGTON POST: Brian Nosek, left, and Jeff Spies are co-founders of the Center for Open Science, which designs software that lets researchers share data. COS plans to help scientists register experiments in advance.
