Reproducibility crisis ‘has been overblown’

Study claiming no evidence of crisis sparks debate on how to improve science. Rachael Pells reports

Times Higher Education - Front Page - Twitter: @timeshighered

The narrative of a “reproducibility crisis” in science has been overblown, according to a researcher whose claims have sparked fresh debate among scholars about the reliability of academic studies.

The reproducibility crisis narrative has come to dominate scientific debate in recent years, with about 90 per cent of respondents to a 2016 Nature survey agreeing that such a crisis existed, and more than 60 per cent blaming it on selective reporting and pressures to publish.

These responses were driven by studies that found, for example, that researchers had failed to replicate 47 out of 53 cancer papers, and that the results of fewer than half of key psychology and economics papers could be replicated.

However, a review of more than 40 recent studies on reproducibility has led Daniele Fanelli, a fellow in methodology at the London School of Economics, to conclude that, although misconduct and questionable research methods occur at “relatively small” frequencies, there is “no evidence” that the issue is growing.

Writing in Proceedings of the National Academy of Sciences, Dr Fanelli highlights that some recent replication studies have produced higher rates of reproducibility and says that it is unfair to set more store by the results of early exploratory studies than by papers that build on previous studies and are therefore more reliable.

Reproducibility also appears to vary heavily by subfield, methodology and the expertise of the researchers attempting to replicate findings, he says.

The number of yearly findings of scientific misconduct issued by the US Office of Research Integrity has not increased, nor has the proportion of all investigations resulting in such a finding, based on data for 1994 to 2011, Dr Fanelli says. And, he adds, although the number of retractions being issued by journals has risen, the number of retractions per retracting journal has not.

Dr Fanelli questions whether pressure to publish can be blamed, highlighting that researchers who publish often and in journals with high impact factors are less likely to produce papers that are retracted.

He concludes that science “cannot be said to be undergoing a ‘reproducibility crisis’, at least not in the sense that it is no longer reliable due to a pervasive and growing problem with findings that are fabricated, falsified, biased, underpowered, selected, and irreproducible. While these problems certainly exist and need to be tackled, evidence does not suggest that they undermine the scientific enterprise as a whole.”

Dr Fanelli told Times Higher Education that improving “how we conduct and communicate research…is an absolute priority [but] we don’t need to believe that there is a crisis to justify these efforts”.

“If the belief is incorrect, then we should revise it as soon as possible. If we don’t, then we risk misdirecting our efforts, ironically producing distorted and wasteful evidence in meta-research itself,” he said.

Dr Fanelli’s arguments have sparked debate among scientists.

Christopher Chambers, professor of cognitive neuroscience at Cardiff University, said that he chooses to “steer away” from the term “crisis”. “[It] is emotional and polarising, and so leads to distracting and frankly rather pointless arguments, like this one, about what to call it, rather than solving the problem,” he said.

Nevertheless, Professor Chambers continued, the majority of life and social sciences studies were “not replicable”, and fixing this should be a priority. “Reproducibility isn’t optional; it’s central to the scientific method. If we abandon reproducibility, we abandon science,” he said.

Marcus Munafò, professor of biological psychology at the University of Bristol, said that whether the problem of reproducibility was worse than in the past was “difficult to determine, and not necessarily that relevant”. But he agreed that there were vital issues to address.

“Much of the problem stems from the incentive structures that we work within – the things that are good for scientists, like getting published, might not be the things that are good for science,” he said. “While I wouldn’t describe where we are as a crisis, I certainly think there’s considerable scope for improvement.”

Malcolm Macleod, professor of neurology and translational neuroscience at the University of Edinburgh, said that scientists should be wary of complacency, however. “The crisis terminology came about at a time when researchers were urging people to take notice of what was going wrong in science,” he said. “To lose that completely would be a mistake.”

