The Herald (South Africa)

Fake academic papers on the rise: why they’re a danger and how to stop them

- LEX BOUTER

In the 1800s, British colonists in India set about trying to reduce the cobra population, which was making life and trade very difficult in Delhi.

They began to pay a bounty for dead cobras. The strategy quickly resulted in the widespread breeding of cobras for cash.

This danger of unintended consequences is sometimes referred to as the “cobra effect”.

It can also be well summed up by Goodhart’s Law, named after British economist Charles Goodhart. He stated that when a measure becomes a target, it ceases to be a good measure.

The cobra effect has taken root in the world of research.

The “publish or perish” culture, which values publications and citations above all, has resulted in its own myriad of “cobra breeding programmes”.

That includes widespread questionable research practices, like playing up the impact of research findings to make work more attractive to publishers.

It’s also led to the rise of paper mills: criminal organisations that sell academic authorship. A report on the subject describes paper mills as “(the) process by which manufactured manuscripts are submitted to a journal for a fee on behalf of researchers with the purpose of providing an easy publication for them, or to offer authorship for sale”.

These fake papers have serious consequences for research and its impact on society. Not all fake papers are retracted. And even those that are retracted often still make their way into systematic literature reviews which are, in turn, used to draw up policy guidelines, clinical guidelines and funding agendas.

Paper mills rely on the desperation of researchers — often young, often overworked, often on the peripheries of academia struggling to overcome the high obstacles to entry — to fuel their business model.

They are frighteningly successful. The website of one such company based in Latvia advertises the publication of more than 12,650 articles since its launch in 2012.

In an analysis of just two journals, jointly conducted by the Committee on Publication Ethics and the International Association of Scientific, Technical and Medical Publishers, more than half of the 3,440 article submissions over a two-year period were found to be fake.

It is estimated that all journals, irrespective of discipline, experience a steeply rising number of fake paper submissions.

Currently the rate is about 2%. That may sound small. But scholarly journals publish millions of articles a year, so even a 2% share translates into tens of thousands of fake papers. Each of these can seriously damage patients, society or nature when applied in practice.

Many individuals and organisations are fighting back against paper mills.

The scientific community is lucky enough to have several “fake paper detectives” who volunteer their time to root out fake papers from the literature. Elizabeth Bik, for instance, is a Dutch microbiologist turned science integrity consultant.

She dedicates much of her time to searching the biomedical literature for manipulated photographic images or plagiarised text.

Organisations such as PubPeer and Retraction Watch also play vital roles in flagging fake papers and pressuring publishers to retract them.

These and other initiatives, like the STM Integrity Hub and United2Act, in which publishers collaborate with other stakeholders, are trying to make a difference.

But this is a deeply ingrained problem.

The use of generative artificial intelligence like ChatGPT will help the detectives – but it will also likely result in more fake papers, which are now easier to produce and more difficult, or even impossible, to detect.

The key to changing this culture is a switch in researcher assessment.

Researchers must be acknowledged and rewarded for responsible research practices: a focus on transparency and accountability, high quality teaching, good supervision, and excellent peer review.

This will extend the scope of activities that yield “career points” and shift the emphasis of assessment from quantity to quality.

Fortunately, several initiatives and strategies already exist to focus on a balanced set of performance indicators that matter.

The San Francisco Declaration on Research Assessment, established in 2012, calls on the research community to recognise and reward various research outputs, beyond just publication.

The Hong Kong Principles, formulated and endorsed at the 6th World Conference on Research Integrity in 2019, encourage research evaluations that incentivise responsible research practices while minimising perverse incentives that drive practices like purchasing authorship or falsifying data.

These issues, as well as others related to protecting the integrity of research and building trust in it, will also be discussed during the 8th World Conference on Research Integrity in Athens, Greece in June this year.

Practices under the umbrella of “Open Science” will be pivotal to making the research process more transparent and researchers more accountable. Open Science is the collective term for a movement of initiatives to make scholarly research more transparent and equitable, ranging from open access publication to citizen science.

Peer review is a case in point: it is essential but largely invisible work, and the person doing it receives no credit or reward.

It’s crucial that this sort of “invisible” work in academia be recognised, celebrated and included among the criteria for promotion.

This can contribute substantially to detecting questionable research practices (or worse) before publication.

It will incentivise good peer review, so fewer suspect articles pass through the process, and it will also open more paths to success in academia – thus breaking up the toxic publish-or-perish culture.

● Lex Bouter is Professor of Methodology and Integrity, Vrije Universiteit Amsterdam. This article first appeared in The Conversation.
