Mail & Guardian

Our appetite for fake news is a global threat to health

- Junaid Nabi

The most frustrating part of my job as a public health scientist is the spread of false information — usually online — that overrides years of empirical research. It is difficult enough for doctors to counter medical falsehoods in face-to-face conversations with patients. It becomes even harder to do so when such fakery is transmitted via the internet.

I recently witnessed this pattern first-hand in Kashmir, where I was raised. There, parents of young children trusted videos and messages on Facebook, YouTube or WhatsApp spreading false rumours that modern medications and vaccines were harmful, or even that they were funded by foreigners with ulterior motives. Discussions with local colleagues in paediatrics revealed how a single video or instant message with false information was enough to dissuade parents from believing in medical therapies.

Physicians in other parts of India and Pakistan have reported numerous cases in which parents, many of them well educated, refuse polio vaccinations for their children. Reports that the CIA once organised a fake vaccination drive to spy on militants in Pakistan have added to mistrust in the region. Given the high stakes involved, states sometimes resort to extreme measures, such as arresting parents, to ensure that vulnerable children are vaccinated.

This is just one regional example of the global threat that online misinformation poses to public health.

In the United States, a recent study in the American Journal of Public Health reported how Twitter bots and Russian trolls have skewed the public debate on vaccine effectiveness. Having examined 1.8-million tweets over a three-year period from 2014 to 2017, the study authors concluded that the purpose of these automated accounts was to create enough anti-vaccine content online to develop a false equivalence in the vaccination debate.

Such misinformation programmes succeed for a reason. In March last year, researchers from the Massachusetts Institute of Technology reported that false stories on Twitter spread significantly faster than true ones. Their analysis revealed how the human need for novelty, and the information’s ability to evoke an emotional response, are vital in spreading false stories.

The internet amplifies the damage caused by these “alternative facts”, because it can disseminate them at huge scale and speed — a few fake or troll accounts are enough to spread misinformation to millions. And once it spreads, it is almost impossible to retract.

The role of Twitter bots and trolls in the 2016 US elections and the United Kingdom’s Brexit vote is clear. Now they have affected global health as well. If we don’t take robust and co-ordinated steps to address this alarming trend, we may lose out on a century’s worth of successes in health communication and vaccination, both of which depend on public trust.

We can take several steps to start reversing the damage.

For starters, health officials and experts in developed and developing countries need to understand how this online misinformation is eroding public trust in health programmes.

They also need to work actively with global social media giants such as Facebook, Twitter and Google, as well as major regional players including WeChat and Viber. This means working in tandem to create guidelines and protocols for how information of public interest can be disseminated safely.

In addition, social media companies can work with scientists to identify patterns and behaviours of spam accounts that try to disseminate false information about important public health issues. Twitter, for example, has already started using machine-learning technology to limit activity from spam accounts, bots and trolls.

More rigorous verification of accounts, from the moment of signing up, will also be a powerful deterrent to the further expansion of automated accounts. Two-factor authentication, using an email address or phone number when signing up, is a prudent start. CAPTCHA technology requiring users to identify images of cars or street signs — something humans can do better than machines (for now, at least) — can also limit automated sign-ups and bot activity.

These precautions are unlikely to infringe on any individual’s right to voice an opinion. Public health officials must err on the side of caution when weighing free-speech rights against outright falsehoods that endanger public welfare. Abusing the anonymity provided by the internet, spam accounts, bots and trolls serve to disrupt and pollute available information and confuse people. Taking prudent action to avert situations in which lives are at stake is a moral imperative.

Global public health took huge strides forward during the 20th century. Further progress in the 21st will come not only from ground-breaking research and community work, but also from online engagement.

The next battle for global health is likely to be fought on the internet. And by acting quickly enough to defeat the trolls, we can prevent avoidable illnesses and deaths around the world. — © Project Syndicate

Junaid Nabi is a public health researcher at Brigham and Women’s Hospital and Harvard Medical School, Boston. The opinions expressed in this article are his own.
