Hindustan Times (Patiala)

Is there a way to counter fake news on WhatsApp?

New research suggests that corrective messages may need to be frequent rather than sourced or sophisticated

SUMITRA BADRINATHAN and SIMON CHAUCHARD

Sumitra Badrinathan is a PhD student in political science at the University of Pennsylvania, US; Simon Chauchard is an assistant professor of political science at Leiden University, The Netherlands. The views expressed are personal

If you were on WhatsApp in the months leading up to the 2019 general election in India, you likely came across a story claiming that cow urine cures cancer. Or perhaps you were forwarded a photo of electronic voting machines (EVMs), with a message stating they were being hacked. If you are a regular WhatsApp user, you have almost certainly borne witness to a seemingly unending barrage of misinformation. And this misinformation is abundant. India is now one of the largest and fastest-growing markets for digital consumers, with 560 million Internet subscribers in 2018, second only to China. However, the Internet, and WhatsApp in particular, is a fruitful environment for the massive diffusion of unverified information and rumours.

Survey data measured online with a sample of over 5,000 Facebook users shows that belief in misinformation and rumours can be fairly high. More than 75% of the sample said that polygamy is very common in the Muslim population (this is inaccurate). A similar proportion stated that they believed drinking gaumutra (cow urine) can help build one's immune system (also not true). Survey data measured in person with a sample of 1,200 respondents paints a similar picture. About 48% of the sample believed in the power of gaumutra to cure terminal illnesses, while about 45% of the sample believed India hasn't experienced a single terror attack since 2014 (you guessed it — not true).

To combat misinformation disseminated through the platform, WhatsApp has encouraged user-driven fact-checking. WhatsApp bought full-page advertisements in multiple Indian dailies ahead of the 2019 elections, exhorting users to fact-check fake news. To what extent should we expect such a strategy, so far the only known strategy to correct misinformation on encrypted discussion apps, to be effective?

In June 2019, we ran a study to test whether user-driven corrections work to counteract fake news on WhatsApp. Participants in our study saw different versions of a fictitious, but realistic, WhatsApp group chat screenshot. In it, a first user posts a rumour, which a second user subsequently corrects. The corrections in different versions varied in their level of sophistication. In some cases, the second user cited a source and referred to an investigation by that source to correct the first user. These sources were also varied: the "correcting" user might, for instance, refer to an authoritative source, such as the Election Commission of India, to refute a claim about EVM hacking, or instead cite a fact-checking service, such as Alt News. In other cases, the attempt to correct was extremely minimal, with the second user merely stating a phrase such as "I don't think that's true, bro", and providing no evidence as to why. Importantly, all participants who received a correction were compared to a control group in which the second user did not attempt to correct the first user's information.

Results from the study show that participants who were exposed to a correction of any kind were significantly less likely to believe the false information posted by the first user, relative to those who did not receive a correction. But interestingly, the results also demonstrate that the degree of sophistication of the correction made no difference. Simply put, unsourced corrections such as "I don't think that's true, bro" achieved an effect comparable to that of corrections based on fact-checking by credible sources.

These findings have important implications. They suggest that corrective messages may need to be frequent rather than sourced or sophisticated, and that merely signalling a problem with the credibility of a claim (regardless of how detailed this signalling is) may go a long way in reducing overall rates of misinformation. For users, these results imply that expressing doubts in a group chat setting should be encouraged; for encrypted chat apps such as WhatsApp, they imply that creating a simple option to express doubt may be a complementary, cost-effective way to limit rates of belief in rumours.

However, these results must be interpreted with caution. Given that any expression of incredulity about a false claim leads to a reduction in self-reported belief, expressing doubt may also reduce the credibility of a true story. The hyperpartisan, polarised world of misinformation that Indians now operate in suggests that malevolent political actors frequently have incentives to "correct" true information. The ease of use of platforms such as WhatsApp, coupled with cheap data rates and increased connectivity, has led to an explosion in social media usage, especially among first-time users in India. Paradoxically, this leap in technological development has also meant that the novelty and unfamiliarity of the medium make users more vulnerable to the information they receive online. Our study underscores the need for more empirical testing to understand the vulnerabilities of individuals, institutions, and society to manipulation by misinformation and rumours. It also suggests that the fake news problem is here to stay, and that evaluating the effectiveness of innovative solutions to combat misinformation is a pressing priority.

(Photo: ISTOCK) The fake news problem is here to stay. Combating it is a pressing priority
