Khaleej Times

Digital media has few answers to fake news

— Project Syndicate

Ever since the US presidential election highlighted the vulnerability of digital channels to purveyors of “fake news,” the debate over how to counter disinformation has not gone away. And if there is one thing that the search for solutions has made clear, it is that there is no silver bullet.

Instead of one comprehensive fix, what is needed are steps that address the problem from multiple angles. The modern information ecosystem is like a Rubik’s Cube, where a different move is required to “solve” each individual square. When it comes to digital disinformation, at least four dimensions must be considered.

First, who is sharing the disinformation? Disinformation spread by foreign actors can be treated very differently from disinformation spread by citizens, particularly in the US, with its unparalleled free-speech protections and relatively strict rules on foreign interference.

Second, why is the disinformation being shared? “Misinformation” — inaccurate information that is spread unintentionally — is different from disinformation or propaganda, which are spread deliberately. The unwitting spread of false information by well-intentioned actors could be addressed, at least partly, through news literacy campaigns or fact-checking initiatives. Stopping bad actors from purposely sharing such information is more complicated.

Third, how is the disinformation being shared? If actors are sharing content via social media, changes to platforms’ policies and/or government regulation could be sufficient. But such changes must be specific.

For example, to stop bots from being used to amplify content artificially, platforms may require that users disclose their real identities (though this would be problematic in authoritarian regimes, where anonymity protects democracy advocates). To limit sophisticated microtargeting — the use of consumer data and demographics to predict individuals’ interests and behaviours, in order to influence their thoughts or actions — platforms may have to change their data-sharing and privacy policies, as well as implement new advertising rules.

This is a kind of arms race. Bad actors will quickly circumvent any changes that digital platforms implement. New techniques — such as using blockchain to help authenticate original photographs — will continually be required. But there is little doubt that digital platforms are better equipped to adapt their policies regularly than government regulators are.

Yet digital platforms cannot manage disinformation alone, not least because, by some estimates, social media account for only around 40 per cent of traffic to the most egregious “fake news” sites, with the other 60 per cent arriving “organically” or via “dark social” (such as messaging apps or e-mail). These pathways are difficult to manage.

The final — and perhaps the most important — dimension of the disinformation puzzle is: what is being shared? Experts tend to focus on “fake” content, which is easier to identify. But digital platforms naturally have incentives to curb such content, simply because people generally do not want to look foolish by sharing altogether false stories.

People do, however, like to read and share information that aligns with their perspectives; they like it even more if it triggers strong emotions — especially outrage.

Such content is not just polarising; it is often misleading and incendiary, and there are signs that it can undermine constructive democratic discourse. But where is the line between dangerous disagreement based on distortion and vigorous political debate driven by conflicting worldviews?

Even if these ethical questions were answered, identifying problematic content at scale poses serious practical challenges. Many of the most worrisome examples of disinformation have focused not on any particular election or candidate, but on exploiting societal divisions along, say, racial lines. And they often are not purchased ads, so they would not be addressed by new rules regulating campaign advertising, such as the Honest Ads Act endorsed by Facebook and Twitter.

With the right insights, and a commitment to fundamental, if incremental, change, the social and political impact of digital platforms can be made safe for today’s beleaguered democracies.

