Bangkok Post

No magic bullet to solve the fake news conundrum

- KELLY BORN
Kelly Born is a programme officer for the Madison Initiative at the William and Flora Hewlett Foundation.

Ever since the November 2016 US presidential election highlighted the vulnerability of digital channels as purveyors of “fake news”, the debate over how to counter disinformation has not gone away. We have come a long way in the eight months since Facebook, Google, and Twitter executives appeared before Congress to answer questions about how Russian sources exploited their platforms to influence the election. But if there is one thing that has been made clear, it’s that there is no silver bullet.

Instead of one comprehensive fix, what is needed are steps that address the problem from multiple angles. The modern information ecosystem is like a Rubik’s Cube, where a different move is required to “solve” each individual square. When it comes to digital disinformation, at least four dimensions must be considered.

First, who is sharing the disinformation? Disinformation spread by foreign actors can be treated very differently — both legally and normatively — than disinformation spread by citizens, particularly in the United States, with its unparalleled free-speech protections and relatively strict rules on foreign interference.

In the US, less sophisticated cases of foreign intervention might be addressed with a mix of natural-language processing and geo-locating techniques to identify actors working from outside the country. Where platform-level changes fail, broader government interventions, such as general sanctions, could be employed.

Second, why is the disinformation being shared? “Misinformation” — inaccurate information that is spread unintentionally — is quite different from disinformation or propaganda, which are spread deliberately. The unwitting sharing of false information by well-intentioned actors could be addressed, at least partly, through news literacy campaigns or fact-checking initiatives. Stopping bad actors from purposely sharing such information is more complicated, and depends on their specific goals.

For example, for those who are motivated by profit — like the now-infamous Macedonian teens who earned thousands of dollars running “fake news” sites — new ad policies that disrupt revenue models may help. But such policies would not stop those who share disinformation for political or social reasons. If those actors are operating as part of organised networks, interventions may need to disrupt the entire network to be effective.

Third, how is the disinformation being shared? If actors are sharing content via social media, changes to platforms’ policies and/or government regulation could be sufficient. But such changes must be specific.

For example, to stop bots from being used to amplify content artificially, platforms may require that users disclose their real identities (though this would be problematic in authoritarian regimes where anonymity protects democracy advocates). To limit sophisticated microtargeting — the use of consumer data and demographics to predict individuals’ interests and behaviours, in order to influence their thoughts or actions — platforms may have to change their data-sharing and privacy policies, as well as implement new advertising rules.

For example, rather than giving advertisers the opportunity to access 2,300 likely “Jew Haters” for just $30, platforms should — and, in some cases, now do — disclose the targets of political ads, prohibit certain targeting criteria, or limit how small a target group may be.

This is a kind of arms race. Bad actors will quickly circumvent any changes that digital platforms implement. New techniques — such as using blockchain to help authenticate original photographs — will continually be required. But there is little doubt that digital platforms are better equipped to adapt their policies regularly than government regulators are.

Yet digital platforms cannot manage disinformation alone, not least because, by some estimates, social media accounts for only around 40% of traffic to the most egregious “fake news” sites, with the other 60% arriving “organically” or via “dark social” (such as messaging or emails between friends). These pathways are more difficult to manage.

The final — and perhaps the most important — dimension of the disinformation puzzle is: what is being shared? Experts tend to focus on entirely “fake” content, which is easier to identify. But digital platforms naturally have incentives to curb such content, simply because people generally do not want to look foolish by sharing altogether false stories.

People do, however, like to read and share information that aligns with their perspectives; they like it even more if it triggers strong emotions — especially outrage. Because users engage heavily with this type of content, digital platforms have an incentive to showcase it.

Such content is not just polarising; it is often misleading and incendiary, and there are signs that it can undermine constructive democratic discourse. But where is the line between dangerous disagreement based on distortion and vigorous political debate driven by conflicting worldviews? And who, if anybody, should draw it?

Even if these ethical questions were answered, identifying problematic content at scale confronts serious practical challenges. Many of the most worrisome examples of disinformation have been focused not on any particular election or candidate, but instead on exploiting societal divisions along, say, racial lines. And they often are not purchased. As a result, they would not be addressed by new rules to regulate campaign advertising, such as the Honest Ads Act that has been endorsed by both Facebook and Twitter.

If the solutions to disinformation are unclear in the US, the situation is even thornier in the international context, where the problem is even more decentralised and opaque — another reason why no overarching, comprehensive solution is possible.

But while each measure addresses only a narrow slice of the problem — improved ad policies may solve 5% of it, different micro-targeting policies perhaps 20% — taken together, these steps can add up to real progress. The end result will be an information environment that, while imperfect, includes only a relatively small amount of problematic content — an amount that is unavoidable in democratic societies that value free speech.

The good news is that experts will now have access to privacy-protected data from Facebook to help them understand (and improve) the platform’s impact on elections — and democracies — around the world. One hopes that other digital platforms — such as Google, Twitter, Reddit, and Tumblr — will follow suit. With the right insights and a commitment to fundamental, if incremental, change, the social and political impact of digital platforms can be made safe — or at least safer — for today’s beleaguered democracies.

A woman manages her Facebook account in Berlin. Social media contributes only around 40% of traffic to the most egregious ‘fake news’ sites. (Photo: AFP)
