Khaleej Times

Worse news — we can’t really get rid of fake news

- Joseph S. Nye

The term “fake news” has become an epithet that US President Donald Trump attaches to any unfavorable story. But it is also an analytical term that describes deliberate disinformation presented in the form of a conventional news report.

The problem is not completely novel. In 1925, Harper’s Magazine published an article about the dangers of “fake news.” But today two-thirds of American adults get some of their news from social media, which rest on a business model that lends itself to outside manipulation and where algorithms can easily be gamed for profit or malign purposes.

Whether amateur, criminal, or governmental, many organizations – both domestic and foreign – are skilled at reverse engineering how tech platforms parse information. To give Russia credit, it was one of the first governments to understand how to weaponize social media and to use America’s own companies against it.

Overwhelmed with the sheer volume of information available online, people find it difficult to know what to focus on. Attention, rather than information, becomes the scarce resource to capture. Big data and artificial intelligence allow micro-targeting of communication so that the information people receive is limited to a “filter bubble” of the like-minded.

The “free” services offered by social media are based on a profit model in which users’ information and attention are actually the products, which are sold to advertisers. Algorithms are designed to learn what keeps users engaged so that they can be served more ads and produce more revenue.

Emotions such as outrage stimulate engagement, and news that is outrageous but false has been shown to engage more viewers than accurate news. One study found that such falsehoods on Twitter were 70% more likely to be retweeted than accurate news. Likewise, a study of demonstrations in Germany earlier this year found that YouTube’s algorithm systematically directed users toward extremist content because that was where the “clicks” and revenue were greatest. Fact checking by conventional news media is often unable to keep up, and sometimes can even be counterproductive by drawing more attention to the falsehood.

By its nature, the social-media profit model can be weaponized by states and non-state actors alike. Recently, Facebook has been under heavy criticism for its cavalier record on protecting users’ privacy. CEO Mark Zuckerberg admitted that in 2016, Facebook was “not prepared for the coordinated information operations we regularly face.” The company had, however, “learned a lot since then and have developed sophisticated systems that combine technology and people to prevent election interference on our services.”

Such efforts include automated programs to find and remove fake accounts; featuring Facebook pages that spread disinformation less prominently than in the past; issuing a transparency report on the number of false accounts removed; verifying the nationality of those who place political advertisements; hiring 10,000 additional people to work on security; and improving coordination with law enforcement and other companies to address suspicious activity. But the problem is not solved.

An arms race will continue between the social media companies and the states and non-state actors who invest in ways to exploit their systems. Technological solutions like artificial intelligence are not a silver bullet. Because it is often more sensational and outrageous, fake news travels farther and faster than real news. False information on Twitter is retweeted by many more people and far more rapidly than true information, and repeating it, even in a fact-checking context, may increase an individual’s likelihood of accepting it as true. In preparing for the 2016 US presidential election, the Internet Research Agency in St. Petersburg, Russia, spent more than a year creating dozens of social media accounts masquerading as local American news outlets. Sometimes the reports favored a candidate, but often they were designed simply to give an impression of chaos and disgust with democracy, and to suppress voter turnout.

When Congress passed the Communications Decency Act in 1996, then-infant social media companies were treated as neutral telecoms providers that enabled customers to interact with one another. But this model is clearly outdated. Under political pressure, the major companies have begun to police their networks more carefully and take down obvious fakes, including those propagated by botnets.

But imposing limits on free speech, protected by the First Amendment of the US Constitution, raises difficult practical problems. While machines and non-US actors have no First Amendment rights (and private companies are not bound by the First Amendment in any case), abhorrent domestic groups and individuals do, and they can serve as intermediaries for foreign influencers.

In any case, the damage done by foreign actors may be less than the damage we do to ourselves. The problem of fake news and foreign impersonation of real news sources is difficult to resolve because it involves trade-offs among our important values. The social media companies, wary of coming under attack for censorship, want to avoid regulation by legislators who criticize them for both sins of omission and commission.

Experience from European elections suggests that investigative journalism and alerting the public in advance can help inoculate voters against disinformation campaigns. But the battle with fake news is likely to remain a cat-and-mouse game between its purveyors and the companies whose platforms they exploit. It will become part of the background noise of elections everywhere. Constant vigilance will be the price of protecting our democracies.