The Guardian (USA)

Will fake news wreck the coming general election?

- Alex Hern

The next general election campaign has already started. We do not know the date of the vote, but we know that one is coming – and so do Britain’s political parties, which have been steadily stepping up their spending in the largely unregulated world of online advertising. In the last 90 days, the Tories have spent almost £100,000 on Facebook adverts, with the Brexit party spending even more: £107,000.

The fact that we even know those figures shows how far we’ve come in the past few years. In the face of government inaction (despite the Electoral Commission recommending better regulation of online campaigning a decade ago, nothing has changed), US technology firms have slowly been taking action. They’ve introduced transparency initiatives, begun to crack down on the most egregious disinformation and even successfully fought off a hostile state or two.

But “better” does not necessarily mean “good”. With another election – or referendum – apparently imminent in the UK, how effective are the platforms’ safeguards against interference and fake news? Which ones pass their MoT and which need to pull over for a hurried pit stop – or just get off the information superhighway for everyone’s safety?

Facebook

The world’s largest social network has significantly improved its efforts to fight election malpractice since the notorious failures of 2016. Under Nathaniel Gleicher, the former White House director for cybersecurity policy, the company has regularly taken action against “co-ordinated inauthentic behaviour”: the sort of organised professional trolling that Russia’s Internet Research Agency pioneered in the early 2010s and which was used to try to influence the 2016 American presidential election.

But Facebook’s standards for paid adverts are less clear. In August, a Guardian investigation uncovered a propaganda network run by Boris Johnson ally Lynton Crosby. The Australian PR guru ran adverts and Facebook pages that purported to be independent outlets reporting on particular topics, but were actually fronts for promoting the views of Crosby’s clients. When challenged on this, Facebook said the difference between banned inauthentic behaviour from Russians and permitted inauthentic behaviour from Australians was that the Australians used their real names on their accounts.

Even without exploitation of that sort of obvious loophole, however, Facebook is likely to be a battleground. The social network is already receiving thousands of pounds a week from British political parties; currently leading in spending is the Brexit party, with £19,600 of adverts in the last seven days. After Boris Johnson became leader of the Conservative party, its spending briefly rocketed, but not enough to beat the Brexit party’s ongoing splurge.

One major aspect of Facebook’s political advertising that has never been fixed by the company is the platform’s algorithmic push towards extremity. In the US presidential election, it emerged that Donald Trump’s team had paid less per advert, because their more inflammatory messages got far more reaction from Facebook users, which led the company’s algorithm to prioritise them, creating a virtuous circle that could still be exploited today.

Instagram

Facebook’s second social network has been steadily folded into the mothership over the past year. Its ad network is the same, its direct messages are being rewritten to be compatible with Facebook Messenger and a forthcoming rebranding will even see the entire social network dubbed “Instagram from Facebook”. For all the damage that integration could do to the consumer experience, it has one major positive aspect: giving Instagram access to the same tools Facebook has already built to try to prevent a repeat of 2016.

Last month, for instance, Instagram finally received access to fact-checking tools that Facebook launched in the US in December 2016. It’s a harder job to fact-check Instagram, because of the nature of the two sites. On Facebook, fake news frequently spreads in the form of links to claims made on low-quality news sites. That means that one fact-check can be reused every time someone reposts that same link, greatly reducing the spread of false claims.

Instagram, by contrast, sees much of its fake news spread in the form of images, screenshots and text captions, which are much harder to automatically find and flag. So for years, misinformation has flourished on the platform, albeit more about health and beauty tips than politics. Now that third-party fact-checkers are finally able to mark those claims as false, distribution should fall as a result.

YouTube

Cambridge Analytica was the best thing that ever happened to YouTube. In the spring of 2018, the video-sharing site was facing increasing scrutiny over the destructive behaviour spurred on by its algorithmic curation, autoplaying videos and advertising. Then the Observer broke the story of Facebook’s data scandal and the social network spent the year shooting itself in the foot, taking attention away from YouTube’s problems.

But that grace period has come to an end and YouTube hasn’t used the time to great effect. It has implemented some basic reforms: explicitly noting when channels are run by state propaganda arms, for instance, and announcing that it will remove content by politicians if it breaks the site’s rules (the opposite tack to Facebook’s recently stated policy to leave any content posted by politicians on the site, even if it’s damaging or harmful, due to its inherent newsworthiness).

But some efforts look half-hearted at best. YouTube’s flagship anti-misinformation policy, for instance, has been to append links to Wikipedia underneath videos claiming that the moon landings were a hoax or that the Earth is flat. Even for those simple lies, the method seems unlikely to have much effect; for nuanced political fictions, YouTube is basically throwing its hands up in the air and asking users to add video responses that debunk them.

Google

If we put YouTube to one side, however, Google isn’t doing too badly. That’s partially because, since it closed Google+ earlier this year, YouTube is the closest thing to a social network that the company runs and social networks are doomed to be the frontline of any election campaign.

But the company has followed in Facebook’s footsteps with a political advertising archive, showing the tiny number of political adverts that have run on its platform in the past six months. The company has flagged just 400 adverts in the UK, with a total spend of £32,000 – and most of those appear to be adverts for the Romanian PNL, either mistakenly targeted at Brits or aiming to win over expat voters. The report is, however, detailed enough to show that Google has received £12,000 from Labour since 31 May, to run 34 adverts attacking Boris Johnson and a no-deal Brexit and promoting its own policies.

But in a recent study, campaign group Privacy International (PI) argued that the data was woefully inadequate. The company’s ad library, it said, provided “broad ranges of targeting information on some ads in some countries, instead of meaningful insight into how an ad or campaign was targeted”. Without that extra information, it’s impossible to know what the adverts are actually being used for – and how they might be affecting democracy.

Other questions are harder to answer. Does the company’s search implicitly promote particular views, for instance? Are fake news purveyors being boosted by credulous algorithms? Unlike with social networks, there are no public metrics to measure this easily, so Google could be doing much more damage than anyone realises.

Twitter

Twitter’s effect on elections is somewhat more direct than most other social networks’, because the news media are essentially addicted to the app. As a result, a comparatively small user base has an outsize influence on the national discourse, making the site an attractive target.

Like Facebook, Twitter has made great progress in identifying and removing those who do try to flood the site with inauthentic behaviour. It’s also done so in a transparent way, releasing complete archives of excised material so that researchers and reporters can see exactly how nations like China, Iran and Russia have tried to steer the discourse.

But many questions remain, particularly over the extent of inauthentic behaviour from organised campaigns who aren’t backed by a foreign state, but are, for example, a co-ordinated bunch of US white supremacists; Twitter doesn’t publish transparency reports when that sort of campaign is removed from the site. As a result, researchers spend a substantial amount of time trying to quantify the effect of “bots” (automated software) on the platform. Twitter responds that only it really knows the answer but it can’t share the truth, because the bots have privacy rights too.

The outcome is a noisy argument, where everyone accuses everyone else’s supporters of being Russian, bots or Russian bots, and discourse crumbles. It seems unlikely that Twitter will be able to do anything about that in time for an election. It’s unclear if it even wants to. Twitter once described itself as the “free-speech wing of the free-speech party” in Silicon Valley and for years, that approach left the company far behind its competitors in what is now termed “platform health” – combating abuse and threats of violence.

More generally, Twitter hasn’t kept up with the state of the industry on electoral transparency. PI’s study found numerous examples of political adverts that weren’t disclosed anywhere else on the site. The company does have a specific definition of “political adverts”, but has never yet applied it to UK elections, although it said it is “working to support UK elections and other EU member states’ national elections throughout the next year”.

Snapchat

It might not be the first place you think of when it comes to political campaigning, but Snapchat is gearing up for information warfare just in case. The company recently released its political advertising archive – a feature that is fast becoming non-optional for online advertisers – and while it is US-only, it seems likely it will make it to the UK in time for an election.

And Snapchat has always been proud of the fact that, when it comes to news media, it is explicitly a publisher, not a platform. News on Snapchat is selected by the company, posted to special channels, and subject to the same quality controls as traditional broadcast media: the company voluntarily applies FCC standards to the content it broadcasts. To be sure, most of it is stuff the Daily Mail wouldn’t even touch for its website’s notorious sidebar of shame, but it’s the thought that counts.

WhatsApp

With its end-to-end encryption and lack of broadcast tools, Facebook-owned WhatsApp is the most opaque major platform. We can only judge it by its effects, which are not positive. It has been credibly blamed for huge waves of disinformation sweeping India, leading to numerous deaths in rumour-driven lynchings. In Brazil, campaigners for all parties, but particularly far-right candidate Jair Bolsonaro, used the service, sending up to 300,000 messages at a time to many of the country’s 120m WhatsApp users.

WhatsApp argues that it is limited in how it can fight abuse by the privacy features that make it so desirable to activists and campaigners worldwide. Unlike conventional social networks, the company cannot read the contents of messages, meaning it can do much less to limit spam and misinformation. Instead, it has focused on making it harder to forward messages to hundreds of people at a time, clearly marking information as forwarded – and not originated by the immediate sender – and limiting the use of broadcast lists.

Illustration by James Melaugh.
Fake news spread across Facebook during the 2016 EU referendum.
