Business Standard

Democracy never faced a threat like Facebook

- LEONID BERSHIDSKY

The social media giants based in the US may soon face a new attack in Europe: There’s a perception among activists and officials that the basis of their business model — targeted advertising — can be a threat to democracy.

In a speech on Wednesday, Commissioner Margrethe Vestager — who, as the top European Union antitrust official, has been the nemesis of US tech companies such as Google and Apple — laid out her problems with the way social networking changes people’s political behaviour. One of her complaints is familiar and much-discussed: Facebook and its peers tend to sort people into political and ideological filter bubbles and silos, destroying, as Vestager sees it, the chances of meaningful debate. The other has received less media attention. It concerns political ads and, more generally, campaigns’ social messaging. As Vestager put it: “If political ads only appear on the timelines of certain voters, then how can we all debate the issues that they raise? How can other parties and the media do their job of challenging those claims? How can we even know what mandate an election has given, if the promises that voters relied on were made in private?”

There are reasonable arguments against both of Vestager’s complaints. Long before social networks existed, people grouped together on the basis of compatible views; confirmation biases, too, are as old as human society. The social networks merely reflect reality and make it more palpable. As for targeted messages, old campaigning instruments, such as direct mailings and phone calls, also delivered private messages to potential voters, and the media usually parsed them — just as they parse modern campaign activities on social platforms. That job has actually become easier because everyone is on Facebook and Twitter.

Vestager, however, is on to something. The old tools allowed for rather generic targeting — say, by voting or campaign donation history. Modern campaigns try to target messages using people’s private data or even psychological profiles created on the basis of social network and browsing activity. That’s not necessarily effective, but it means certain voters get ads and messages that they wouldn’t have chosen to receive.

Imagine I’m a social media junkie for whom Facebook is the primary news source. I see a political ad because someone — or, most likely, an artificially intelligent entity — has profiled me in a certain way, not because I made a donation to a certain party or voted for a specific candidate in the last election. Unless another algorithm profiles me differently, I don’t see the other parties’ responses to the content with which I’ve been plied. I have no idea what the party that advertises to me has promised people in different target groups. I have less of an idea of the campaigns parties are running than if I watched TV like a 20th-century voter.

At the same time, Facebook doesn’t release any data about what campaigns do on its platform. In a country that hasn’t effectively removed campaign spending limits, as the US did with Citizens United, that makes it hard to check what campaigns spend on ads. Facebook’s position is that it’s the campaigns’ responsibility to follow their countries’ laws, and that a user has full control over which ads are shown to him or her. The former is irrelevant to the task of checking campaigns’ self-reporting. The latter is only true to a degree: On Facebook, you can opt out of certain ads, but algorithms will still decide how they will be replaced.

In the run-up to Thursday’s UK election, a group called Who Targets Me recruited 10,000 volunteers to install a browser extension that registers targeted messages, ranging from Facebook videos to Google search ads. The group calls them “dark ads” because they are so hard to monitor: They’ve been targeted to specific local constituencies, gender and age groups.

Last year’s US election led to pressure on the social networks to crack down on fake news stories and the bot networks that spread them. Facebook responded by introducing well-publicised mechanisms for reporting likely fake stories and having them fact-checked. During the recent French presidential election, it said it also suspended 30,000 fake accounts to stop them from spreading false stories. None of that really fixes the filter bubble problem — people will still believe what they want to believe, and if they mistrust mainstream media, they are likely to discount fact-checkers’ efforts, too. So the pressure is still on for a more pertinent response, but it’s not clear what that could be — short of having human editors remove stories deemed to be fake, something the networks will resist because it’s contrary to their self-perception as neutral platforms.

If a regulatory backlash starts against political targeting, though, it’s clear what the social networks might be required to do. Regulators could order them to disclose what messages campaigns are using and how much they are paying to circulate them. In an extreme scenario, they could even ban paid political advertising on social networks, arguing, as Vestager did in her speech, that politics is different from business, so rules for targeted messaging should be different to protect democracy.

In a manifesto earlier this year, Facebook founder Mark Zuckerberg wrote about moving relationships and social structures formed on the networks into the offline world. “These changes have been so fast that I’m not sure our democracy has caught up,” Vestager said in her speech. One can be sure European regulators will choose to slow down the development of Zuckerberg’s vision rather than rewrite campaigning rules to catch up with it.
