Business Day

States put the screws on social networks over self-censorship

- Leonid Bershidsky

The pressure on social networks to censor the content that appears on them just won’t cease, and the networks are bending. Censorship, however, is not what users want. Nor is it technically possible, even if the platforms won’t admit it.

The EU is pushing Facebook, Twitter and other social networks to comply with member states’ hate speech laws. In the US, many in the media and on the losing side of the recent presidential campaign would like to see the platforms take action against fake news. Unlike in cases involving abuse of market dominance (the charge Google faces in Europe) or the release of users’ private data (over which Microsoft has fought the US government), the platform owners aren’t fighting back. In May, Facebook, Twitter, Google’s YouTube and Microsoft signed a code of conduct in which they promised to review most hate speech reports within 24 hours and remove content that they find illegal. The EU wasn’t content with that; Justice Commissioner Vera Jourova voiced dismay last week that only 40% of reports are reviewed within 24 hours, according to a compliance audit. Her tone was stringent. “If Facebook, YouTube, Twitter and Microsoft want to convince me and the ministers that the non-legislative approach can work, they will have to act quickly and make a strong effort in the coming months.”

This is a threat: Unless the social networks step up self-censorship, legislation will be passed to force them to comply.

In the US, where free speech is protected by the First Amendment, there’s no such urgency, but plenty of people are willing to offer well-meaning advice. For the most part, the demands have to do with having users flag offensive content and then reacting quickly to the complaints. That is a deeply flawed process, as Microsoft’s Kate Crawford and Cornell University’s Tarleton Gillespie explained in a 2014 paper:

“Disagreements about what is offensive or acceptable are inevitable when a diverse audience encounters shared cultural objects. From the providers’ point of view, these disagreements may be a nuisance, tacks under the tyres of an otherwise smoothly running vehicle. But they are also vital public negotiations. Controversial examples can become opportunities for substantive public debate: Why is a gay kiss more inappropriate than a straight one? Where is the line drawn between an angry political statement and a call to violence? What are the aesthetic and political judgments brought to bear on an image of a naked body — when is it ‘artistic’ versus ‘offensive’?”

Flagging has been gamed ever since it became available to users. Campaigns have been run to flag pro-Muslim content on YouTube as terrorist, pro-Ukrainian content on Facebook as reprehensibly anti-Russian (and vice versa), gay groups as offensive to Christians, and so on. In many cases, the social networks’ abuse teams have removed posts flagged by both sides so as not to alienate anyone. I see many of the bloggers I follow disappear for a few days due to bans imposed in such campaigns, then resurface and keep going until the abuse team is overwhelmed with new flags.

Still, regulators and well-wishers want the networks to make flagging easier and more prevalent — and then act on the complaints as quickly as possible. The tech firms can only comply by hiring more censors, inventing technological solutions and going to third parties for validation. Facebook CEO Mark Zuckerberg has said his company plans to use automation to take down offending content before users flag it and to work with outside groups to verify stories. The EU-dictated code of conduct encourages the latter scenario, too.

Yet given the current, primitive state of natural language processing, the automation will do more harm than good.

A new gimmick — a shared database of content removed by Facebook, Microsoft, Twitter and YouTube as terror-related — will only amplify the effect of errors made by each of the networks, and only until the posters start gaming it by making minor changes to their content.

As for outsourcing censorship, the partisan biases it brings into the process are too numerous to account for. A report from Morning Consult, a polling and research company, shows that 24% of people in the US believe the reader holds the most responsibility for preventing the spread of fake news; just 17% believe the social networks do.

Indeed, social network users are much better able to police their own feed than the network is to censor the staggering amount of content it carries. A reader is usually able to tell fake news from real news; in the Morning Consult survey, 55% of respondents said they had, on more than one occasion, started reading a story only to realise it was untrue. People have a social incentive not to share fake stories: They will be mocked by friends if they do. In many cases, people share fake news knowingly — because it confirms their biases or because they want to troll the subject of the fake story — but in these cases, Facebook and Twitter cannot expect a flag from these users, only from their equally emotional opponents.

Similar mechanics are involved in the spread of “hate speech,” terrorist recruitment videos and cyberbullying posts. It’s easy for a user to block those who spread this kind of content. Some won’t, however, because they’re not offended by it. Instead of signalling compliance, the networks should stand up and make a few simple points.

If a government has passed laws against hate speech, it should be able to enforce them with the large security apparatus at its disposal. It cannot delegate the policing to companies any more than it can outsource, say, the fight against terror.

If the governments admit they are unable to police the networks, how can they expect the private firms to be able to do it? In any case, in the Morning Consult survey, only a small minority wanted the government to prevent the spread of fake news. That reflects the average user’s attitude towards any government-enforced censorship, no matter how it is exercised.

In a free society, specific people are responsible for their thoughts and actions. Social platforms can provide the tools for getting rid of offensive or irrelevant posts and try to block bots, but they should not exercise some kind of misguided parental authority over their adult users.

The social networks may see taking a stand as an unnecessary risk given the legislative threats. Yet they may be better off fighting censorship now, before they’re choked by ever-increasing, unreasonable demands to step it up. / Bloomberg

Making a statement: Social network users are better able to police their own feeds than a network is to censor the huge amount of content it carries. /AFP Photo/Sunday Times
