Gulf News

Why digital censorship is a risky undertaking

Facebook, Twitter, Apple and other companies routinely silence voices in marginalised communities that struggle to be heard

By David Greene

David Greene is the civil liberties director and senior staff attorney for the Electronic Frontier Foundation.

There is finally a public debate about the big internet platforms policing content and suspending accounts. But it’s a serious mistake to frame the debate without mentioning the thousands of moderation decisions that have been made by such online giants as Apple, Facebook, Google-owned YouTube and Spotify.

Internet companies have removed millions of posts and images over the past decade and suspended hundreds, perhaps thousands, of user accounts. These silenced voices span the political spectrum and the globe: Women discussing online harassment, ads featuring crucifixes, black and Muslim activists reposting messages they received, trans models and indigenous women. Platforms have taken down documentation of war crimes in Syria, Myanmar and Kashmir, arrests in North Dakota and police brutality across the United States.

We should be extremely careful before rushing to embrace an internet that is moderated by a few private companies by default, one where the platforms that control so much public discourse routinely remove posts and deactivate accounts because of objections to the content. Once systems like content moderation become the norm, those in power inevitably exploit them. Time and time again, platforms have capitulated to censorship demands from regimes, and powerful actors have manipulated flagging procedures to effectively censor their political opponents. Given this practical reality, and the sad history of political censorship in the US, let’s not cheer one decision that we might agree with.

Even beyond content moderation’s vulnerability to censorship, the moderating process itself, whether undertaken by humans or, increasingly, by software using machine-learning algorithms, is extremely difficult. Awful mistakes are commonplace and rules are applied unevenly.

Facebook, Twitter, Apple and other companies routinely silence voices in marginalised communities around the world that struggle to be heard in the first place, replicating their offline repression.

There have been several worthy efforts to articulate a human rights framing for content moderation. One framework, which the organisation where I work, the Electronic Frontier Foundation, played a part in formulating, is found in the Santa Clara Principles. These principles advance three key goals: Numbers (companies should publish the number of posts removed and accounts suspended); notice (companies should provide notice and an explanation to each user whose content is removed); and appeal (companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension).

David Kaye, the special rapporteur for the United Nations on the promotion and protection of the right to free expression, recommended in a recent report that private companies should as a routine matter consider the impact that content moderation policies have on human rights. He also recommended that governments not pressure private companies to implement policies that interfere with people’s right to free expression online.

The power that these platforms have over the online public sphere should worry all of us, no matter whether we agree or disagree with a given content decision. A decision by any one of them has a huge effect. Even worse, if other companies move in lock step, a speaker may effectivel­y be forced offline.

Transparency in these companies’ content-moderation decisions is essential. We must demand that they apply their rules consistently and provide clear, accessible avenues for meaningful appeal.
