Toronto Star

Who should be policing hate speech online?

NATASHA TUSIKOV

This week’s march of white supremacists in Charlottesville, Va., which culminated in the killing of civil-rights activist Heather Heyer by an apparent Nazi sympathizer, brought long-simmering U.S. racist politics into public view.

In doing so, it also raised the question of how we as a society should deal with hate speech, as well as the extent to which internet companies are becoming de facto regulators of online activity.

Following Heyer’s death and facing a public backlash, GoDaddy withdrew its domain services from the Daily Stormer, a white supremacist website, saying a post mocking her death violated its terms of service.

Google, which provided replacement domain services to the website, quickly followed suit. The Daily Stormer has since moved to the dark web, where it is more difficult to identify — and thereby pressure — its service providers.

Relying on internet companies to act as regulators is appealing because they can move swiftly and decisively to push undesirable actors and content out of sight of most people. When their actions target groups as dangerous and reprehensible as white supremacists, it is tempting to simply cheer this as a victory.

However, it also raises difficult questions about the extent to which we as a society are increasingly relying upon a handful of major internet companies to police a broad array of social problems, not all of which rise to the level of violent hate speech.

GoDaddy’s and Google’s actions are significant because they are not an isolated instance of large internet firms targeting bad actors. Rather, they reflect a growing trend of large, mostly U.S. internet companies acting as global regulators of online activity, raising questions about accountability and the arbitrary nature of their actions.

The appeal of relying on these companies to police bad behaviour is obvious. They have become go-to regulators for legislators around the world because their terms-of-service agreements give them significant latitude to remove any speech or ban any user they deem in violation of their rules.

Because these companies can work through their terms of service, government officials are calling upon them to address social problems ranging from illegal gambling and copyright infringement to child sexual abuse content, hate speech and “fake news.”

Crucially, however, in many cases these internet firms are removing content and terminating their services in the absence of specific legislative requirements — that is, “voluntarily” — and without any judicial process.

The Electronic Frontier Foundation, a U.S.-based digital-rights group, has critiqued such efforts as “shadow regulation” that can have the force of law, but not its transparency or accountability.

The violence in Charlottesville demonstrates again that companies respond to public criticism, especially in high-profile cases. But is rule by public protest, which at its worst is mob mentality in which one website or group is arbitrarily targeted while another is overlooked, how we want to govern the internet?

Industry-led enforcement campaigns also often lack rigorous accountability measures. Devolving enforcement responsibility to internet companies is useful for governments wishing to sidestep public demands for regulatory oversight. However, internet firms’ internal rules and enforcement practices can be troublingly opaque and prone to arbitrary interpretation.

Facebook’s leaked rules reveal the company’s complex processes for deciding what counts as hate speech and highlight its dependence on overworked, underpaid content moderators who have only seconds to flag objectionable material. As a result, internet firms are mistakenly removing lawful, inoffensive content.

While we may welcome enforcement action against violent hate speech, we should recognize that internet companies have too often acted to stifle peaceful, inoffensive speech criticizing governments and law enforcement.

Rules first enacted against the most reprehensible behaviour — terrorism, child sexual abuse, and hate speech — are often expanded to target other forms of speech. What will we do when the censors come for controversial or confronting speech that we support?

To be absolutely clear, I do not support white supremacists. My argument here is that there is a broader role for government to play in determining how content and behaviour on the internet should be regulated and by whom.

There is also a critical role for public debate to determine how the internet should be governed. Simply off-loading responsibility to companies such as Google and GoDaddy to react to public pressure may have gotten the job done in this specific case, but in the longer term, it represents a troubling, potentially dangerous policy choice.

Natasha Tusikov, author of Chokepoints: Global Private Regulation on the Internet, is an assistant professor of criminology at York University in Toronto.
