Toronto Star

No tolerance for intolerance

The internet shouldn’t be a safe harbour for hate speech.

- DANIEL TSAI, CONTRIBUTOR

On Oct. 15, the U.S. Federal Communications Commission (FCC) made a stunning announcement: it would “clarify” the scope of the legal protections offered to internet platforms.

The FCC said it will examine changing (read: narrowing) the safe harbour of legal immunity that Section 230 of the U.S. Communications Decency Act (CDA) offers online platforms and distributors of content. The announcement has bone-rattling implications for Facebook, Google, Amazon and other online tech giants that have grown and profited immensely because of these protections.

It also has implications for Canada. Having signed the United States-Mexico-Canada Agreement (USMCA) in 2018, Canada is obliged under the trade deal to implement Section 230, subject to Canada’s existing law and jurisprudence.

Article 19.17 of USMCA Chapter 19 (Digital Trade) requires Canada to provide a safe harbour for interactive computer services (“a system or service that provides or enables electronic access by multiple users to a computer server”) and internet providers for third-party content published on their platforms.

Currently, under Section 230: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” As a result, an online platform cannot be held legally liable as a publisher or speaker (the one that does the posting) when it hosts third-party content.

Critics see the FCC’s move as internet censorship meant to limit the free speech of conservatives. Big tech, for its part, argues the move so expands the scope of liability that established companies and even startups providing internet content would have to change their business models, or risk going out of business, by having to monitor all the content on their sites.

The claims of lost free speech, unlimited liability for content providers and disrupted business models that would push tech giants to the brink of bankruptcy are unfounded.

While the safe harbour of Section 230 allowed upstarts to grow into tech giants, the Communications Decency Act (CDA) was enacted in 1996. That was a much different time than today, when fake news, extremism and social unrest are being directly fomented online.

Instead, the legal immunity of Section 230 has been used by online platforms to shield themselves from liability for alleged illegal activity, fake news or hate speech.

Backpage.com relied on Section 230 for legal immunity despite the fact that up to 99 per cent of its income came from human trafficking and escort ads.

It was only when the U.S. government enacted the Fighting Online Sex Trafficking Act and amended Section 230 to exclude legal immunity for criminal activities that Backpage lost its protection and was shut down. Its CEO, Carl Ferrer, pleaded guilty to money laundering and to facilitating prostitution on the platform.

Moreover, because of Section 230, Facebook and other social media sites face no liability for allowing hate speech groups to proliferate on their pages.

If Canada decides to follow the U.S.’s lead on Section 230, it will have to contend with an FCC taking a more restrictive view of the safe harbour, one aligned with conservative U.S. Supreme Court Justice Clarence Thomas’s view that online content providers and platforms should be treated as publishers, given their apparent role in internet censorship of fake news.

In his statement in “Malwarebytes v. Enigma Software,” Thomas signalled his interest in an interpretation of Section 230 under which a provider of online content would be considered a publisher, liable for that content, rather than a platform or neutral conduit.

That’s a possible minefield for internet content providers such as Facebook and for online platforms including Google, which distribute or provide access to content through search, YouTube and other services.

Big tech and smaller or fledgling internet platforms would potentially face liability for the free speech of the dark forces of the internet: fake news, hate speech and extremism.

Conservatives see an opportunity in Thomas adopting an interpretation of the safe harbour that makes content providers publishers. With Amy Coney Barrett’s recent confirmation, a strong conservative majority of the U.S. Supreme Court would likely favour a hands-off approach by providers and platforms to regulating the internet. Conservatives believe such a change would be a major win for protecting free speech.

However, with unfettered free speech come hate speech, extremism, and the social and political instability bred by foreign disinformation campaigns and fake news.

Canadian law balances the need for free speech with society’s desire for reasonable limits. The adage that it is illegal to yell “fire” in a movie theatre when there isn’t one still applies here.

Canada is within its sovereign rights to ignore the looming U.S. changes to Section 230 of the CDA. It can rely on its existing domestic laws, or change them, including by amending the criminal law to regulate online platforms much as the U.S. government handled Backpage.

Importantly, the USMCA and a subsection of Section 230 allow the exercise of lawful authority and the enforcement of criminal law, which would permit Canada to strengthen and amend its hate speech laws by making online platforms and information content providers gatekeepers for screening extremism and hate speech.

While incendiary online activity drives profits from likes, shares and views, online platforms must bear a greater responsibility to society.

In recognition of intellectual property rights in movies, TV, radio and other content, legislators have already required internet service and content platforms, including Facebook, Google and YouTube, to act as gatekeepers.

Notably, information content providers and ISPs already have legal responsibilities to remove copyright-infringing material under the “notice and takedown” regime in the U.S.: if a user flags a copyright violation, the infringing content is removed.

Similarly, in Canada, the “notice and notice” regime requires the internet service provider or digital platform to contact the offending user about removing the objectionable content.

These tech giants are still thriving and making substantial profits despite the notice and takedown regime. Since the U.S. implemented the regime in the Digital Millennium Copyright Act in 1998, and the EU in the Electronic Commerce Directive in 2000, the number of hits and views and the time spent on online content have only kept growing.

The same notice and takedown model could be applied in Canada to hate speech and extremism, and internet content providers would assuredly still thrive.

Being socially responsible will not harm social media giants such as Facebook, which have already started to regulate and ban content. For instance, in response to the public’s desire for accountability, Facebook has banned QAnon, posts denying the Holocaust and other pages that trade in misinformation.

As Canadians, we can avoid the U.S. extremes and strike a balance on the safe harbour, one where civil discourse and the social good outweigh hate and extremism on online platforms.

Daniel Tsai is a law and business lecturer at Ryerson University’s Ted Rogers School of Management, a former senior policy adviser in the government of Canada, and editor of Consumerrights.ca. Twitter: @dtsailawyermba

