Financial Mail

Challenging an ‘indecent’ act

The 27-year-old regulation is out of touch with the way people interact today

- Toby Shapshak

The internet has become a free-for-all medium for disinformation because of section 230 of the US Communications Decency Act.

Enacted in 1996, the act protects internet companies from liability for what their users post. Google, YouTube, Facebook and Twitter have relied on it to avoid being sued. Until now.

Two cases argued before the US Supreme Court last week aim to change that. The families of people killed in terrorist attacks allege that the platforms, YouTube in one instance and Twitter in the other, were used by terrorist organisations.

Nawras Alassaf’s family are suing Twitter after Alassaf was killed in a 2017 attack in an Istanbul nightclub.

Islamic State (Isis) claimed responsibility and allegedly used Twitter for “recruiting and fundraising”.

“Of the overall cost of running a terrorist organisation, the cost of a particular attack is a very small part,” said the family’s lawyer Eric Schnapper. “Running terrorist organisations is very expensive. That’s why it’s so important that the court hold that the entire enterprise [that is] being aided matters. If you limit the aid that matters to the tip of the spear, you’ve written out of the statute almost all the assistance that matters.”

Twitter’s lawyer Seth Waxman said the network did not provide “substantial assistance, much less knowing substantial assistance, to that attack, or, for that matter, to any other attack”.

But, as justice Sonia Sotomayor replied, Twitter “knew that Isis was using your platform”.

In the other case, YouTube is being sued by the family of Nohemi Gonzalez, who was killed in the November 2015 Paris terrorist attacks, in which gunmen struck the Bataclan concert venue and nearby restaurants.

“We’re focusing on the recommendation function,” said Schnapper, who is also the Gonzalez family’s lawyer. He was referring to users being presented with content through the platform’s recommendation algorithm.

Google’s lawyer Lisa Blatt said content curation is necessary because of the vast amount of uploaded content. “Helping users find the proverbial needle in the haystack is an existential necessity on the internet. Search engines thus tailor what users see based on what’s known about users.”

She said this was done by Amazon, Tripadvisor, Wikipedia, Yelp, Zillow and “countless” video, music, news, job-finding, social media and dating websites.

“Exposing websites to liability for implicitly recommending third-party content defies the text and threatens today’s internet,” she said.

Section 230 was introduced when the internet was in its infancy, and user-generated content (what people post) was nascent. The regulation reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In 1996, when the legal protection was introduced, there were no social networks. The explosion of cloud services and user interaction later known as Web 2.0 was just being conceived.

As justice Elena Kagan said of the YouTube case, heard a day before the Twitter challenge, this is a “pre-algorithm statute” that is out of touch in this “post-algorithm world”.

Protecting users against harmful content and disinformation has become a necessity for social networks, which have complained about how difficult (and expensive) content moderation is. But when Covid hammered the world in 2020, suddenly these social media firms were able to stop antivaxxers and Covid denialism. All it took was a pandemic to stop misinformation.

And whistle-blower Frances Haugen said Facebook’s management “prioritised growth over safety”, even though the company knew Instagram was damaging teenage girls’ mental health.

Section 230 allows social networks to hide behind this legal protection, even though the internet has changed radically since it was introduced 27 years ago. It is inconceivable that an almost 30-year-old law can be in tune with how people interact in such a fast-changing environment. Back then most content was benign, a lot of it featuring cat videos.

Now social media is full of misogyny, hate speech, antisemitism, Islamophobia and the recruitment and mobilisation of terrorist organisations, antivaxxers and conspiracy theorists. Despite widespread anger, it remains rampant.

Internet and social media firms make huge profits and have resisted changes to section 230 because the cost of content moderation eats into those profits. Just ask Mark Zuckerberg after Facebook’s belated attempt to wrestle with the right-wing pages and groups that organised the January 6 insurrection at the US Capitol.

One of the main problems with content moderation is that it is done largely in English, while much of Facebook, Google, YouTube and Twitter’s growth is in non-English-speaking countries. That alone is reason enough to scrap section 230, to protect the rest of the world from the information spam and hate speech.

