USA TODAY International Edition
Liability suits before high court threaten internet as we know it
The Supreme Court is mulling the future of the internet. Critics warn that if the court isn’t careful, it could destroy the web as we know it. We’ll still have online shopping and plenty of porn. But the whole “marketplace of ideas” thing? That’s another matter.
Two cases with similar elements are before the Supreme Court. Families who lost loved ones at the hands of terrorists are suing social media companies they believe supported the Islamic State group on their platforms.
At the heart of the cases is Section 230 of the Communications Decency Act of 1996, a good idea in the middle of a bad piece of legislation. While much of the act was struck down a year later as government overreach violating the First Amendment, Section 230 survived, providing cover for the then-emerging online industries.
26 words ‘created the internet’
Professor and author Jeff Kosseff has written that these are the 26 words that “created the internet”: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
“Interactive computer service” sounds so quaint. In 1996, it was a place early web users could pull over on the “Information Superhighway.”
With internet-based media still in a nascent stage, Congress recognized that this new industry could never grow and succeed if it could be held liable for everything posted by users. Section 230 meant that internet companies didn’t have to review everything before it was posted, allowing dramatic growth over the next quarter-century.
The new cases before the court use novel theories to try to break through the Section 230 shield.
In Gonzalez v. Google, heard by the court Tuesday, the family of Nohemi Gonzalez, a young American woman killed in 2015 in an ISIS attack in Paris, argued that YouTube algorithms “recommend” ISIS propaganda. A user who clicks on an ISIS video may be offered similar videos.
In Twitter v. Taamneh on Wednesday, the court heard claims by the family of Nawras Alassaf, a Jordanian man killed in an ISIS attack in Istanbul in 2017. The family contends that access to Twitter gave ISIS “substantial assistance” in violation of the federal Anti-Terrorism Act. The contention is that Twitter didn’t work hard enough to bar terrorist posts.
Both YouTube, owned by Google, and Twitter have attempted to monitor and remove ISIS content, but some propaganda still gets posted, if only for a brief period. Can YouTube be sued for having an algorithm that doesn’t automatically prevent ISIS propaganda from being displayed? Can Twitter be liable for not catching all the ISIS tweets before they’re posted?
Those would be high standards, in part because posts from terrorist organizations aren’t always branded.
After 1997, cases like these would never have made the docket of the U.S. Supreme Court. Section 230 was established law. But this court decided it wanted to review not one but two Section 230 cases. On Tuesday, Justice Elena Kagan aptly noted that the questions stem from a pre-algorithm statute in a post-algorithm world.
What the justices said
Kagan’s observation came during oral arguments in Gonzalez, where the justices appeared to be generally supportive of Section 230. Justice Clarence Thomas noted that it appeared the YouTube algorithm was neutral in its application, treating the videos in question the same as it would videos about rice pilaf. Chief Justice John Roberts expanded on that idea.
“They’re still not responsible for the content of the videos or the text that is transmitted. Your focus is on the actual selection and recommendations,” Roberts told an attorney for the Gonzalez family. “It may be significant if the algorithm is the same … across the different subject matters, because then they don’t have a focused algorithm with respect to terrorist activities or pilaf.
“And then I think it might be harder for you to say that there’s selection involved for which they could be held responsible.”
It would be welcome if the court dispatches both cases with similarly practical reasoning.
The First Amendment rights of a private business limit the options of those who want to remake social media to their liking. That’s why Section 230 is an easy and obvious target.
Blocking criminals, terrorists
There’s no need for us to look to the Supreme Court to determine the fate of Section 230. The same Congress that passes laws can rewrite them. There’s already a federal law punishing online services that knowingly promote sexual exploitation of children. The same can be done to address terrorism.
Of course, it would mean a functioning Congress capable of crafting a law that both ensures internet freedom and mandates good-faith efforts to block criminal and terrorist communications. No robes required.