TOP COURT COULD THROW THE INTERNET INTO CHAOS
Section 230 of the Communications Decency Act is vexing: No one likes it, but neither can anyone come up with a satisfying proposal for fixing it. Now, with good outcomes elusive, the Supreme Court is in a position to produce an especially bad one.
On Tuesday, the justices will hear Gonzalez v. Google, a case whose decision could wipe away what are called the 26 words that created the internet. Section 230 protects platforms from liability for most content contributed by third parties — which means that when individuals send defamatory tweets or post inciting comments, Twitter, Facebook, YouTube and their peers aren’t held legally responsible. Gonzalez asks a slightly more complicated question: When platforms algorithmically promote those tweets, comments or, in this instance, videos, does their legal shield disappear?
The facts of the suit are tragic, although their connection to the legal question is attenuated. The case was brought by the family of a 23-year-old American college student killed in a Paris restaurant during an attack by Islamic State followers. But rather than alleging that the murderers in question were radicalized on YouTube, the family alleges that YouTube more generally promoted radicalizing material via its “Up Next” recommendation feature.
The theory behind treating material that platforms promote differently from material that platforms simply host has some appeal. It’s easy enough to say sites can’t be responsible, either morally or logistically, for everything that their millions and sometimes billions of users decide to stick on the web. But arguing that they aren’t responsible for the decisions their own employees encode into their own systems is more difficult.
On the other hand, the consequences of removing Section 230 immunity for algorithmically recommended content could be catastrophic. Platforms would likely abandon systems that suggest or prioritize information altogether, or just sanitize their services to avoid carrying anything close to objectionable — creating, as some have put it, either a wasteland or a Disneyland. Part of the trouble is there’s no clear way to distinguish one type of recommendation from another. Elevating content relevant to whatever a user has recently interacted with is different from elevating content based on subject matter, which is in turn different from elevating content determined to be high-quality. But all these types of curation are at once under threat.
The centrality of algorithmic recommendation to today’s internet is, indeed, the greatest problem for the plaintiffs’ argument. Those who want to see Section 230 gutted argue that its drafters never meant for the provision to apply to this type of promoted material. But the drafters certainly did mean for it to give platforms the freedom to moderate content — and, by the way, they’ve filed an amicus brief saying as much. Today, algorithmic recommendation is exactly what makes this content moderation possible — amid those millions or billions of users generating hundreds of millions or hundreds of billions of posts a day. By ruling that this practice is out of bounds, the Supreme Court would get the modern internet all wrong. It would get the statute at hand wrong too.
That doesn’t mean there’s nothing to be done about Section 230, and it certainly doesn’t mean there’s nothing to be done about algorithms’ role in shaping platforms. That starts with greater transparency surrounding the outcomes these algorithms are designed to produce, as well as the outcomes they actually produce in practice. Perhaps there’s even room to harness those findings so that platforms may be held liable for negligence when they systematically elevate illegal content and don’t attempt to remedy that failing. (First Amendment issues, in almost any attempt at reforming this thorny law, will inevitably arise.)
But all that is work for Congress. Lawmakers wrote the 26 words that created the internet. It’s their job to write the words that determine its future.