Las Vegas Review-Journal (Sunday)
Imagine a more responsible internet
U.S. Supreme Court Justice Elena Kagan sparked laughter in the court last week by speaking honestly about her colleagues’ and her own lack of expertise on the internet. “We are a court — we really don’t know about these things. We are not, like, the nine greatest experts on the internet,” she said.
Although made in jest, the comment is not a laughing matter. Significant updates to the interpretation of current law are sorely needed to reflect evolving technology. Two cases came before the court last week that will shape legal liability for online communication for years to come. Experts or not, those nine justices will decide whether the online marketplace of ideas becomes an impregnable haven for all manner of libel, slander and criminal communications.
At the heart of the cases is whether tech companies are legally responsible for the content posted on their sites by third-party users and then distributed by algorithms that are designed to amplify the content to specific individuals most likely to engage. We believe they should be, provided there are carefully drawn distinctions.
After his daughter was killed in a 2015 ISIS attack, Reynaldo Gonzalez sued Google, a subsidiary of Alphabet and the parent company of YouTube, over the role its personalized distribution algorithms played in promoting ISIS recruitment videos.
Current law, Section 230 of the Communications Decency Act of 1996, has been interpreted to shield companies from almost all liability for content posted by third-party users. Under the law, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
This interpretation was, at the time, a natural outgrowth of common carrier laws that shielded telephone companies from liability for telephone conversations about criminal activity. The logic held that because phone lines were merely conduits for the conversations to travel through but otherwise did not speak, control, distribute or otherwise publish the words, the phone companies were not liable for the specific content of the conversations.
Gonzalez contends that technology has changed and that the targeted personalized algorithms used by companies like Alphabet and Meta, the parent company of Facebook and Instagram, represent affirmative action to distribute certain content to specific users. In other words, by utilizing algorithms that effectively read, understand and redistribute content to specific users, online platforms are no longer simply conduits for conversations and should not be given blanket immunity.
In a separate but related case, the family of Nawras Alassaf, another victim of ISIS, is seeking to hold Twitter, Alphabet and Meta liable for their failure to remove ISIS content from their platforms, which they claim helped ISIS grow and thrive.
Both families contend that online platforms already have a responsibility to remove illegal materials, such as child pornography and copyrighted material, from their sites. Expanding those responsibilities to include removing — rather than distributing — recruitment and organizing materials for terrorists and other violent extremists and allowing liability for their failure to do so is not an unreasonable expectation.
The families’ arguments are compelling and should be sustained by the court. Congress has already carved out exceptions to Section 230 for speech that clearly violates federal law such as human trafficking, child pornography and international drug trade. These exceptions should extend to active recruitment and conspiracy by criminal organizations.
However, the court should go even further and recognize that, unlike platforms that provide nothing more than a conduit or “bulletin board” for user-generated content, any platform that promotes content for redistribution without user consent is acting as a publisher. Such a platform should be fully liable for any content it redistributes without the specific opt-in, consent or “subscription” of a targeted user.
Under this model, users could still be presented with relevant content by following or subscribing to certain content creators, publications, or subject-matter index tags determined by the content creators. The number of available tags would be limited and used only for basic indexing of content, in much the same way public libraries index content using the Dewey Decimal System.
Imagine logging on to Facebook, Twitter or Instagram and seeing only content from your friends, from people or organizations you signed up to follow, or directly related to subjects you signed up to see and can change at any time.
Advertising would not disappear, but you would have more control over it. Rather than targeting ads at you based on an invasive profile of your personal activity, advertisers would be able to target ads only via tags, publications or content creators. This would change the advertising model of the internet but does not run the risk of “eliminating innovation,” “destroying the internet” or any other hyperbole being offered by those with a financial interest in the current model.
Most importantly, content producers would be responsible for the content they produce or distribute. Existing libel and slander laws could be enforced. Online bullies could be identified and held accountable. And propaganda and spam farms would be far less effective because they wouldn’t have algorithms assisting them in the distribution of their lies.
This model would inoculate society from the worst of the internet without creating an overwhelming impediment to innovation, vigorous debate or the marketplace of ideas — which were the major concerns of the authors of Section 230, Sen. Ron Wyden, D-Ore., and former Rep. Chris Cox, R-Calif., when they proposed the law.
Overall, this would be a major step toward returning accountability and responsibility for the cesspool that is too much of the internet today. And it would make it much harder for bad actors to use the internet to accomplish their goals.