The debate over Silicon Valley’s embrace of content moderation
Some see it as a freedom issue, others say unbridled speech is a risk to democracy
The existential question that every big tech platform from Twitter to Google to Facebook has to wrestle with is the same: How responsible should it act for the content that people post?
The answer that Silicon Valley has come up with for decades is: Less is more. But now, as protests of police brutality continue across the country, many in the tech industry are questioning the wisdom of letting all flowers bloom online.
After years of leaving President Donald Trump’s tweets alone, Twitter has taken a more aggressive approach in recent days, in several cases adding fact-checks and labels indicating that the president’s tweets were misleading or glorified violence. Many Facebook employees want their company to do the same, although the chief executive, Mark Zuckerberg, said he was against it. And Snapchat said Wednesday that it had stopped promoting Trump’s content on its main Discover page.
In the midst of this notable shift, some civil libertarians are raising a warning in an already complicated debate: Any move to moderate content more proactively could eventually be used against speech favored by the very people now calling for intervention.
“It comes from this drive to be protected, this belief that it’s a platform’s role to protect us from that which may harm or offend us,” said Suzanne Nossel, head of PEN America, a free-speech advocacy organization. “And if that means granting them greater authority, then that’s worth it if that means protecting people,” she added, summarizing the argument. “But people are losing sight of the risk.”
Civil libertarians caution that adding warning labels or additional context to posts raises a range of issues that tech companies until recently had wanted to avoid. New rules often backfire. Fact-checks and context, no matter how sober or accurate they are, can be perceived as politically biased. More proactive moderation by the platforms could threaten their special protected legal status. And intervention goes against the apolitical self-image that some in the tech world have.
But after years of shrugging off concerns that content on social media platforms leads to harassment and violence, many in Silicon Valley appear willing to accept the risks associated with shutting down bad behavior even from world leaders.
“Our intention is to connect the dots of conflicting statements and show the information in dispute so people can judge for themselves,” Twitter’s chief executive, Jack Dorsey, wrote.
A group of early Facebook employees wrote a letter Wednesday denouncing Zuckerberg’s decision not to act on Trump’s content.
“Fact-checking is not censorship. Labeling a call to violence is not authoritarianism,” they wrote, adding, “Facebook isn’t neutral, and it never has been.”
A hands-off approach by the companies has allowed harassment and abuse to proliferate online, Lee Bollinger, president of Columbia University and a First Amendment scholar, said last week. So now the companies, he said, have to grapple with how to moderate content and take more responsibility, without losing their legal protections.
“These platforms have achieved incredible power and influence,” Bollinger said, adding that moderation was a necessary response. “There’s a greater risk to American democracy in allowing unbridled speech on these private platforms.”
Section 230 of the federal Communications Decency Act, passed in 1996, shields tech platforms from being held liable for the third-party content that circulates on them. But taking a firmer hand to what appears on their platforms could endanger that protection, above all for political reasons.
One of the few things that Democrats and Republicans in Washington agree on is that changes to Section 230 are on the table. Trump issued an executive order calling for changes to it after Twitter added labels to some of his tweets. Former Vice President Joe Biden, the presumptive Democratic presidential nominee, has also called for changes to Section 230.
“You repeal this, and then we’re in a different world,” said Josh Blackman, a constitutional law professor at the South Texas College of Law Houston. “Once you repeal Section 230, you’re now left with 51 imperfect solutions.”
Blackman said he was shocked that so many liberals, especially inside the tech industry, were applauding Twitter’s decision.
“What happens to your enemies will happen to you eventually,” he said. “If you give these entities power to shut people down, it will be you one day.”
Brandon Borrman, a spokesman for Twitter, said the company was “focused on helping conversation continue by providing additional context where it’s needed.” A spokeswoman for Snap, Rachel Racusen, said the company “will not amplify voices who incite racial violence and injustice by giving them free promotion on Discover.” Facebook and Reddit declined to comment.
Tech companies have historically been wary of imposing editorial judgment, lest they have to act more like a newspaper.
Things get complicated once Dorsey begins doing that at Twitter. Does that mean a person who is libeled on the site and asks for a fact-check gets one? And if the person doesn’t, is that grounds for a lawsuit?
The circumstances around fact-checks and added context can quickly turn political, the free-speech activists said. Which tweets should be fact-checked? Who does that fact-checking? Which get added context? What is the context that’s added? And once you have a full team doing fact-checking and adding context, what makes that different from a newsroom?
“The idea that you would delegate to a Silicon Valley boardroom or a bunch of content moderators at the equivalent of a customer service center the power to arbitrate our landscape of speech is very worrying,” Nossel said.
There has long been a philosophical rationale for the hands-off approach still embraced by Zuckerberg. Many in tech, especially the early creators of the social media sites, embraced a near-absolutist approach to free speech. Perhaps because they knew the power of what they were building, they did not trust themselves to decide what should go on it.
Of course, the companies already do moderate to some extent. They block nudity and remove child pornography. They work to limit doxxing, when someone’s phone number and address are shared without consent. And promoting violence is out of bounds.
They have rules that would bar regular people from saying what Trump and other political figures say. Yet they did not do anything to mark the president’s recent false tweets about MSNBC host Joe Scarborough. They did do something (a label, though not a deletion) when Trump strayed into areas that Twitter has staked out: election misinformation and violence.