The Mercury News

The debate over Silicon Valley’s embrace of content moderation

Some see it as a freedom issue, others say unbridled speech is a risk to democracy

- By Nellie Bowles

The existential question that every big tech platform from Twitter to Google to Facebook has to wrestle with is the same: How responsible should it be for the content that people post?

The answer that Silicon Valley has come up with for decades is: Less is more. But now, as protests of police brutality continue across the country, many in the tech industry are questioning the wisdom of letting all flowers bloom online.

After years of leaving President Donald Trump’s tweets alone, Twitter has taken a more aggressive approach in recent days, in several cases adding fact-checks and marks indicating the president’s tweets were misleading or glorified violence. Many Facebook employees want their company to do the same, although the chief executive, Mark Zuckerberg, said he was against it. And Snapchat said Wednesday that it had stopped promoting Trump’s content on its main Discover page.

In the midst of this notable shift, some civil libertarians are raising a concern in an already complicated debate: Any move to moderate content more proactively could eventually be used against speech loved by the very people now calling for intervention.

“It comes from this drive to be protected, this belief that it’s a platform’s role to protect us from that which may harm or offend us,” said Suzanne Nossel, head of PEN America, a free-speech advocacy organization. “And if that means granting them greater authority, then that’s worth it if that means protecting people,” she added, summarizing the argument. “But people are losing sight of the risk.”

Civil libertarians caution that adding warning labels or additional context to posts raises a range of issues that tech companies until recently had wanted to avoid. New rules often backfire. Fact-checks and context, no matter how sober or accurate they are, can be perceived as politically biased. More proactive moderation by the platforms could threaten their special protected legal status. And intervention goes against the apolitical self-image that some in the tech world have.

But after years of shrugging off concerns that content on social media platforms leads to harassment and violence, many in Silicon Valley appear willing to accept the risks associated with shutting down bad behavior, even from world leaders.

“Our intention is to connect the dots of conflicting statements and show the information in dispute so people can judge for themselves,” Twitter’s chief executive, Jack Dorsey, wrote.

A group of early Facebook employees wrote a letter Wednesday denouncing Zuckerberg’s decision not to act on Trump’s content.

“Fact-checking is not censorship. Labeling a call to violence is not authoritarianism,” they wrote, adding, “Facebook isn’t neutral, and it never has been.”

A hands-off approach by the companies has allowed harassment and abuse to proliferate online, Lee Bollinger, president of Columbia University and a First Amendment scholar, said last week. So now the companies, he said, have to grapple with how to moderate content and take more responsibility, without losing their legal protections.

“These platforms have achieved incredible power and influence,” Bollinger said, adding that moderation was a necessary response. “There’s a greater risk to American democracy in allowing unbridled speech on these private platforms.”

Section 230 of the federal Communications Decency Act, passed in 1996, shields tech platforms from being held liable for the third-party content that circulates on them. But taking a firmer hand to what appears on their platforms could endanger that protection, most of all for political reasons.

One of the few things that Democrats and Republicans in Washington agree on is that changes to Section 230 are on the table. Trump issued an executive order calling for changes to it after Twitter added labels to some of his tweets. Former Vice President Joe Biden, the presumptive Democratic presidential nominee, has also called for changes to Section 230.

“You repeal this, and then we’re in a different world,” said Josh Blackman, a constitutional law professor at the South Texas College of Law Houston. “Once you repeal Section 230, you’re now left with 51 imperfect solutions.”

Blackman said he was shocked that so many liberals, especially inside the tech industry, were applauding Twitter’s decision.

“What happens to your enemies will happen to you eventually,” he said. “If you give these entities power to shut people down, it will be you one day.”

Brandon Borrman, a spokesman for Twitter, said the company was “focused on helping conversation continue by providing additional context where it’s needed.” A spokeswoman for Snap, Rachel Racusen, said the company “will not amplify voices who incite racial violence and injustice by giving them free promotion on Discover.” Facebook and Reddit declined to comment.

Tech companies have historically been wary of imposing editorial judgment, lest they have to act more like a newspaper.

Things get complicated when Dorsey begins doing that at Twitter. Does that mean a person who is libeled on the site and asks for a fact-check gets one? And if the person doesn’t, is that grounds for a lawsuit?

The circumstances around fact-checks and added context can quickly turn political, the free-speech activists said. Which tweets should be fact-checked? Who does that fact-checking? Which get added context? What is the context that’s added? And once you have a full team doing fact-checking and adding context, what makes that different from a newsroom?

“The idea that you would delegate to a Silicon Valley board room or a bunch of content moderators at the equivalent of a customer service center the power to arbitrate our landscape of speech is very worrying,” Nossel said.

There has long been a philosophical rationale for the hands-off approach still embraced by Zuckerberg. Many in tech, especially the early creators of the social media sites, embraced a near-absolutist approach to free speech. Perhaps because they knew the power of what they were building, they did not trust themselves to decide what should go on it.

Of course, the companies already do moderate to some extent. They block nudity and remove child pornography. They work to limit doxxing, when someone’s phone number and address are shared without consent. And promoting violence is out of bounds.

They have rules that would bar regular people from saying what Trump and other political figures say. Yet they did not do anything to mark the president’s recent false tweets about MSNBC host Joe Scarborough. They did do something, a label, though not a deletion, when Trump strayed into areas that Twitter has staked out: election misinformation and violence.

ASSOCIATED PRESS ARCHIVES: Ellen Pao, the former chief executive of Reddit, has criticized her old company’s hands-off approach to questionable content.

JASON HENRY, THE NEW YORK TIMES: Twitter has, in several recent cases, added fact-checks to President Trump’s tweets, saying they were misleading or glorified violence.
