San Francisco Chronicle

Tech firms go on offense against neo-Nazis

- By Sarah Frier, Jeff Green and Olivia Zaleski

When white supremacists plan rallies like the one a few days ago in Charlottesville, Va., they often organize their events on Facebook, pay for supplies with PayPal, book their lodging with Airbnb and ride with Uber. Technology companies, for their part, have been taking pains to distance themselves from these customers.

But sometimes it takes more than automated systems or complaints from other users to identify and block those who promote hate speech or violence, so companies are finding novel ways to spot and shut down content they deem inappropriate or dangerous. People don’t tend to share their views on their Airbnb accounts, for example. But after matching user names to posts on social media profiles, the company canceled dozens of reservations made by self-identified Nazis who were using its app to find rooms in Charlottesville, where they were heading to protest the removal of a Confederate statue.

“We try to take a very strong stance against discrimination and hatred,” Airbnb co-founder and chief strategy officer Nathan Blecharczyk told Bloomberg this week. “We make every one of our users sign a pledge when they sign up that they will not discriminate, that they will not exhibit hatred. Whenever we become aware of such an example, they’re permanently banned.”

At Facebook, which relies on community feedback to flag hateful content for removal, private groups meant for like-minded people can be havens for extremists, falling through gaps in the content-moderation system. The company is working quickly to improve its machine-learning capabilities to be able to automatically identify posts that should be reviewed by human moderators.

These more aggressive actions mark a shift in how companies view their responsibilities. Virtually all these services have long maintained rules on how users should behave, but in the past they’d mostly enforce these policies in response to bad behavior. After the violence in Charlottesville, which resulted in the death of a counterprotester, their approach has become more proactive, in anticipation of future events. While social media companies have been grappling for years with how to rid their sites of hateful speech and images, the events of the last several days served as a stark reminder of just how real, present and local the threat posed by white supremacists can be.

Uber told drivers they don’t have to pick up racists; PayPal said it can cancel relationships with sites that promote racial intolerance. Even Discover Financial Services, the credit card company, said this week that it was ending its agreements with hate groups. Apple has also moved to block hate sites from using Apple Pay, and Facebook shut down eight group pages that it said violated hate-speech policies, including “Right Wing Death Squad” and “White Nationalists United.”

“It’s one thing to say, we do not allow hate groups — it’s another thing to actually go and hunt down the groups, make those decisions, and kick those people off,” said Gerald Kane, a professor of information systems at the Boston College Carroll School of Management. “It’s something most of these companies have avoided intentionally and fervently over the past 10 years.”

Companies historically have steered clear of trying to determine what is good and what is evil, Kane said. But given the increasingly heated public debate in the U.S., they may feel they need to act, he said.

There’s some precedent. Globally, tech firms have been criticized by governments for their role in the spread of Islamic State ideology, particularly on Facebook and Twitter. Both of the social media companies have stepped up their efforts to remove extremist content, deleting hundreds of thousands of accounts, as well as group pages on Facebook.

“People have wondered, why are they so focused on Islamic extremism, and not white nationalism or white supremacy in their own backyard?” said Emma Llanso, director of the Center for Democracy & Technology’s Free Expression Project. “Now extremists in the United States are getting swept up in the same policies.”

Tech companies have no legal obligation in the U.S. to respond to calls to censor racist content. Under the Communications Decency Act of 1996, intermediaries are immunized from most litigation that claims material on their pages is unlawful.

That doesn’t mean these companies aren’t feeling the pressure from advertisers and users who fear that pages belonging to alt-right publications like the Daily Stormer could incite violence, said Daphne Keller, director of intermediary liability at Stanford Law School’s Center for Internet and Society. The Daily Stormer’s Web domain support was revoked this week by GoDaddy and then Google, and Twitter suspended several associated accounts. Technology companies are likely to be evaluating their options in consultation with organizations including the Anti-Defamation League before shaping their policy, Keller said.

“What’s pushing them is probably a mix of people being revolted by the content, plus the public and advertising pressure,” said Keller, who is also a former associate general counsel at Google. “Everything they’re doing is because they want to, or because of public pressure. But not because of the law.”

In March, Google conceded to giving marketers more control over their online ads after a flurry of brands halted spending in the United Kingdom amid concerns about offensive content. The company also agreed to expand its definition of hate speech under its advertising policy to include vulnerable racial and socioeconomic groups. The policies marked a sharp turn for the Mountain View company, which had hewed to its position as a neutral content host.

Google, Twitter and Facebook continue to face increased pressure to amend their user terms to bring them into compliance with European Union law pertaining to illegal content on their websites.

Facebook hired thousands more human moderators this year to try to help it tackle violent content, hate speech and extremism. CEO Mark Zuckerberg has in the past touted Facebook’s product for groups as a key to improving empathy around the world. But when groups are used to silence others or threaten violence, Facebook will remove them, he said Wednesday.

“With the potential for more rallies, we’re watching the situation closely and will take down threats of physical harm,” Zuckerberg wrote on his Facebook page. “We won’t always be perfect, but you have my commitment that we’ll keep working to make Facebook a place where everyone can feel safe.”

A Facebook page remains active for one upcoming rally that has raised concerns among local officials about potential violence — set to be hosted by Patriot Prayer at Crissy Field in San Francisco on Aug. 26. Facebook said it is aware of the event, but hasn’t found a reason to take it down. The company has to weigh public pressure against its own assessment of a real-world threat.

Because all the decisions are subjective, it’s going to be important for technology companies to make it clear what standards they’re applying when they’re reacting to public outrage, Llanso said.

“When does extra scrutiny kick in, if there are other standards, or if it’s a special case?” she said. “They have a lot of leeway, but they still have a responsibility to their user base to explain, what are the terms, when is the company going to weigh in with a values-based judgment?”

Cloudflare, a San Francisco Web-security company that has protected the networks of several neo-Nazi sites, including the Daily Stormer, faced criticism in May from ProPublica for doing so, and has been one of the “worst offenders when it comes to protecting white-supremacist propaganda,” said Heidi Beirich, who monitors hate groups for the Southern Poverty Law Center. The company has defended itself by saying service providers shouldn’t be censoring content on the Internet.

But on Wednesday, Cloudflare decided to end its business with the Daily Stormer, saying it could no longer remain neutral because the neo-Nazi website was claiming the company secretly supported its ideology.

“Maybe even they are waking up to this problem,” Beirich said. “Maybe this is a moment of reckoning and change — and it sure seems serious right now.”

Still, Cloudflare CEO Matthew Prince warned that even as he chose to sever ties with the Daily Stormer, the move could set a dangerous precedent.

“After today, make no mistake, it will be a little bit harder for us to argue against a government somewhere pressuring us into taking down a site they don’t like,” Prince wrote.


Airbnb co-founder Nathan Blecharczyk says the San Francisco online hospitality company bans users who violate a pledge not to discriminate. (Sam Kang Li / Bloomberg)
The clash in Charlottesville, Va., helped push technology companies to get more involved in monitoring and limiting content on their services. (Edu Bayer / New York Times)
