San Francisco Chronicle

Twitter backs off broad ban of offensive speech

- By Kate Conger

In August, Twitter’s top executives gathered at the company’s San Francisco headquarters to discuss how to make the site safer for its users. Two of them proposed banning all speech that could be considered “dehumanizing.” For an example of what they meant, they showed a sample post that featured the words President Trump used to compare certain nations to excrement.

By January, Twitter had backed off from deeming that sample tweet dehumanizing. Instead, the post was included in an internal company slideshow, which helps train Twitter moderators, as the kind of message that should be allowed on the platform.

And Tuesday, when Twitter rolled out its first official guidelines around what constitutes dehumanizing speech on its service, the sample post was nowhere in sight. The company had narrowed its policymaking to focus only on banning speech that is insulting and unacceptable if directed at religious groups.

“While we have started with religion, our intention has always been and continues to be an expansion to all protected categories,” Jerrel Peterson, Twitter’s head of safety policy, said in an interview. “We just want to be methodical.”

The scaling back of Twitter’s efforts to define dehumanizing speech illustrates the company’s challenges as it sorts through what to allow on its platform. While the new guidelines help it draw starker lines around what it will and will not tolerate, it took Twitter nearly a year to put together the rules — and even then they are just a fraction of the policy that it originally said it intended to create.

Twitter said it had ratcheted down the policy’s scope partly because it kept running into obstacles. When the company sought users’ feedback last year on what it thought such speech might include, people pushed back on the proposed definitions. Over months of discussions late last year and early this year, Twitter employees also worried that such a policy might be too sweeping, potentially resulting in the removal of benign messages and in haphazard enforcement.

“We get one shot to write a policy that has to work for 350 million people who speak 43-plus languages while respecting cultural norms and local laws,” Peterson said. “It’s incredibly difficult, and we can’t do it by ourselves. We realized we need to be really small and specific.”

Twitter unveiled its new policy ahead of a social media summit at the White House on Thursday that is expected to thrust it and other Silicon Valley companies under the spotlight for what they will and won’t allow. For the event, Trump has invited conservative activists who have thrived on social media, such as Charlie Kirk, president of Turning Point USA, which advocates limited government and other issues. Many of those who are expected to attend have accused social media companies of anti-conservative bias.

Twitter declined to comment on the meeting.

In the past, Twitter has focused its removal policies on posts that may directly harm an individual, such as threats of violence or messages that contain personal information or nonconsensual nudity. Under the new rules, the company is adding a sentence that says users “may not dehumanize groups based on their religion, as these remarks can lead to offline harm.” Twitter said that included any tweets that might compare people in religious groups to animals, insects, bacteria and other categories.

The company quickly put the change into effect Tuesday. Twitter said it had removed a tweet in which Louis Farrakhan, the outspoken black nationalist minister, compared Jewish people to termites because it violated the dehumanization policy.

Rashad Robinson, president of Color of Change, a civil rights group, said Twitter’s new policy fell short of where it should go.

“Dehumanization is a great start, but if dehumanization starts and stops at religious categories alone, that does not encapsulate all the ways people have been dehumanized,” he said.

Twitter’s work around a dehumanization policy began in August after the company faced a firestorm for not immediately barring Alex Jones, the right-wing conspiracy theorist, when Apple, Facebook and others did. Twitter eventually did bar Jones, and CEO Jack Dorsey said at the time that the incident had forced the company to consider “that safety should come first.”

“That’s a conversati­on we need to have,” he added.

Dorsey delegated the task of figuring out what makes up dehumanizing speech on Twitter to the company’s legal, policy and safety teams, which are led by Vijaya Gadde. Dorsey took a hands-off approach because he wanted to empower Gadde to make the decisions, a Twitter spokeswoman said.

The discussions began with the meeting at Twitter’s headquarters, which included the sample tweet featuring Trump’s unflattering description of nations such as Haiti. At the end of that meeting, executives agreed to draft a policy about dehumanizing speech and open it to the public for comments.

In September, Twitter published a draft policy describing what dehumanizing speech would be forbidden. It included posts likening people to animals or suggesting that certain groups serve a single, mechanistic purpose.

“I like to think of this as us trying to be experimental, the way that our colleagues in product and engineering are very experimental and they’re trying new things,” Gadde said in an interview at the time.

The response from users was swift — and critical. Twitter received more than 8,000 pieces of feedback from people in more than 30 countries. Many said the draft made no sense, pointing out cases in which the policy would lead to takedowns of posts that lacked any negative intent.

In one example, fans of Lady Gaga, who call themselves “Little Monsters” as a term of endearment, worried that they would no longer be able to use the phrase. Some gamers complained that they would be unable to discuss killing a character in a video game. Others said the draft policy didn’t go far enough in addressing hate speech and sexist comments.

In October and November, Twitter employees began revising the policy with the public input.

“We knew the policy was too broad,” Peterson said. The solution, he and others decided, was to narrow it down to groups that are protected under civil rights law, such as women, minorities and LGBTQ people. Religious groups seemed particularly easy to identify in tweets, and there were clear cases of dehumanization on social media that led to harm in the real world, Twitter employees said. Those include the ethnic cleansing of Rohingya Muslims in Myanmar, which was preceded by hate campaigns on social networks like Facebook.

Early this year, Twitter further limited the scope of the policy by carving out an exception. The company prepared a feature to preserve tweets from world leaders, like Trump, even if they engaged in dehumanizi­ng speech. Twitter reasoned that such posts were in the public interest. So if any world leaders tweeted something insulting and unacceptab­le, their posts would be kept online but hidden behind a warning label.
