San Francisco Chronicle (Sunday)

Inside Facebook’s secret rulebook for global speech

By Max Fisher

In a glass conference room at its Menlo Park headquarters, Facebook is taking on the bonfires of hate and misinformation it has helped fuel across the world, one post at a time.

The social network has drawn criticism for undermining democracy and for provoking bloodshed in societies small and large.

But for Facebook, it’s also a business problem. The company, which makes about $5 billion in profit per quarter, has to show it is serious about removing dangerous content. It must also continue to attract more users from more countries and try to keep them on the site longer.

How can Facebook monitor billions of posts per day in more than 100 languages, all without disturbing the endless expansion that is core to its business? The company’s solution: a network of workers using a maze of PowerPoint slides spelling out what’s forbidden.

Every other Tuesday morning, several dozen Facebook employees gather over breakfast to come up with the rules, hashing out what the site’s 2 billion users should be allowed to say. The guidelines that emerge from these meetings are sent to 7,500-plus moderators around the world.

The closely held rules are extensive, and they make the company a far more powerful arbiter of global speech than has been publicly recognized or acknowledged by the company itself, the New York Times has found. The Times was provided with more than 1,400 pages from the rulebooks by an employee who said he feared the company was exercising too much power, with too little oversight — and making too many mistakes.

An examination of the files revealed numerous gaps, biases and outright errors. As Facebook employees grope for the right answers, they have allowed extremist language to flourish in some countries while censoring mainstream speech in others.

Moderators were once told, for example, to remove fundraising appeals for volcano victims in Indonesia because a co-sponsor of the drive was on Facebook’s internal list of banned groups. In Myanmar, a paperwork error allowed a prominent extremist group, accused of fomenting genocide, to stay on the service for months. In India, moderators were mistakenly told to take down comments critical of religion.

The Facebook employees who meet to set the guidelines, mostly young engineers and lawyers, try to distill highly complex issues into simple yes-or-no rules. Then the company outsources much of the actual post-by-post moderation to companies that enlist largely unskilled workers, many hired out of call centers.

Those moderators, at times relying on Google Translate, have mere seconds to recall countless rules and apply them to the hundreds of posts that dash across their screens each day. When is a reference to “jihad,” for example, forbidden? When is a “crying laughter” emoji a warning sign?

Moderators express frustration at rules they say don’t always make sense and sometimes require them to leave up posts they fear could lead to violence. “You feel like you killed someone by not acting,” one said, speaking on the condition of anonymity because he had signed a nondisclosure agreement.

Facebook executives say they are working diligently to rid the service of dangerous posts.

“It’s not our place to correct people’s speech, but we do want to enforce our community standards,” said Sara Su, a senior engineer on the News Feed. “When you’re in our community, we want to make sure that we’re balancing freedom of expression and safety.”

Monika Bickert, Facebook’s head of global policy management, said that the primary goal is to prevent harm, and that to a great extent, the company had been successful. But perfection, she said, is not possible.

“We have billions of posts every day, we’re identifying more and more potential violations using our technical systems,” Bickert said. “At that scale, even if you’re 99 percent accurate, you’re going to have a lot of mistakes.”

One document sets out several rules just to determine when a word like “martyr” or “jihad” indicates pro-terrorism speech. Another describes when discussion of a barred group should be forbidden. Words like “brother” or “comrade” probably cross the line. So do any of a dozen emoji.

The guidelines for identifying hate speech, a problem that has bedeviled Facebook, run to 200 jargon-filled, head-spinning pages. Moderators must sort a post into one of three “tiers” of severity. They must bear in mind lists like the six “designated dehumanizing comparisons,” among them comparing Jews to rats.

“There’s a real tension here between wanting to have nuances to account for every situation, and wanting to have a set of policies we can enforce accurately and we can explain cleanly,” said Bickert, the Facebook executive.

Though the Facebook employees who make the rules are largely free to set policy however they wish, and often do so in the room, they also consult with outside groups.

“We’re not drawing these lines in a vacuum,” Bickert said.

As detailed as the guidelines can be, they are also approximations — best guesses at how to fight extremism or disinformation. And they are leading Facebook to intrude into sensitive political matters the world over, sometimes clumsily.

Increasingly, the decisions on what posts should be barred amount to regulating political speech — and not just on the fringes. In many countries, extremism and the mainstream are blurring.

In the United States, Facebook banned the Proud Boys, a far-right pro-Trump group. The company also blocked an inflammatory ad, about a caravan of Central American migrants, that was produced by President Trump’s political team.

In June, according to internal emails reviewed by the Times, moderators were told to allow users to praise the Taliban — normally a forbidden practice — if they mentioned the decision to enter into a cease-fire. In another email, moderators were told to hunt down and remove rumors wrongly accusing an Israeli soldier of killing a Palestinian medic.

“Facebook’s role has become so hegemonic, so monopolistic, that it has become a force unto itself,” said Jasmin Mujanovic, an expert on the Balkans. “No one entity, especially not a for-profit venture like Facebook, should have that kind of power to influence public debate and policy.”

In the absence of governments or international bodies that can set standards, Facebook is experimenting on its own.

The company never set out to play this role, but in an effort to control problems of its own creation, it has quietly become, with a speed that makes even employees uncomfortable, what is arguably one of the world’s most powerful political regulators.

“A lot of this would be a lot easier if there were authoritative third parties that had the answer,” said Brian Fishman, a counterterrorism expert who works with Facebook.

“Sometimes these things explode really fast,” Fishman said, “and we have to figure out what our reaction’s going to be, and we don’t have time for the U.N.”

But the results can be uneven.

Consider the guidelines for the Balkans, where rising nationalism is threatening to reignite old violence. The file on that region, not updated since 2016, includes odd errors. Ratko Mladic, a Bosnian war criminal still celebrated by extremists, is described as a fugitive. In fact, he was arrested in 2011.

Facebook’s most politically consequential document may be a spreadsheet that names every group and individual the company has quietly barred as a hate figure.

Moderators are instructed to remove any post praising, supporting or representing any listed figure.

Anton Shekhovtsov, an expert in far-right groups, said he was “confused about the methodology.” The company bans an impressive array of American and British groups, he said, but relatively few in countries where the far right can be more violent, particularly Russia or Ukraine.

Countries where Facebook faces government pressure seem to be better covered than those where it does not. Facebook blocks dozens of far-right groups in Germany, where the authorities scrutinize the social network, but only one in neighboring Austria.

The list includes a growing number of groups with one foot in the political mainstream, like the far-right Golden Dawn, which holds seats in the Greek and EU parliaments.

For a tech company to draw these lines is “extremely problematic,” said Jonas Kaiser, a Harvard University expert on online extremism. “It puts social networks in the position to make judgment calls that are traditionally the job of the courts.”

An employee at a photography institute checks his Facebook account in Dhaka, Bangladesh. Facebook shut down some sites spreading false information about the Bangladesh opposition days before elections. (Associated Press)
