Inside Facebook’s secret rule book for global political speech
MENLO PARK, CALIF.
In a glass conference room at its California headquarters, Facebook is taking on the bonfires of hate and misinformation it has helped fuel across the world, one post at a time.
The social network has drawn criticism for undermining democracy and for provoking bloodshed in societies small and large.
But for Facebook, it’s also a business problem.
The company, which makes about $5 billion in profit per quarter, has to show it is serious about removing dangerous content. It must also continue to attract more users from more countries and try to keep them on the site longer.
How can Facebook monitor billions of posts per day in over 100 languages, all without disturbing the endless expansion that is core to its business? The company’s solution: a network of workers using a maze of PowerPoint slides spelling out what’s forbidden.
Every other Tuesday morning, several dozen Facebook employees gather to come up with the rules, hashing out what the site’s 2 billion users should be allowed to say. The guidelines that emerge from these meetings are sent to 7,500-plus moderators around the world.
The closely held rules are extensive, and they make the company a far more powerful arbiter of global speech than has been publicly recognized or acknowledged by the company itself, The New York Times has found.
The Times was provided with more than 1,400 pages from the rule books by an employee who said he feared the company was exercising too much power, with too little oversight – and making too many mistakes.
An examination of the files revealed numerous gaps, biases and outright errors. As Facebook employees grope for the right answers, they have allowed extremist language to flourish in some countries while censoring mainstream speech in others.
Moderators were once told, for example, to remove fundraising appeals for volcano victims in Indonesia because a cosponsor of the drive was on Facebook’s internal list of banned groups. In Myanmar, a paperwork error allowed an extremist group, accused of fomenting genocide, to stay on the platform for months. In India, moderators were mistakenly told to take down comments critical of religion.
The Facebook employees who meet to set the guidelines, mostly young engineers and lawyers, try to distill highly complex issues into simple yes-or-no rules. Then the company outsources much of the post-by-post moderation to companies that enlist largely unskilled workers, many hired out of call centers.
Those moderators, at times relying on Google Translate, have mere seconds to recall countless rules and apply them to the hundreds of posts that dash across their screens each day.
Moderators express frustration at rules they say don’t always make sense and sometimes require them to leave up posts they fear could lead to violence. “You feel like you killed someone by not acting,” one said, speaking on the condition of anonymity because he had signed a nondisclosure agreement.
Facebook executives say they are working diligently to rid the platform of dangerous posts.
“It’s not our place to correct people’s speech, but we do want to enforce our community standards on our platform,” said Sara Su, a senior engineer on the News Feed. “When you’re in our community, we want to make sure that we’re balancing freedom of expression and safety.”
Monika Bickert, Facebook’s head of global policy management, said that the primary goal was to prevent harm, and that to a great extent, the company had been successful. But perfection, she said, is not possible.
The guidelines for identifying hate speech, a problem that has bedeviled Facebook, run to 200 pages. Moderators must sort a post into one of three “tiers” of severity. They must bear in mind lists like the six “designated dehumanizing comparisons,” among them comparing Jews to rats.
“There’s a real tension here between wanting to have nuances to account for every situation, and wanting to have a set of policies we can enforce accurately and we can explain cleanly,” Bickert said.
As detailed as the guidelines can be, they are also approximations – best guesses at how to fight extremism or disinformation. And they are leading Facebook to intrude into sensitive political matters the world over, sometimes clumsily.
Increasingly, decisions on what posts should be barred amount to regulating political speech – and not just on the fringes. In many countries, extremism and the mainstream are blurring.
In the U.S., Facebook banned the Proud Boys, a far-right pro-Trump group. The company also blocked an inflammatory ad, about a caravan of Central American migrants, that was produced by President Donald Trump’s political team.