Pittsburgh Post-Gazette

Social network’s neutrality stance said to harm minorities

Facebook’s race-blind policies around hate speech came at the expense of Black users, new documents show

- By Elizabeth Dwoskin, Nitasha Tiku and Craig Timberg

Last year, researchers at Facebook showed executives an example of the kind of hate speech circulating on the social network: an actual post featuring an image of four female Democratic lawmakers, known collectively as “The Squad.”

The poster, whose name was scrubbed out for privacy, referred to the women, two of whom are Muslim, as “swami rag heads.” A comment from another person used even more vulgar and racist language in reference to the four women of color, according to internal company documents exclusively obtained by The Washington Post.

The post represented the “worst of the worst” language on Facebook — the majority of it directed at minority groups, according to a two-year effort by a large team working across the company, the document said. The researchers urged executives to adopt an aggressive overhaul of the company’s software system that would primarily remove only those hateful posts before any Facebook users could see them.

But Facebook’s leaders balked at the plan. According to two people familiar with the internal debate, top executives including vice president for global public policy Joel Kaplan feared the new system would tilt the scales by protecting some vulnerable groups over others. A policy executive prepared a document for Mr. Kaplan that raised the potential for backlash from “conservative partners,” according to the document. The people spoke to The Post on condition of anonymity to discuss sensitive internal matters.

The previously unreported debate is an example of how Facebook’s decisions in the name of being neutral and race-blind in fact come at the expense of minorities and particularly people of color. Far from protecting Black and other minority users, Facebook executives wound up instituting half-measures after the “worst of the worst” project that left minorities more likely to encounter derogatory and racist language on the site, the people said.

“Even though [Facebook executives] don’t have any animus toward people of color, their actions are on the side of racists,” said Tatenda Musapatike, a former Facebook manager working on political ads and CEO of the Voter Formation Project, a nonpartisan, nonprofit organization that uses digital communication to increase participation in local, state and national elections. “You are saying that the health and safety of women of color on the platform is not as important as pleasing your rich white man friends.”

The Black audience on Facebook is in decline, according to data from a study Facebook conducted earlier this year that was revealed in documents obtained by whistleblower Frances Haugen. According to the February report, the number of Black monthly users fell 2.7 percent in one month to 17.3 million adults. It also shows that usage by Black people peaked in September 2020. Ms. Haugen’s legal counsel provided redacted versions of the documents to Congress, which were viewed by a consortium of news organizations including The Washington Post.

Civil rights groups have long claimed that Facebook’s algorithms and policies had a disproportionately negative impact on minorities, and particularly Black users. The “worst of the worst” documents show that those allegations were largely true in the case of which hate speech remained online.

But Facebook didn’t disclose its findings to civil rights leaders. Even the independent civil rights auditors Facebook hired in 2018 to conduct a major study of racial issues on its platform say they were not informed of the details of research showing that the company’s algorithms disproportionately harmed minorities. Laura Murphy, president of Laura Murphy and Associates, who led the civil rights audit process, said Facebook told her that “the company does not capture data as to the protected group(s) against whom the hate speech was directed.”

“I am not asserting nefarious intent, but it is deeply concerning that metrics that showed the disproportionate impact of hate directed at Black, Jewish, Muslim, Arab and LGBTQIA users were not shared with the auditors,” Ms. Murphy said. “Clearly, they have collected some data along these lines.”

The auditors, in the report they released last year, still concluded that Facebook’s policy decisions were a “tremendous setback” for civil rights.

Facebook spokesman Andy Stone defended the company’s decisions around its hate speech policies and how it conducted its relationship with the civil rights auditors.

He said that progress on racial issues included policies such as banning white nationalist groups, prohibiting content promoting racial stereotypes such as people wearing blackface or claims that Jews control the media, and reducing the prevalence of hate speech to 0.03 percent of content on the platform.

Facebook approached the civil rights audit with “transparency and openness” and was proud of the progress it has made on issues of race, Mr. Stone said.

He noted that the company had implemented parts of the “worst of the worst” project.

“But after a rigorous internal discussion about these difficult questions, we did not implement all parts as doing so would have actually meant fewer automated removals of hate speech such as statements of inferiority about women or expressions of contempt about multiracial people,” he added.

Facebook researchers first showed the racist post featuring The Squad — Reps. Alexandria Ocasio-Cortez, D-N.Y.; Ilhan Omar, D-Minn.; Rashida Tlaib, D-Mich.; and Ayanna Pressley, D-Mass. — to more than 10,000 Facebook users in an online survey in 2019. (The Squad now has six members.) The survey asked users to rate 75 examples of hate speech on the platform; the goal was to find what users considered the most harmful.

The 10 worst examples, according to the surveyed users, were almost all directed at minority groups, documents show. Five of the posts were directed at Black people, including statements about mental inferiorit­y and disgust. Two were directed at the LGBTQ community. The remaining three were violent comments directed at women, Mexicans, and white people.

These findings about the most objectionable content held up even among self-identified white conservatives that the market research team traveled to visit in Southern states. Facebook researchers sought out the views of white conservatives in particular because they wanted to overcome potential objections from the company’s leadership, which was known to appease right-leaning viewpoints, two people said.

Yet racist posts against minorities weren’t what Facebook’s own hate speech detection algorithms were most commonly finding. The software, which the company introduced in 2015, was supposed to detect and automatically delete hate speech before users saw it. Publicly, the company said in 2019 that its algorithms proactively caught more than 80 percent of hate speech.

But this statistic hid a serious problem that was obvious to researchers: The algorithm was aggressively detecting comments denigrating white people more than attacks on every other group, according to several of the documents.
