Miami Herald

What Facebook knew about its Latino-aimed disinformation problem

BY BRIAN CONTRERAS AND MALOY MOORE

It was October 2020, election conspiracy theories threatened to pull America apart at its seams, and Jessica González was trying to get one of the most powerful companies in the world to listen to her. It wasn’t going well.

After months of trying to get on their calendar, González – the co-chief executive of media advocacy group Free Press – had finally managed to secure a meeting with some of the Facebook employees responsible for enforcing the social platform’s community standards. The issue at hand: the spread of viral misinformation among Latino and Spanish-speaking Facebook users.

Across the country, a pipeline of misleading media had been pumping lies and half-truths, in both English and Spanish, into local Latino communities. Sometimes the misinformation mirrored what the rest of the country was seeing: fear-mongering about mail-in ballots and antifa vigilantes, or conspiracy theories about the deep state and COVID-19. Other times it leaned into more Latino-specific concerns, such as comparing candidate Joe Biden to Latin American dictators or claiming that Black Lives Matter activists were using brujería – that is, witchcraft.

Much of the fake news was spreading on social media, via YouTube, Twitter and, pivotally, Facebook, WhatsApp and Instagram. Those last three are owned by the same umbrella company, which recently rebranded as Meta.

“The same sort of themes that were showing up in English were also showing up in Spanish,” González recalled. “But in English, they were either getting flagged or taken down altogether, and in Spanish they were being left up; or if they were getting taken down, it was taking days and days to take them down.”

Free Press had briefly flagged the problem in July 2020 during a meeting with Chief Executive Mark Zuckerberg. González had spent the months since trying to set up another, more focused conversation. Now, that was actually happening.

In attendance were Facebook’s public policy director for counterterrorism and dangerous organizations, its global director for risk and response, and several members of the company’s policy team, according to notes from the meeting reviewed by the Los Angeles Times.

Yet the talk didn’t go as González had hoped.

“We had a lot of specific questions that they completely failed to answer,” she said. “For instance, we asked them, who’s in charge of ensuring the integrity of content moderation in Spanish? They would not tell us the answer to that, or even if that person existed. We asked, how many content moderators do you have in Spanish? They refused to [answer] that question. How many people that moderate content in Spanish are based in the U.S.? … No answer.”

“We were consistently met much the same way they meet other groups that are working on disinformation or hate speech,” she added: “With a bunch of empty promises and a lack of detail.”

Free Press wasn’t alone in finding Facebook to be a less than ideal partner in the fight against Spanish-language and Latino-centric misinformation. Days after the election, it and almost 20 other advocacy groups – many of them Latino-centric – sent a letter to Zuckerberg criticizing his company’s “inaction and enablement of the targeting, manipulation, and disenfranchisement of Latinx users” during the election, despite “repeated efforts” by the signatories to alert him to their concerns.

“Facebook has not been transparent at all,” said Jacobo Licona, a disinformation researcher at the Latino voter engagement group Equis Labs. Moreover, he said, it “has not been cooperative with lawmakers or Latinx-serving organizations” working on disinformation.

But inside Facebook, employees had been raising red flags of their own for months, calling for a more robust corporate response to the misinformation campaigns their company was facilitating.

That’s a through-line in a trove of corporate reports, memos and chat logs recently made public by whistleblower and former Facebook employee Frances Haugen.

“We’re not good at detecting misinfo in Spanish or lots of other media types,” reads one such document, a product risk assessment from February 2020, included in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by Haugen’s legal counsel. A consortium of news organizations, including the Los Angeles Times, obtained the redacted versions received by Congress.

The same document later adds, “We will still have gaps in detection & enforcement, esp. for Spanish.”

The next month, another internal report warned that Facebook had “no policies to protect against targeted suppression (e.g., ICE at polls),” alluding to concerns that Latino voters would be dissuaded from showing up to vote if they were told, falsely, that immigration authorities would be present at polling sites.

The report color-coded that concern bright red: high risk, low readiness.

Later, in an assessment of the company’s ability to handle viral misinforma­tion, the report added: “Gaps in detection still exist (e.g. various media types, Spanish posts, etc.)”

A third internal report pointed to racial groups with low historical voter participation rates as one of the main subsets of Facebook users facing an elevated risk from voter disenfranchisement efforts. Latinos are among those groups.

These concerns would prove prescient as the election drew closer.

“Disinformation targeting Latinos in English and Spanish was happening across the country, especially in places with higher populations of Latinos,” including California, Texas, Florida, New York and Arizona, said Licona, the disinformation researcher. “Facebook was – and still is – a major player.”

Company spokesperson Kevin McAlister told The Times that Facebook took “a number of steps” ahead of the 2020 election to combat Spanish-language misinformation.

“We built a Spanish version of our Voting Information Center where people could find accurate information about the election, expanded our voter interference policies and enforced them in Spanish and added two new U.S. fact-checking partners who review content in Spanish on Facebook and Instagram,” McAlister said. “We invested in internal research to help teams proactively identify where we could improve our products and policies ahead of the U.S. 2020 elections.”

Other broader measures announced at the time included not accepting any new political ads in the week before Election Day and removing misinformation about polling conditions in the three days before Election Day.

By Election Day, the company reported having removed more than 265,000 Facebook and Instagram posts that violated its voter interference policies, and added warning labels to more than 180 million instances of fact-checked misinformation.

In a June 2020 post on his personal Facebook page, Zuckerberg promised to “ban posts that make false claims saying ICE agents are checking for immigration papers at polling places, which is a tactic used to discourage voting.”

The company also said that four of its 10 fact-checking partners in the U.S. handle Spanish-language content.

Yet the problems facing Latinos on Facebook, WhatsApp and Instagram extend beyond any one election cycle, Haugen’s leaks reveal.

In 2019, Facebook published an internal study looking at efforts to discourage people from participating in the U.S. census, and how users perceived the company’s response to those efforts.

Among the posts that users reported to Facebook were ones “telling Hispanic[s] to not fill out the form;” “telling Hispanics not to participate in answering questions about citizenship;” saying that people “would be in danger of being deported if they participated;” implying the government would “get” immigrants who participated; and “discouraging ethnic groups” from participating.

Facebook’s researchers have also examined the possibility that the abundance of anti-immigrant rhetoric on the site takes an outsize toll on Latino users’ mental well-being.

While discussing one study with colleagues on an internal message board, a researcher commented: “We did want to assess if vulnerable populations were affected differently, so we compared how Latinx [users] felt in comparison with the rest of the participants, given the exposure to anti-immigration hateful rhetoric. We found that they expressed higher levels of disappointment and anger, especially after seeing violating content.”

In other message boards, employees worried that the company’s products might be contributing to broader racial inequities.

“While we presumably don’t have any policies designed to disadvantage minorities, we definitely have policies/practices and emergent behavior that does,” wrote one employee in a forum called Integrity Ideas to Fight Racial Injustice. “We should comprehensively study how our decisions and how the mechanics of social media do or do not support minority communities.”

Another post in the same racial justice group encouraged the company to become more transparent about XCheck, a program designed to give prominent Facebook users higher-quality content moderation which, in practice, exempted many from following the rules. “XCheck is our technical implementation of a double standard,” the employee wrote.

(Aside from a few upper-level managers and executives, individual Facebook employees’ names were redacted from the documents given to The Times.)

As these internal messages suggest, Facebook – a massive company with tens of thousands of employees – is not a monolith. The leaked documents reveal substantial disagreement among staff about all sorts of issues plaguing the firm, with misinformation prominent among them.

The 2020 product risk assessment indicates one such area of dissent. After noting that Spanish-language misinformation detection remains “very low-performance,” the report offers this recommendation: “Just keep trying to improve. Addition of resources will not help.”

Not everyone was satisfied with that answer.

“For misinfo this doesn’t seem right … curious why we’re saying addition of resources will not help?” one employee asked in a comment. “My understanding is we have 1 part time [software engineer] dedicated on [Instagram] detection right now.”

A second comment added that targeted misinformation “is a big gap. … Flagging that we have zero resources available right now to support any work that may be needed here.” (Redactions make it impossible to tell whether the same employee was behind both comments.)

In communications with the outside world, including lawmakers, the company has stressed the strength of its Spanish-language content moderation rather than the concerns raised by its own employees.

“We conduct Spanish-language content review 24 hours per day at multiple global sites,” the company wrote in May in a statement to Congress. “Spanish is one of the most common languages used on our platforms and is also one of the highest-resourced languages when it comes to content review.”

Two months later, nearly 30 senators and representatives sent a letter to the company expressing concern that its content moderation protocols were still failing to stanch the flow of Spanish-language misinformation.

“We urge you to release specific and clear data demonstrating the resources you currently devote to protect non-English speakers from misinformation, disinformation, and illegal content on your platforms,” the group told Zuckerberg, as well as his counterparts at YouTube, Twitter and Nextdoor.

Zuckerberg’s response, which again emphasized the resources and manpower the company was pouring into non-English content moderation, left them underwhelmed.

“We received a response from Facebook, and it was really more of the same – no concrete, direct answers to any of our questions,” said a spokesperson for Rep. Tony Cárdenas, D-Calif., one of the lead signatories on the letter.

In a subsequent interview with The Times, Cárdenas himself said that he considered his relationship with Facebook “basically valueless.” During congressional hearings, Zuckerberg has “kept trying to give this image that they’re doing everything that they can: they’re making tremendous strides; all that they can do, they are doing; the investments that they’re making are profound and large and appropriate.”

“But when you go through his answers, they were very light on details,” Cárdenas added. “They were more aspirational, and slightly apologetic, but not factual at all.”

It’s a common sentiment on Capitol Hill.

“Online platforms aren’t doing enough to stop” digital misinformation, Sen. Amy Klobuchar, D-Minn., said in a statement, and “when it comes to non-English misinformation, their track record is even worse. … You can still find Spanish-language Facebook posts from November 2020 that promote election lies with no warning labels.”

“I’ve said it before and I’m saying it again: Spanish-language misinformation campaigns are absolutely exploding on social media platforms like Facebook, WhatsApp, etc.,” Rep. Alexandria Ocasio-Cortez, D-N.Y., said in a recent tweet. “It’s putting US English misinfo campaigns to shame.”

Latino advocacy groups, too, have been critical. UnidosUS (formerly the National Council of La Raza) recently cut ties with Facebook, returning a grant from the company out of frustration with “the role that the platform has played in intentionally perpetuating products and policies that harm the Latino community.”

Yet for all the concern from within – and criticism from outside – Spanish is, by Facebook standards, a relatively well-supported language.

One leaked memo from 2021 breaks down different countries by “coverage,” a metric Facebook uses to track how much of the content users see is in a language supported by the company’s “civic classifier” (an AI tool responsible for flagging political content for human review). Per that report, the only Latin American country that has less than 75% coverage is non-Spanish-speaking Haiti. The U.S., for its part, has 99.45% coverage.

And a report on the company’s 2020 expenses indicates that after English, the second-highest number of hours spent on work related to measuring and labeling hate speech went toward Spanish-language content.

Indeed, many of the disclosures that have come out of Haugen’s leaks have focused on coverage gaps in other, less-well-resourced languages, especially in the Middle East and Asia.

But to those seeking to better protect Latinos from targeted disinformation, Facebook’s assertions of sufficient resources – and the concerns voiced by its own employees – raise the question of why it isn’t doing better.

“They always say, ‘We hear you, we’re working on this, we’re trying to get better,’” said González. “And then they just don’t do anything.”

CHIP SOMODEVILLA Getty Images/TNS: Facebook co-founder and CEO Mark Zuckerberg testifies before the House Financial Services Committee on Capitol Hill in 2019.

DREW ANGERER Pool via abacapress.com/TNS: Former Facebook employee Frances Haugen testifies during a Senate panel hearing entitled ‘Protecting Kids Online: Testimony from a Facebook Whistleblower’ on Oct. 5.
