The Mercury News

Facebook grapples with amplified version of problems

India proving to be a struggle with hate speech, violence

- By Sheera Frenkel and Davey Alba

On Feb. 4, 2019, a Facebook researcher created a new user account to see what it was like to experience the social media site as a person living in Kerala, India.

For the next three weeks, the account operated by a simple rule: Follow all the recommendations generated by Facebook's algorithms to join groups, watch videos and explore new pages on the site.

The result was an inundation of hate speech, misinformation and celebrations of violence, which were documented in an internal Facebook report published later that month.

“Following this test user's News Feed, I've seen more images of dead people in the past three weeks than I've seen in my entire life total,” the Facebook researcher wrote.

The report was one of dozens of studies and memos written by Facebook employees grappling with the effects of the platform on India. They provide stark evidence of one of the most serious criticisms levied by human rights activists and politicians against the world-spanning company: It moves into a country without fully understanding its potential effects on local culture and politics, and fails to deploy the resources to act on issues once they occur.

With 340 million people using Facebook's various social media platforms, India is the company's largest market. And Facebook's problems on the subcontinent present an amplified version of the issues it has faced throughout the world, made worse by a lack of resources and a lack of expertise in India's 22 officially recognized languages.

The internal documents, obtained by a consortium of news organizations that included The New York Times, are part of a larger cache of material called The Facebook Papers. They were collected by Frances Haugen, a former Facebook product manager who became a whistleblower and recently testified before a Senate subcommittee about the company and its social media platforms. References to India were scattered among documents filed by Haugen to the Securities and Exchange Commission in a complaint earlier this month.

The documents include reports on how bots and fake accounts tied to the country's ruling party and opposition figures were wreaking havoc on national elections. They also detail how a plan championed by Facebook CEO Mark Zuckerberg to focus on "meaningful social interactions," or exchanges between friends and family, was leading to more misinformation in India, particularly during the pandemic.

Facebook did not have enough resources in India and was unable to grapple with the problems it had introduced there, including anti-Muslim posts, according to its documents. Eighty-seven percent of the company's global budget for time spent on classifying misinformation is earmarked for the United States, while only 13% is set aside for the rest of the world, even though North American users make up only 10% of the social network's daily active users, according to one document describing Facebook's allocation of resources.

Andy Stone, a Facebook spokesperson, said the figures were incomplete and don't include the company's third-party fact-checking partners, most of whom are outside the United States.

That lopsided focus on the United States has had consequences in a number of countries besides India. Company documents showed that Facebook installed measures to demote misinformation during the November election in Myanmar, including disinformation shared by the Myanmar military junta.

The company rolled back those measures after the election, despite research that showed they lowered the number of views of inflammatory posts by 25.1% and photo posts containing misinformation by 48.5%. Three months later, the military carried out a violent coup in the country. Facebook said that after the coup, it implemented a special policy to remove praise and support of violence in the country, and later banned the Myanmar military from Facebook and Instagram.

In Sri Lanka, people were able to automatically add hundreds of thousands of users to Facebook groups, exposing them to violence-inducing and hateful content. In Ethiopia, a nationalist youth militia group successfully coordinated calls for violence on Facebook and posted other inflammatory content.

Facebook has invested significantly in technology to find hate speech in various languages, including Hindi and Bengali, two of the most widely used languages, Stone said. He added that Facebook reduced the amount of hate speech that people see globally by half this year.

“Hate speech against marginalized groups, including Muslims, is on the rise in India and globally,” Stone said. “So we are improving enforcement and are committed to updating our policies as hate speech evolves online.”

In India, “there is definitely a question about resourcing” for Facebook, but the answer is not “just throwing more money at the problem,” said Katie Harbath, who spent 10 years at Facebook as a director of public policy and worked directly on securing India’s national elections. Facebook, she said, needs to find a solution that can be applied to countries around the world.

Facebook employees have run various tests and conducted field studies in India for several years. That work increased before India’s 2019 national elections; in late January of that year, a handful of Facebook employees traveled to the country to meet with colleagues and speak to dozens of local Facebook users.

According to a memo written after the trip, one of the key requests from users in India was that Facebook “take action on types of misinfo that are connected to real-world harm, specifically politics and religious group tension.”

Ten days after the researcher opened the fake account to study misinformation, a suicide bombing in the disputed border region of Kashmir set off a round of violence and a spike in accusations, misinformation and conspiracies between Indian and Pakistani nationals.

After the attack, anti-Pakistan content began to circulate in the Facebook-recommended groups that the researcher had joined. Many of the groups, she noted, had tens of thousands of users. A different report by Facebook, published in December 2019, found Indian Facebook users tended to join large groups, with the country’s median group size at 140,000 members.

Graphic posts, including a meme showing the beheading of a Pakistani national and dead bodies wrapped in white sheets on the ground, circulated in the groups she joined.

After the researcher shared her case study with co-workers, colleagues commented on the posted report that they were concerned about misinformation surrounding the upcoming elections in India.

Two months later, after India’s national elections had begun, Facebook put in place a series of steps to stem the flow of misinformation and hate speech in the country, according to an internal document called Indian Election Case Study.

The case study painted an optimistic picture of Facebook’s efforts, including adding more fact-checking partners (the third-party network of outlets with which Facebook works to outsource fact-checking) and increasing the amount of misinformation it removed. It also noted how Facebook had created a “political whitelist to limit PR risk,” essentially a list of politicians who received a special exemption from fact-checking.

The study did not note the immense problem the company faced with bots in India, nor issues such as voter suppression. During the election, Facebook saw a spike in bots or fake accounts linked to various political groups, as well as efforts to spread misinformation that could have affected people’s understanding of the voting process.

In a separate report produced after the elections, Facebook found that more than 40% of top views, or impressions, in the Indian state of West Bengal were “fake/inauthentic.” One inauthentic account had amassed more than 30 million impressions.

A report published in March showed that many of the problems cited during the 2019 elections persisted.

In the internal document, called Adversarial Harmful Networks: India Case Study, Facebook researchers wrote that there were groups and pages “replete with inflammatory and misleading anti-Muslim content” on Facebook.

The report said there were a number of dehumanizing posts comparing Muslims to “pigs” and “dogs,” and misinformation claiming that the Quran, the holy book of Islam, calls for men to rape their female family members.

SAUMYA KHANDELWAL — THE NEW YORK TIMES: Supporters of the Bharatiya Janata Party celebrate in New Delhi as the vote count increases in 2019. With 340 million people using Facebook’s various social media platforms, India is its largest market.
