Pakistan Today (Lahore)

Facebook hasn’t done enough to fix its hate-speech problem in India

FACEBOOK’S FORMER PUBLIC POLICY DIRECTOR ANKHI DAS, WHO HAD PERSONALLY SHARED ISLAMOPHOBIC CONTENT, ASKED EMPLOYEES NOT TO APPLY HATE SPEECH RULES TO POSTS OF CERTAIN BJP POLITICIANS

- Quartz, Ananya Bhattacharya

FACEBOOK has been selectively curbing hate speech, misinformation and inflammatory posts – particularly anti-Muslim content – in India, according to more than a dozen leaked internal memos and studies seen by the Wall Street Journal, the New York Times and the Associated Press.

Facebook sees India as one of the most “at-risk countries” in the world, meaning that the company recognises it needs better algorithms and teams to respond to events in almost real-time. India is Facebook’s largest market, with at least 34 crore Facebook accounts and 40 crore WhatsApp users.

The platform’s woes have been exacerbated by its own “recommended” feature and a dearth of reliable content moderation systems, the papers shared by whistleblower Frances Haugen reveal. And employees’ concerns over the mishandling of such issues, and the viral “malcontent” on the platform, appear to have been swept under the rug for the most part.

TRACK RECORD

Last year, the Wall Street Journal reported allegations of Facebook favouring Prime Minister Narendra Modi’s Bharatiya Janata Party. The whistleblower’s exposé doubles down on these claims, providing further evidence that Facebook’s former public policy director Ankhi Das, who had personally shared Islamophobic content, asked employees not to apply hate speech rules to posts of certain BJP politicians.

The author of a December 2020 internal document notes that “Facebook routinely makes exceptions for powerful actors when enforcing content policy”. In the same memo, a former Facebook chief security officer says that, outside the United States, “local policy heads are generally pulled from the ruling political party and are rarely drawn from disadvantaged ethnic groups, religious creeds, or castes”, which “naturally bends decision-making towards the powerful”.

ANTI-MUSLIM PROPAGANDA

Much of the rhetoric on Facebook’s platforms teeters on India’s religious fault line – and the company seems to be quietly letting it all go. A case study from March this year shows Facebook was debating whether it could control the “fear-mongering, anti-Muslim narratives” pushed by the Rashtriya Swayamsevak Sangh, a far-right Hindu nationalist group to which Modi belonged during his youth.

In a document titled “Lotus Mahal” – the lotus is the BJP’s party symbol – Facebook noted that members with links to the BJP had created multiple Facebook accounts to amplify anti-Muslim content, ranging from “calls to oust Muslim populations from India” to “love jihad”, an unproven conspiracy theory by Hindu hardliners who accuse Muslim men of using interfaith marriages to coerce Hindu women into changing their religion.

There were also Hindu nationalist groups with ties to the ruling party that continued their activity despite posting inflammatory anti-Muslim content, including “dehumanising posts comparing Muslims to ‘pigs’ and ‘dogs’ and misinformation…”, the documents show.

In more extreme cases, such hate speech and fake news have led to physical harm in the real world. In February last year, a politician from Modi’s party posted a video on Facebook calling upon his supporters to remove mostly Muslim protesters from a road in New Delhi, sparking riots that killed 53 people. A New Delhi government committee found Facebook complicit.

After the pandemic hit, “coronajihad”, a term blaming Muslims for intentionally spreading the Covid-19 virus, started circulating on social media. It took Facebook days to remove the hashtag, and by then, doctored video clips and posts purportedly showing Muslims spitting on authorities and hospital staff had already made the rounds. The conspiracy theory cost some Muslims dearly, leading to violence, business boycotts, and even jail time for some.

Despite all the internal deliberations, Facebook did not kick out these hardline Hindu groups, citing “political sensitivities”. Facebook spokesperson Andy Stone told the Wall Street Journal on October 23 that the company bans groups or individuals “after following a careful, rigorous and multidisciplinary process”, and that some leaked reports were working documents still under investigation.

CONTENT MODERATION

But Facebook’s own failing systems are likely getting in the way of these so-called investigations, too. Back in February 2019, just before India’s most recent general elections, a Facebook employee created a test user to understand what a new user in the country would see on their news feed if all they did was follow pages and groups solely recommended by the platform itself.

During this time, there was a militant attack in Kashmir, which killed over 40 Indian soldiers. In the aftermath, Facebook groups were flooded with hate speech and unverified rumours, and viral content ran rampant, the documents show. The new user’s recommended groups were inundated with fake news, anti-Pakistan rhetoric and Islamophobic content.

Many of the rumours and calls to violence against Muslims were “never flagged or actioned” because Facebook lacked “classifiers” and “moderators” in Hindi and Bengali. In a statement to the

