The Guardian Australia

A ‘safe space for racists’: antisemitism report criticises social media giants

- Maya Wolfe-Robinson

There has been a serious and systemic failure to tackle antisemitism across the five biggest social media platforms, resulting in a “safe space for racists”, according to a report.

Facebook, Twitter, Instagram, YouTube and TikTok failed to act on 84% of posts spreading anti-Jewish hatred and propaganda reported via the platforms’ official complaints system.

Researchers from the Center for Countering Digital Hate (CCDH), a UK/US non-profit organisation, flagged hundreds of antisemitic posts over a six-week period earlier this year. The posts, including Nazi, neo-Nazi and white supremacist content, received up to 7.3 million impressions.

Although each of the 714 posts clearly violated the platforms’ policies, fewer than one in six were removed or had the associated accounts deleted after being pointed out to moderators.

The report found that the platforms are particularly poor at acting on antisemitic conspiracy theories, including tropes about “Jewish puppeteers”, the Rothschild family and George Soros, as well as misinformation connecting Jewish people to the pandemic. Holocaust denial was also often left unchecked, with 80% of posts denying or downplaying the murder of 6 million Jews receiving no enforcement action whatsoever.

Facebook was the worst offender, acting on just 10.9% of posts, despite introducing tougher guidelines on antisemitic content last year. In November 2020, the company updated its hate speech policy to ban content that denies or distorts the Holocaust.

However, a post promoting a viral article that claimed the Holocaust was a hoax, accompanied by a falsified image of the gates of Auschwitz with a white supremacist meme, was not removed after researchers reported it to moderators. Instead, it was labelled as false information, which CCDH say contributed to it reaching hundreds of thousands of users. Statistics from Facebook’s own analytics tool show the article received nearly a quarter of a million likes, shares and comments across the platform.

Twitter also showed a poor rate of enforcement action, removing just 11% of posts or accounts and failing to act on hashtags such as #holohoax (often used by Holocaust deniers) or #JewWorldOrder (used to promote anti-Jewish global conspiracies). Instagram also failed to act on antisemitic hashtags, as well as videos inciting violence towards Jewish people.

YouTube acted on 21% of reported posts, while Instagram and TikTok acted on around 18%. On TikTok, an app popular with teenagers, antisemitism frequently takes the form of racist abuse sent directly to Jewish users as comments on their videos.

The report, entitled Failure to Protect, found that the platform did not act in three out of four cases of antisemitic comments sent to Jewish users. When TikTok did act, it more frequently removed individual comments instead of banning the users who sent them, barring accounts that sent direct antisemitic abuse in just 5% of cases.

Forty-one videos identified by researchers as containing hateful content, which have racked up a total of 3.5m views over an average of six years, remain on YouTube.

The report recommends financial penalties to incentivise better moderation, with improved training and support. Platforms should also remove groups dedicated to antisemitism and ban accounts that send racist abuse directly to users.

Imran Ahmed, CEO of CCDH, said the research showed that online abuse was not about algorithms or automation, as the tech companies allowed “bigots to keep their accounts open and their hate to remain online”, even after human moderators had been alerted.

He said that social media, which he described as “how we connect as a society”, has become a “safe space for racists” to normalise “hateful rhetoric without fear of consequences”. “This is why social media is increasingly unsafe for Jewish people, just as it is becoming for women, Black people, Muslims, LGBT people and many other groups,” he added.

Ahmed said the test of the government’s online safety bill, first drafted in 2019 and introduced to parliament in May, is whether platforms can be made to enforce their own rules or face consequences themselves.

“While we have made progress in fighting antisemitism on Facebook, our work is never done,” said a spokesperson for the company, which also owns Instagram. The statement said the prevalence of hate speech on the platform was decreasing, and that “given the alarming rise in antisemitism around the world, we have and will continue to take significant action through our policies”.

A Twitter spokesperson said the company condemned antisemitism and was working to make the platform a safer place for online engagement. “We recognise that there’s more to do, and we’ll continue to listen and integrate stakeholders’ feedback in these ongoing efforts,” the spokesperson said.

TikTok said in a statement that it condemned antisemitism and did not tolerate hate speech, and proactively removed accounts and content that violated its policies. “We are adamant about continually improving how we protect our community,” the company said.

YouTube said in a statement that it had “made significant progress” in removing hate speech over the last few years. “This work is ongoing and we appreciate this feedback,” said a YouTube spokesperson.

• This article was amended on 2 August 2021 to add a statement from Facebook that was provided after publication.

The study, titled Failure to Protect, found that social media platforms were particularly poor at acting on antisemitic conspiracy theories. Photograph: Yui Mok/PA
