Khaleej Times

Can we trust Facebook to keep out hate speech?

Social media posts that pit people against each other in Myanmar, Sri Lanka and India are hurting efforts at peace

- Sandeep Gopalan is the Pro Vice-Chancellor and Professor of Law in Deakin Law School at Deakin University


Should Facebook and other technology platforms do more to prevent human rights abuses? The question has assumed significance recently due to violent incidents in countries including India, Sri Lanka and Myanmar following inflammatory posts on Facebook or WhatsApp. A report commissioned by Facebook from Business for Social Responsibility (BSR) to analyse the company's response in Myanmar found that Facebook was deficient, and it issued a number of recommendations. The report has the colour of greenwashing and virtue-signalling, underlining the reality that Facebook has few incentives to tackle the problem. Governments need to do more to regulate these platforms. Here's why.

First, consider the context. Myanmar has experienced repression for most of its post-independence existence. Its population is majority Buddhist, with smatterings of Christians and Rohingya Muslims. Faced with orchestrated ethnic violence, an estimated 700,000 Rohingya have fled the country.

To be sure, the vitriol against the Rohingya appears to be longstanding, and Burmese society has a history of religious prejudice and conflict that is deep-rooted and pervasive across socioeconomic divisions. Facebook has provided rocket fuel to the worst of those tendencies and enabled the spread of hateful messages calculated to incite violence: posts that liken the community to animals or maggots, slur the men as rapists, and call for the extermination of the entire race. Crucially, Facebook has about 20 million accounts in Myanmar, the same number as those with Internet access, making it synonymous with the Net. It is reported that mobile phones, the primary means of Internet access, come preloaded with Facebook accounts.

In this milieu of hate, with 20 million potential outlets for disseminating poisonous messages, Facebook had no staff on the ground. It relied on a handful of staff to cull posts and outsourced the job of policing to unpaid others who were supposed to implement its "Community Standards."

The consequences were predictable. Facebook did not do enough to prevent incitement of offline violence against vulnerable groups. So, what's Facebook to do to prevent its platform from being used as a coordinating ground for ethnic violence? The BSR report makes several recommendations. The important ones are considered below.

First, BSR recommends a "standalone human rights policy." This is meant to aid the formalisation of a structured approach to human rights across the company and drive its strategy.

Second, Facebook is asked to publish periodic human rights updates. Third, it should commit resources to building a team of Burmese-speaking staff versed in local culture who can implement the community standards. Fourth, the company ought to be stricter in its interpretation of what constitutes a credible threat of violence, particularly in relation to false information. Fifth, it should partner with local NGOs and others to police the application of its community standards. Sixth, it should invest in AI and machine learning to identify and remove harmful content in a timely manner. Seventh, it should introduce features designed to enhance digital literacy. Finally, Facebook is asked to partner with agencies to create and disseminate "counter hate speech" content.

The reality is that ethnic violence against the Rohingya is not the result of threats to free speech; it is due to the exercise of speech rights via the Facebook platform. While the contextual factors create an environment conducive to prejudice, it is Facebook's hands-off approach to messages posted on its platform that allowed hate to spread and violence to be incited against the Rohingya. If Facebook really believed the environment was poisonous, its decision to have no staff in Myanmar and to employ only a very small number of externally located staff to police posted content is especially culpable. Its actions are tantamount to handing a matchbox to a pyromaniac standing by an oil spill.

Based on its record, and against the grain of the greenwashing in BSR's report, it would be folly to expect otherwise. Facebook has few incentives to check such behaviour: employing staff on the scale necessary to police messages posted by 20 million users would be financially ruinous. And it does not bear the consequences of hateful, defamatory, or otherwise offensive speech; those are borne by users and others. Meanwhile, Facebook continues to mint money by expanding its user base and the resultant advertising avenues.

Given these realities, the BSR report's points about state intervention in digital communications and prosecutions of journalists should be taken as distractions. Even if accurate, they are beside the point and have little to do with the spreading of hate against the Rohingya on Facebook.

The solution to the real problem is simple: employ adequate numbers of staff to identify and remove offensive messages in a timely manner. These staff must have the linguistic and cultural proficiency to spot problematic content and be authorised to remove it quickly. Where feasible, Facebook must invest in AI to automate such removals. And when hateful or violence-inspiring speech is identified, it must freeze the offending accounts and turn those users over to law enforcement agencies for prosecution.

Facebook reports that it now employs about 100 Myanmar-language experts to review content. In July this year, it claims to have amended its credible-violence policies to "more proactively delete inaccurate or misleading information created or shared with the purpose of contributing to, or exacerbating, violence or physical harm." It has also deleted some high-profile accounts, including those of military officials.

These are baby steps. It remains to be seen whether 100 people can effectively police speech across 20 million users. And the problem is not confined to Myanmar: similar problems have been observed in India and Sri Lanka, where WhatsApp messages have incited mob lynchings of innocent people based on misinformation branding them as child-lifters, or on their ethnic or religious identity.

Facebook must not be allowed to escape blame by shifting the responsibility onto "bad actors," "human rights challenges," or legal gaps. Whilst the government bears responsibility for maintaining law and order and protecting lives, given that Facebook has been shown to be an instrumentality for the commission of violence, the company must be mandated to do more. It cannot free-ride on enforcement or point to community standards and do business as usual whilst innocent people die.

Myanmar and other countries need the company to identify and delete hateful messages that threaten human lives; that is the best kind of human rights policy. It ought to be the price of Facebook's entry into any market.
