Can we trust Facebook to keep out hate speech?
Social media posts that pit people against each other in Myanmar, Sri Lanka and India are hurting efforts at peace
Should Facebook and other technology platforms do more to prevent human rights abuses? The question has assumed significance recently after violent incidents in India, Sri Lanka and Myanmar that followed inflammatory posts on Facebook or WhatsApp. A Facebook-commissioned report by Business for Social Responsibility (BSR), analysing the company’s response in Myanmar, found Facebook deficient and issued a number of recommendations. Yet the report smacks of greenwashing and virtue-signalling, underlining the reality that Facebook has few incentives to tackle the problem. Governments need to do more to regulate these platforms. Here’s why.
First, consider the context. Myanmar has experienced repression for most of its post-independence existence. Its population is majority Buddhist, with minorities of Christians and Rohingya Muslims. Faced with orchestrated ethnic violence, an estimated 700,000 Rohingya have fled the country.
To be sure, the vitriol against the Rohingya is longstanding: Burmese society has a history of religious prejudice and conflict that is deep-rooted and pervasive across socioeconomic divisions. But Facebook has provided rocket fuel to the worst of those tendencies, enabling the spread of hateful messages calculated to incite violence. Posts have likened the community to animals or maggots, slurred the men as rapists, and called for the extermination of the entire people. Crucially, Facebook has about 20 million accounts in Myanmar, roughly the same number as those with Internet access, making it synonymous with the Net. Mobile phones, the primary means of Internet access, are reported to come preloaded with Facebook accounts.
In this milieu of hate, with 20 million potential outlets for poisonous messages, Facebook had no staff on the ground. It relied on a handful of externally located staff to cull posts and outsourced the job of policing to unpaid volunteers who were supposed to enforce its “Community Standards.”
The consequences were predictable. Facebook did not do enough to prevent incitement of offline violence against vulnerable groups. So, what’s Facebook to do to prevent its platform from being used as a coordinating ground for ethnic violence? The BSR report makes several recommendations. The important ones are considered below.
First, BSR recommends a “standalone human rights policy.” This is meant to formalise a structured approach to human rights across the company and drive its strategy.
Second, Facebook is asked to publish periodic human rights updates. Third, it should commit resources to building a team of Burmese-speaking staff versed in local culture who can implement the community standards. Fourth, the company ought to be stricter in interpreting what constitutes a credible threat of violence, particularly in relation to false information. Fifth, it should partner with local NGOs and others to police the application of its community standards. Sixth, it should invest in AI and machine learning to identify and remove harmful content in a timely manner. Seventh, it should introduce features designed to enhance users’ digital literacy. Finally, Facebook is asked to partner with agencies to create and disseminate “counter hate speech” content.
The reality is that ethnic violence against the Rohingya is not the result of threats to free speech; it is due to the exercise of speech rights via the Facebook platform. While contextual factors created an environment conducive to prejudice, it was Facebook’s hands-off approach to messages posted on its platform that allowed hate to spread and violence to be incited against the Rohingya. If it really believed the environment was poisonous, Facebook’s decision not to station staff in Myanmar, relying instead on a very small number of externally located moderators to police content, is especially culpable. Its conduct is tantamount to handing a matchbox to a pyromaniac standing by an oil spill.
Based on its record, and against the grain of the BSR report’s greenwashing, it would be folly to expect anything different. Facebook has few incentives to check such behaviour: employing staff on the scale necessary to police messages posted by 20 million users would be financially ruinous. And it does not bear the consequences of hateful, defamatory or otherwise offensive speech; those are borne by users and others. Meanwhile, Facebook continues to mint money by expanding its user base and the advertising revenue that follows.
Given these realities, the BSR report’s points about state intervention in digital communications and prosecutions of journalists should be treated as distractions. Even if true, they are beside the point and have little to do with the spread of hate against the Rohingya on Facebook.
The solution to the real problem is simple: employ adequate numbers of staff to identify and remove offensive messages in a timely manner. These staff must have the linguistic and cultural proficiency to spot problematic content and the authority to remove it quickly. Where feasible, Facebook should invest in AI to automate such removals. And when hateful or violence-inspiring speech is identified, it must freeze the offending accounts and refer those users to law enforcement agencies for prosecution.
Facebook reports that it now employs about 100 Myanmar language experts to review content. In July this year, it claims to have amended its credible violence policies to “more proactively delete inaccurate or misleading information created or shared with the purpose of contributing to, or exacerbating, violence or physical harm.” It has also deleted some high-profile accounts including those of military officials.
These are baby steps. It remains to be seen whether 100 people can effectively police speech across 20 million users. And the problem is not confined to Myanmar: similar patterns have been observed in India and Sri Lanka, where WhatsApp messages have incited mob lynchings of innocent people based on misinformation branding them child abductors, or on their ethnic or religious identity.
Facebook must not be allowed to escape blame by shifting responsibility onto “bad actors,” “human rights challenges” or legal gaps. Whilst governments bear responsibility for maintaining law and order and protecting lives, given that Facebook has been shown to be an instrument in the commission of violence, the company must be mandated to do more. It cannot free-ride on state enforcement, point to its community standards, and do business as usual whilst innocent people die.
Myanmar and other countries need the company to identify and delete hateful messages that threaten human lives — that’s the best kind of human rights policy. That ought to be the price for Facebook’s entry into any market.