Bangkok Post

When politicians lie, should FB promote them?

- Peter Apps is a writer on international affairs, globalisation, conflict and other issues. He is the founder and executive director of the Project for Study of the 21st Century (PS21), a non-national, non-partisan, non-ideological think tank.

During the 2016 US election, Facebook says it did not do enough to enforce its standards as rumours and untruths spread. In the three years since, it says it has hired up to 30,000 people to take down inappropriate or extremist content, clamp down on fake accounts and reduce interactions with so-called "fake news" by almost two thirds.

This week, Facebook's chief of government relations and communications announced an important twist to that policy. As a new US election looms, he said the platform would continue to monitor and sometimes remove untruthful, extremist or otherwise harmful content. But, he added, those rules would not apply to politicians. If someone holding or running for political office says something that would normally be banned, Facebook's updated "newsworthiness" clause means users will still be able to see and share it.

The announcement by Nick Clegg, a former British deputy prime minister and party leader who now leads Facebook's political liaison, showed just what a complex position the firm and other internet giants now find themselves in. With Facebook and other platforms such as Twitter now central to politics and community relations around the world, the firm is increasingly held responsible for the consequences of content posted on it. But it is extremely unsure what to do about that — and, perhaps unsurprisingly, reluctant to find itself in conflict with politicians who may one day be responsible for regulating it.

Much of Mr Clegg's speech, to the Atlantic Festival in Washington DC, appeared aimed at countering suggestions that Facebook should be broken up entirely. That, he said, would undermine a major US brand that supported well-paid tech jobs and the broader economy. The firm could not and should not become effectively a censor of political dialogue and debate, he argued, although it might still sometimes block content it believed risked sparking violence or undermining human rights.

Critics, however, say the platform has been failing on that front for years, particularly in India, where it is widely cited as a factor in rising ethnic violence, notably against Muslims. With 300 million users, many using languages Facebook struggles to monitor, the Indian market has long been a challenge for the firm. According to internet monitoring organisation Equality Labs, 93% of content reported in India as Islamophobic, extremist or inciting remained on the platform after being reported.

The rise in Facebook-based incitement has come at the same time as — and quite possibly fuelled — a sectarian trend in Indian politics. The risk is that Facebook's new "newsworthiness" measure will further encourage such comments from politicians. In nearby Myanmar, similar incitement has fuelled violence against the Muslim Rohingya minority — and it remains unclear whether Myanmar's rulers and military will be counted as "politicians" under the updated rules.

Other countries have, on occasion, taken a much tougher line with Facebook. After ethnic riots against Muslims last year, Sri Lanka blocked the service and several others including Twitter. That, however, appears to have done little to blunt the unpleasant aspects of such platforms.

That is particularly true in the United States. Clearly, social media can give an insight into objectionable views held by individuals who might previously have kept them hidden. The fact they can now be seen more widely, however, can make such views appear more mainstream. A survey earlier this year looked at almost 3,000 serving US police officers as well as some retired colleagues. A fifth of serving officers, it revealed, had shared content judged as troubling, including racial epithets. That rose to two fifths among retired officers.

The risk of Facebook's new newsworthiness terms, of course, is that they provide an incentive for right-wing and ethnically divisive politicians to push more aggressive rhetoric. Those who monitor the internet closely say extreme and divisive positions are more likely to get shared. And, of course, those who wish to spread such views now have an additional incentive to enter mainstream politics to protect their ability to publish them.

Even among those who fall short of outright extremism or hate speech, this may encourage a growing trend among politicians to disregard the truth. Much of this, of course, is about the current incumbent of the White House. President Donald Trump's Twitter feed, in particular, continues to be an egregious offender. The truth may be that Facebook and others have simply given up any thought of moderating the current POTUS, and the updated rules for politicians mean they will not risk his ire by interfering with his output.

Ultimately, this all points to a bigger and in many respects much more disturbing picture. Facebook and other social media are now the battleground, and what they will and will not host is crucial.

But what is even more alarming is the growing number of politicians who feel neither constrained by truth nor worried about the impact of their divisive words, even if those words result in violence, catastrophe or death.

AFP — Facebook has vowed to screen untruthful, extremist content — unless it comes from politicians.
