Global Times

Social media in a bind over extremist content

- By Daniel Hockenberry

The line between combating extremist propaganda and infringing on free speech poses an ongoing challenge for social media companies. Social media gives extremists instantaneous access to a large audience and a means to disseminate information in furtherance of a cause. Consequently, terrorist groups have taken advantage of services like Facebook and Twitter in clear violation of user policies. Few would contest the responsibility of moderators to remove such content as quickly as possible. But as more and more people rely on social media for sharing current events, where the boundary lies between responsible moderation and censorship has become a pressing question.

Under US law, private entities have the right to set their own policies regarding the admissibility of content on their services. Each platform maintains its own terms of use that provide a framework for users to abide by. These policies offer general guidelines that are always subject to interpretation by users and moderators alike.

The reality is that such policies provide only a general set of standards. These standards are applied inconsistently and have led to significant controversies that illustrate the challenges both users and moderators face when dealing with contested content.

One such incident occurred in 2015, when Facebook moderators targeted advocates of the PKK, a left-wing Kurdish organization in Turkey. The advocates reported that any content about the PKK they posted from the US or the UK was quickly deleted from Facebook. Content flagged for removal included images of a map and of a burning Turkish flag. A leaked document containing Facebook's violations list indicated the PKK was the target of a large-scale censorship campaign by the social media giant at the request of the Turkish government. A spokesperson for Facebook stated that the content violated Facebook's user policy by promoting terrorism. Critics were quick to question how Facebook justified carrying out censorship on behalf of the Turkish government against users outside Turkey's legal jurisdiction. While Turkey has designated the group a terrorist organization, that status remains heavily contested, particularly given the group's instrumental role in the campaign against the Islamic State (IS). Terrorist organization or not, Facebook appeared to prioritize maintaining favor with a national government over honoring its users' right to free speech.

Social media companies have also been criticized for being too lenient on terrorist-related content. Twitter came under fire in 2015 for its slow response to IS' use of the platform to spread propaganda and attract potential recruits. It wasn't until the appearance of James Foley's execution video, and mounting pressure from the US government, that Twitter made a substantial effort to curb pro-IS content. Consequently, many high-profile accounts of IS members were removed and special software was employed to automate the process.

However, the company struggled with the fact that users whose accounts were deleted simply opened new ones. Critics claimed that if Twitter were sincere in its efforts to curtail abuse of the platform, it would employ more stringent measures to keep IS members off the service for good. When it comes to content promoting terrorism, Twitter is still widely considered more permissive than Facebook, despite its increased efforts to curb the platform's popularity with terrorist organizations.

While in practice there are real discrepancies between the types of content each company removes, the irony is that their user policies on terrorism-related activity are strikingly similar. Both Facebook and Twitter have policies prohibiting content that promotes terrorism or egregious displays of violence. These inconsistencies in practice were pronounced enough that the US government took notice and attempted to intervene with regulatory measures of its own. In 2016, regulations were proposed to harmonize the many divergent approaches to curtailing terrorist-related activity across the Internet. The bill came under criticism for its vague definition of “terrorist activity,” and was opposed by 37 top Internet companies including Facebook and Twitter. In an attempt to head off regulation of their services, Facebook, Twitter, YouTube and Microsoft created a shared database to identify the most nefarious terrorist propaganda, to be implemented this year. Their efforts purportedly focus on removing content of the most extreme nature, which violates each respective company's user policies. When such content is flagged, digital fingerprints are shared in the database so that moderators of the other services can quickly track down the content in question and determine whether it violates their own policies.

The program represents a more collective approach to limiting the spread of extremist propaganda. Such efforts are a step in the right direction, but the system needs time before its merits and drawbacks can be fairly assessed. Even with the progress made in reducing extremist content on social media, nothing yet precludes individual companies from violating free speech of their own accord. How the right to free speech squares with content less decisively linked to terror and extremism will remain a critical question for some time to come.

Illustration: Liu Rui/GT
