Social media in a bind over extremist content
The line between combating extremist propaganda and infringing on free speech poses an ongoing challenge for social media companies. Social media gives extremists instantaneous access to a large audience and a means to disseminate information in furtherance of a cause. Consequently, terrorist groups have taken advantage of services like Facebook and Twitter in clear violation of user policies. Few would contest the responsibility of moderators to remove such content as quickly as possible. But as more and more people rely on social media for sharing current events, where the boundary lies between responsible moderation and censorship has become an open question.
Under US law, private entities have the right to make their own policies regarding the admissibility of content on their services. Each platform maintains its own terms of use that provide a framework for users to abide by. These policies give general guidelines that are always subject to interpretation by users and moderators alike.
The reality is that such policies provide only a general set of standards. These standards are applied inconsistently and have led to significant controversies that illustrate the challenges both users and moderators face when dealing with contested content.
One such incident occurred in 2015, when Facebook moderators targeted advocates of a left-wing Kurdish organization in Turkey known as the PKK. The advocates reported that any content about the PKK they posted from the US or the UK was quickly deleted from Facebook. Content flagged for removal included images of a map and a burning Turkish flag. A leaked document containing Facebook’s violations list indicated the PKK was the target of a large-scale censorship campaign by the social media giant at the request of the Turkish government. A spokesperson for Facebook stated that the content violated Facebook’s user policy by promoting terrorism. Critics were quick to question how Facebook justified carrying out censorship on behalf of the Turkish government against users outside Turkey’s legal jurisdiction. While Turkey has designated the group as a terrorist organization, that status remains heavily contested, particularly given the group’s instrumental involvement in the campaign against the Islamic State (IS). Terrorist organization or not, Facebook appeared to prioritize maintaining favor with a national government over honoring its users’ right to free speech.
Social media companies have also been criticized for being too lenient on terrorism-related content. Twitter came under fire in 2015 for its slow response to IS’ use of the platform to spread propaganda and attract potential recruits. It wasn’t until the appearance of James Foley’s execution video that Twitter made a concerted effort to curb pro-IS content in response to mounting pressure from the US government. Consequently, many high-profile accounts of IS members were removed, and special software was employed to automate the process.
However, the company struggled with the fact that users whose accounts were deleted simply opened new ones. Critics claimed that if Twitter were sincere in its efforts to curtail abuse of the platform, it would employ more stringent measures to keep IS members off the service for good. When it comes to content promoting terrorism, Twitter is still widely considered more permissive than Facebook, despite increased efforts to curb its popularity with terrorist organizations.
While in practice there are real discrepancies between the types of content each company removes, the irony is that their user policies on terrorism-related activity are strikingly similar. Both Facebook and Twitter have policies prohibiting content that promotes terrorism and egregious displays of violence. The inconsistencies in enforcement, however, were so pronounced that the US government took notice and attempted to intervene by introducing its own regulatory measures. In 2016, regulations were proposed to reconcile the many divergent approaches to curtailing terrorism-related activity across the Internet. The bill itself came under criticism for its vague definition of “terrorist activity,” and was opposed by 37 top Internet companies, including Facebook and Twitter. In an attempt to forestall regulation of their services, Facebook, Twitter, YouTube and Microsoft created a shared database to identify the most egregious terrorist propaganda, to be implemented this year. Their efforts purportedly focus on removing content of the most extreme nature, which violates each respective company’s user policies. When such content is flagged, digital fingerprints are shared in the database so moderators of the other services can quickly track down the content in question and determine whether it violates their own policies.
The program represents a more collective approach to limiting the spread of extremist propaganda. Such efforts are a step in the right direction, but the system needs time before its merits and drawbacks can be fairly assessed. Even with the progress that has been made in reducing extremist content on social media, nothing yet precludes individual companies from violating free speech of their own accord. How the right to free speech squares with content that is less decisively linked to terror and extremism will remain a critical question for some time to come.