Kuwait Times

Tech companies move to target terrorist propaganda online


Facebook, Microsoft, Twitter and YouTube are joining forces to more quickly identify the worst terrorist propaganda and prevent it from spreading online. The new program announced Monday would create a database of unique digital “fingerprints” to help automatically identify videos or images the companies could remove.

The move by the technology companies, which is expected to begin in early 2017, aims to assuage government concerns (and derail proposed new federal legislation) over social media content that is seen as increasingly driving terrorist recruitment and radicalization, while also balancing free-speech issues.

Technical details were being worked out, but Microsoft pioneered similar technology to detect, report and remove child pornography through such a database in 2009. Unlike those images, which are plainly illegal under US law, questions about whether an image or video promotes terrorism can be more subjective, depending on national laws and the rules of a particular company’s service.
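For illustration only: the companies have not specified the fingerprinting scheme, and systems like Microsoft’s PhotoDNA use perceptual hashes that still match after resizing or re-encoding. A minimal Python sketch of the shared-database idea, with hypothetical names and a plain cryptographic digest standing in for a perceptual fingerprint, might look like this:

    import hashlib

    # Hypothetical sketch: a shared set of fingerprints of removed content.
    # A SHA-256 digest is used here only to demonstrate the exact-match
    # workflow; real perceptual hashes tolerate small changes to the media.
    shared_database = set()

    def fingerprint(media_bytes: bytes) -> str:
        # Derive a fixed-length digest that stands in for the media file.
        return hashlib.sha256(media_bytes).hexdigest()

    def flag_removed_content(media_bytes: bytes) -> None:
        # A participating company contributes the fingerprint of content
        # it has already removed from its own service.
        shared_database.add(fingerprint(media_bytes))

    def matches_known_content(media_bytes: bytes) -> bool:
        # Other companies can check new uploads against the shared set.
        return fingerprint(media_bytes) in shared_database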

Tool of radicalization

Social media has increasingly become a tool for recruiting and radicalization by the Islamic State group and others. Its use by terror groups and supporters has added to the threat from so-called lone-wolf attacks and decreased the time from “flash to bang” - or radicalization to violence - with little or no time for law enforcement to follow evidentiary trails before an attack.

Under the new partnership, the companies promised to share among themselves “the most extreme and egregious terrorist images and videos we have removed from our services - content most likely to violate all our respective companies’ content policies,” according to a joint announcement Monday evening.

When such content is shared internally, the other participating companies will be notified and can use the digital fingerprints to quickly identify the same content on their own services and judge whether it violates their rules. If so, companies can delete the material and possibly disable the account, as appropriate.
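A fingerprint match, in other words, only triggers each company’s own review rather than an automatic takedown. A minimal sketch of that step, again with hypothetical names and under the same assumptions as above:

    def handle_notification(fp: str, own_index: dict, violates_own_policy) -> str:
        # own_index maps fingerprints to content hosted on this service;
        # violates_own_policy is this company's own review function.
        content = own_index.get(fp)
        if content is None:
            return "not hosted here"
        if violates_own_policy(content):
            # Delete the material and possibly disable the account.
            return "removed"
        # The same content may be permitted under another company's rules.
        return "kept"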

Most social media services explicitly do not allow content that supports violent action or illegal activities. Twitter, for example, says users “may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability or disease.”

“We really are going after the most obvious serious content that is shared online - that is, the kind of recruitment videos and beheading videos more likely to be against all our content policies,” said Sally Aldous, a Facebook spokeswoman.

The White House praised the joint effort. “The administration believes that the innovative private sector is uniquely positioned to help limit terrorist recruitment and radicalization online,” said National Security Council spokesman Carl Woog. “Today’s announcement is yet another example of tech communities taking action to prevent terrorists from using these platforms in ways their creators never intended.” The new program caps a year of efforts to tamp down on social media’s use by terrorist groups.

Lawmakers last year introduced legislation that would require social media companies to report any online terrorist activity they became aware of to law enforcement. The bill by Sens. Dianne Feinstein, D-Calif., and Richard Burr, R-N.C., was criticized for not defining “terrorist activity,” which could have drowned government agencies in reports. The bill was opposed by the Internet Association, which represents 37 internet companies, including Facebook, Snapchat, Google, LinkedIn, Reddit, Twitter, Yahoo and others.

The bill came days after Syed Farook and his wife, Tashfeen Malik, carried out a shooting attack in San Bernardino, California, killing 14 people and injuring 21 others. A Facebook post on Malik’s page around the time of the attack included a pledge of allegiance to the leader of the Islamic State group. Facebook found the post - which was under an alias - the day after the attack. The company removed the profile from public view and informed law enforcement. Such a proactive effort had previously been uncommon.

Twitter moved toward partial automation in late 2015, using unspecified “proprietary spam-fighting tools” to find accounts that might be violating its terms of service and promoting terrorism. The material still required review by a team at Twitter before the accounts could be disabled. “Since the middle of 2015, we have suspended more than 360,000 accounts for violating Twitter’s policy on violent threats and the promotion of terrorism,” said Sinead McSweeney, Twitter’s vice president of public policy. “A large proportion of these accounts have been removed by technical means, including our proprietary spam-fighting tools.” — AP

NEW YORK: A Facebook logo is displayed on the screen of an iPad. — AP
