Sun Sentinel Palm Beach Edition

PROPAGANDA WATCH

Tech companies move to target terrorists’ online posts

By Tami Abdollah

WASHINGTON — Facebook, Microsoft, Twitter and YouTube are joining forces to more quickly identify the worst terrorist propaganda and prevent it from spreading online.

The new program announced Monday would create a database of unique digital “fingerprints” to help automatically identify videos or images the companies could remove.

The move by the technology companies, which is expected to begin in early 2017, aims to assuage government concerns — and derail proposed new federal legislation — over social media content that is seen as increasingly driving terrorist recruitment and radicalization, while also balancing free-speech issues.

Technical details were being worked out, but Microsoft pioneered similar technology in 2009 to detect, report and remove child pornography through such a database. Unlike those images, which are plainly illegal under U.S. law, questions about whether an image or video promotes terrorism can be more subjective, depending on national laws and the rules of a particular company’s service.

Social media has increasingly become a tool for recruiting and radicalization by the Islamic State group and others. Its use by terror groups and supporters has added to the threat from so-called lone-wolf attacks and decreased the time from “flash to bang” — or radicalization to violence — with little or no time for law enforcement to follow evidentiary trails before an attack.

Under the new partnership, the companies promised to share among themselves “the most extreme and egregious terrorist images and videos we have removed from our services — content most likely to violate all our respective companies’ content policies,” according to a joint announcement Monday evening.

When such content is shared internally, the other participating companies will be notified and can use the digital fingerprints to quickly identify the same content on their own services to judge whether it violates their rules. If so, companies can delete the material and possibly disable the account, as appropriate.
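The sharing-and-matching workflow described above can be sketched in a few lines of code. This is an illustrative sketch only: the companies did not disclose their fingerprinting method, so a plain SHA-256 digest stands in here for whatever hash they actually use (a real system would likely use a perceptual hash that also matches slightly altered copies, whereas a cryptographic hash matches only identical bytes). The class and function names below are hypothetical.

```python
import hashlib


def fingerprint(media_bytes: bytes) -> str:
    """Return a hex digest serving as the content's 'digital fingerprint'.

    Assumption: SHA-256 stands in for the undisclosed real fingerprinting
    method, which would need to tolerate re-encoding and minor edits.
    """
    return hashlib.sha256(media_bytes).hexdigest()


class SharedFingerprintDatabase:
    """Hypothetical shared database of fingerprints of removed content."""

    def __init__(self) -> None:
        self._fingerprints: set[str] = set()

    def report_removed(self, media_bytes: bytes) -> None:
        # A participating company adds the fingerprint of content it
        # has already removed under its own policies.
        self._fingerprints.add(fingerprint(media_bytes))

    def matches(self, media_bytes: bytes) -> bool:
        # Other companies check uploads against the shared database;
        # a match flags the content for review under their own rules.
        return fingerprint(media_bytes) in self._fingerprints


# Usage: one company reports removed content; another checks an upload.
db = SharedFingerprintDatabase()
db.report_removed(b"removed-extremist-video-bytes")
print(db.matches(b"removed-extremist-video-bytes"))  # True: flag for review
print(db.matches(b"unrelated-upload-bytes"))         # False: no match
```

Note that, as the article says, a match does not trigger automatic deletion: each company still judges the flagged content against its own policies.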

Most social media services explicitly do not allow content that supports violent action or illegal activities. Twitter, for example, says users “may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability or disease.”

“We really are going after the most obvious serious content that is shared online — that is, the kind of recruitment videos and beheading videos more likely to be against all our content policies,” said Sally Aldous, a Facebook spokeswoman.

The White House praised the joint effort. “The administration believes that the innovative private sector is uniquely positioned to help limit terrorist recruitment and radicalization online,” said National Security Council spokesman Carl Woog. “Today’s announcement is yet another example of tech communities taking action to prevent terrorists from using these platforms in ways their creators never intended.”

The new program caps a year of efforts to tamp down on social media use by terrorist groups.

Lawmakers last year introduced legislation that would require social media companies to report any online terrorist activity they became aware of to law enforcement. The bill by Sens. Dianne Feinstein, D-Calif., and Richard Burr, R-N.C., was criticized for not defining “terrorist activity,” which could have drowned government agencies in reports. The bill was opposed by the Internet Association, which represents 37 internet companies, including Facebook, Snapchat, Google, LinkedIn, Reddit, Twitter, Yahoo and others.

The bill came days after Syed Rizwan Farook and his wife, Tashfeen Malik, carried out a terrorist attack in San Bernardino, Calif., killing 14 people and injuring 21 others. A Facebook post on Malik’s page around the time of the attack included a pledge of allegiance to the leader of the Islamic State group.

Facebook found the post — which was under an alias — the day after the attack. The company removed the profile from public view and informed law enforcemen­t. Such a proactive effort had previously been uncommon.

Twitter moved toward partial automation in late 2015, using unspecified “proprietary spam-fighting tools” to find accounts that might be violating its terms of service and promoting terrorism. The material still required review by a team at Twitter before the accounts could be disabled.

“Since the middle of 2015, we have suspended more than 360,000 accounts for violating Twitter’s policy on violent threats and the promotion of terrorism,” said Sinead McSweeney, Twitter’s vice president of public policy.

Facebook has also used image-matching technology to compare images to ones it’s already removed. The effort lets Facebook review images to avoid removing legitimate and protected uses, a spokeswoman said.

Terrence McNeil of Ohio was charged in 2015 with soliciting the killings of U.S. service members over social media, including Tumblr, Facebook and Twitter.

Federal prosecutors accused him of posting a series of photographs on his Facebook account to praise the death of a Jordanian pilot who was burned to death by Islamic State.

In January, the White House dispatched top officials, including FBI Director James Comey, Attorney General Loretta Lynch and National Security Agency Director Mike Rogers, to Silicon Valley to discuss the use of social media by violent extremist groups. Among the issues they discussed was how to use technology to help quickly identify terrorist content.

JAMES H. COLLINS/AP
