Tech companies move to target terrorist propaganda online


Facebook, Microsoft, Twitter and YouTube are joining forces to more quickly identify the worst terrorist propaganda and prevent it from spreading online. The new program announced Monday would create a database of unique digital “fingerprints” to help automatically identify videos or images the companies could remove.
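
The announcement left the hashing scheme unspecified. As a rough sketch of the idea only, the snippet below uses a plain SHA-256 digest as a stand-in for the perceptual hashes such systems typically rely on (Microsoft's PhotoDNA, mentioned later in this story, is one example); the function names and the shared store are illustrative, not part of any published design.

```python
import hashlib

# Hypothetical shared store of fingerprints exchanged among the companies.
# Only digests circulate, never the offending media itself.
shared_hashes = set()

def fingerprint(content: bytes) -> str:
    """Reduce a media file to a fixed-size digest. A real system would use a
    perceptual hash that survives re-encoding; SHA-256 is a stand-in here."""
    return hashlib.sha256(content).hexdigest()

def share_removed_content(content: bytes) -> None:
    """When one service removes an item, it contributes only the fingerprint."""
    shared_hashes.add(fingerprint(content))

def matches_shared_database(content: bytes) -> bool:
    """Other services can check uploads against the shared fingerprints."""
    return fingerprint(content) in shared_hashes
```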

The move by the technology companies, which is expected to begin in early 2017, aims to assuage government concerns - and derail proposed new federal legislation - over social media content that is seen as increasingly driving terrorist recruitment and radicalization, while also balancing free-speech issues.

Technical details were being worked out, but Microsoft pioneered similar technology to detect, report and remove child pornography through such a database in 2009. Unlike those images, which are plainly illegal under US law, questions about whether an image or video promotes terrorism can be more subjective, depending on national laws and the rules of a particular company’s service.

Tool of radicalization

Social media has increasingly become a tool for recruiting and radicalization by the Islamic State group and others. Its use by terror groups and supporters has added to the threat from so-called lone-wolf attacks and decreased the time from “flash to bang” - or radicalization to violence - with little or no time for law enforcement to follow evidentiary trails before an attack.

Under the new partnership, the companies promised to share among themselves “the most extreme and egregious terrorist images and videos we have removed from our services - content most likely to violate all our respective companies’ content policies,” according to a joint announcement Monday evening.

When such content is shared internally, the other participating companies will be notified and can use the digital fingerprints to quickly identify the same content on their own services to judge whether it violates their rules. If so, companies can delete the material and possibly disable the account, as appropriate.
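
Building on the sketch above, that review step might look like the following; handle_upload and the violates_local_policy callback are hypothetical names. The point is only that a fingerprint match triggers a per-company policy check rather than automatic deletion.

```python
# Illustrative only: a match against the shared database flags content for
# review under each service's own rules; it does not force removal by itself.

def handle_upload(content: bytes, violates_local_policy) -> str:
    if not matches_shared_database(content):
        return "allow"      # no match: normal handling
    if violates_local_policy(content):
        return "remove"     # the service may also disable the account
    return "allow"          # policies differ from one service to another
```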

Most social media services explicitly do not allow content that supports violent action or illegal activities. Twitter, for example, says users “may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability or disease.”

“We really are going after the most obvious serious content that is shared online - that is, the kind of recruitment videos and beheading videos more likely to be against all our content policies,” said Sally Aldous, a Facebook spokeswoman.

The White House praised the joint effort. “The administration believes that the innovative private sector is uniquely positioned to help limit terrorist recruitment and radicalization online,” said National Security Council spokesman Carl Woog. “Today’s announcement is yet another example of tech communities taking action to prevent terrorists from using these platforms in ways their creators never intended.” The new program caps a year of efforts to tamp down on social media’s use by terrorist groups.

Lawmakers last year introduced legislation that would require social media companies to report any online terrorist activity they became aware of to law enforcement. The bill by Sens. Dianne Feinstein, D-Calif., and Richard Burr, R-N.C., was criticized for not defining “terrorist activity,” which could have drowned government agencies in reports. The bill was opposed by the Internet Association, which represents 37 internet companies, including Facebook, Snapchat, Google, LinkedIn, Reddit, Twitter, Yahoo and others.

The bill came days after Syed Farook and his wife, Tashfeen Malik, carried out a shooting attack in San Bernardino, California, killing 14 people and injuring 21 others. A Facebook post on Malik’s page around the time of the attack included a pledge of allegiance to the leader of the Islamic State group. Facebook found the post - which was under an alias - the day after the attack. The company removed the profile from public view and informed law enforcement. Such a proactive effort had previously been uncommon.

Twitter moved toward partial automation in late 2015, using unspecified “proprietary spam-fighting tools” to find accounts that might be violating its terms of service and promoting terrorism. The material still required review by a team at Twitter before the accounts could be disabled. “Since the middle of 2015, we have suspended more than 360,000 accounts for violating Twitter’s policy on violent threats and the promotion of terrorism,” said Sinead McSweeney, Twitter’s vice president of public policy. “A large proportion of these accounts have been removed by technical means, including our proprietary spam-fighting tools.” — AP

NEW YORK: A Facebook logo is displayed on the screen of an iPad. — AP
