Balancing security against free speech

Facebook, Twitter and YouTube look to mute Islamic State without stifling other voices globally

BY SCOTT HIGHAM AND ELLEN NAKASHIMA

When a lone terrorist slaughtered 38 tourists at a Tunisian resort on June 26, the Islamic State turned to one of America's leading social-media companies to claim responsibility and warn of more attacks on the world's nonbelievers.

"It was a painful strike and a message stained with blood," the Islamic State announced on Twitter following the massacre in Sousse, a popular destination for Europeans on the Mediterranean. "Let them wait for the glad tidings of what will harm them in the coming days, Allah permitting."

Three days before the assault, the Islamic State relied on another popular U.S. social-media platform, Google's YouTube, to promote a grisly propaganda video of three separate mass killings. Men accused of cooperating with U.S.-coordinated airstrikes in Iraq and Syria are seen being incinerated in a car, drowned in a cage lowered into a swimming pool and decapitated by explosive necklaces looped around their necks.

Versions of it would remain on YouTube, even as company executives proclaimed during an international advertising festival that week in Cannes, France, that Google would not provide a "distribution channel for this horrible, but very newsworthy, terrorist propaganda."

As the Islamic State, also known as ISIS and ISIL, continues to hold large parts of Iraq and Syria and inspire terrorist attacks in more and more countries, it has come to rely upon U.S. social-media companies to summon fresh recruits to its cause, spread its propaganda and call for attacks, according to counterterrorism analysts.

"We also have to acknowledge that ISIL has been particularly effective at reaching out to and recruiting vulnerable people around the world, including here in the United States," President Obama said July 6 at the Pentagon. "So the United States will continue to do our part, by working with partners to counter ISIL's hateful propaganda, especially online."

The social-media savvy of the militant group is raising difficult questions for many U.S. firms: how to preserve global platforms that offer forums for expression while preventing groups such as the Islamic State from exploiting those free-speech principles to advance their terrorist campaign.

"ISIS has been confronting us with these really inhumane and atrocious images, and there are some people who believe if you type 'jihad' or 'ISIS' on YouTube, you should get no results," Victoria Grand, Google's director of policy strategy, told The Washington Post in a recent interview. "We don't believe that should be the case. Actually, a lot of the results you see on YouTube are educational about the origins of the group, educating people about the dangers and violence. But the goal here is how do you strike a balance between enabling people to discuss and access information about ISIS, but also not become the distribution channel for their propaganda?"

Some lawmakers and government officials say the companies are not going far enough.

"They are being exploited by terrorists," Assistant Attorney General for National Security John P. Carlin said in a recent interview. "I think there is recognition now that there is a problem, and so we're starting to see people at the companies address additional resources. But more needs to be done because we're still seeing the threat, and the threat is increasing, not decreasing.

"It's not a problem just here in the United States. I think they're hearing it from governments and customers from throughout the world."

A field analysis in May by the Department of Homeland Security warns that the Islamic State's use of social media is broadening the terrorist group's reach.

"ISIL leverages social media to propagate its message and benefits from thousands of organized supporters globally online, primarily on Twitter, who seek to legitimize its actions while burnishing an image of strength and power," according to the analysis. "The influence is underscored by the large number of reports stemming from social media postings."

In Europe, some governments are requiring social-media companies to block or remove terror-related posts.

Earlier this month, the Senate Intelligence Committee approved a bill that would require social-media companies to alert federal authorities when they become aware of terrorist-related content on their sites. The bill is designed to provide law enforcement agencies with information about potential terror plots. It would not require firms to monitor any users or their communications.

Putting more pressure on the social-media companies, a U.N. panel last month called on the firms to respond to accusations that their sites are being exploited by the Islamic State and other groups.

In the United States, government regulation of speech, regardless of how offensive or hateful, is generally held to be unconstitutional under the First Amendment. The social-media companies — each with its own culture, mission and philosophy — have been left to decide how and when to block or remove terror-related content.

The revelations of former National Security Agency contractor Edward Snowden about U.S. government surveillance have also made the tech companies wary of cooperating with Washington.

Facebook has been the most aggressive of the large social-media companies when it comes to taking down terror-related content. The company has adopted a zero-tolerance policy and, unlike other social-media companies, proactively removes posts related to terrorist organizations. Facebook also relies on its users to alert the company to posts that promote or celebrate terrorism and hires screeners to review content that might violate its standards.

"We don't allow praise or support of terror groups or terror acts, anything that's done by these groups and their members," said Monika Bickert, a former federal prosecutor who heads global policy management for Facebook.

Of all the large social-media companies, Twitter has been the most outspoken about protecting freedom of speech on its platform. Still, the company recently updated its abuse policy, stating that users may not threaten or promote terrorism.

"Twitter continues to strongly support freedom of expression and diverse perspectives," according to a statement by a Twitter official, who spoke on the condition of anonymity because of recent death threats against employees by Islamic State supporters. "But it also has clear rules governing what is permissible. . . . The use of Twitter by violent extremist groups to threaten horrific acts of depravity and violence is of grave concern and against our policies, period."

Another challenge for the companies: It is often difficult to distinguish between communiques from terrorist groups and posts by news organizations and legitimate users. Internet freedom advocates also note that much of what groups such as the Islamic State are posting can be seen as part of the historical record — even though many of the photographs and videos are horrific.

They point to the memorable 1968 Associated Press photograph of South Vietnam's national police commander shooting a suspected Viet Cong fighter in the head on a Saigon street. They wonder how that Pulitzer Prize-winning image, which came to symbolize the chaos and brutality of the Vietnam War, would be handled in the age of social media and modern digital warfare.

"You want to live in a world where people have access to news — in other words, documentary evidence of what is actually happening," said Andrew McLaughlin, a former Google executive and chief U.S. technology officer who now is a partner in the tech and media start-up firm Betaworks in New York. "And an ISIS video of hostages being beheaded is both an act of propaganda and is itself a fact. And so if you're a platform, you don't want to suppress the facts. On the other hand, you don't want to participate in advancing propaganda.

"And there is the conundrum."

CONFRONTING THE 'CALIPHATE': An occasional series about the rise of the Islamic State militant group, its implications for the Middle East, and efforts by the U.S. government and others to undermine it.

'Pure evil'

Before the rise of social media, many of the three dozen video and audio messages Osama bin Laden issued before his death were recorded in remote locations, smuggled out by couriers, and aired on what was then a largely unknown television station based in Qatar called Al Jazeera. Weeks could pass between the time when bin Laden spoke and when he was heard.

Al-Qaeda operatives communicated through password-protected forums and message boards on the Internet. Access was tightly controlled.

"It was a different time," said Steven Stalinsky, executive director of the Middle East Media Research Institute, which tracks online communications of terrorist organizations. "The jihadi groups decided what could be posted and released. Twitter became the way around the forums. It became the Wild West of jihad."

Before his death, bin Laden had come to recognize the revolution that followed the launch of Facebook in 2004 and Twitter in 2006.

"The wide-scale spread of jihadist ideology, especially on the Internet, and the tremendous number of young people who frequent the Jihadist Web sites [are] a major achievement for jihad," bin Laden wrote in a May 2010 letter that was later found by U.S. Special Operations forces inside his Pakistan compound.

Al-Shabab, a militant group in Somalia allied with al-Qaeda, became one of the first terrorist organizations to use Twitter for both propaganda and command and control during an attack, according to terrorism analysts. The group set up Twitter accounts under al-Shabab's media wing, called HMS Press.

In September 2013, al-Shabab attracted worldwide attention when it live-tweeted a terror attack it carried out at the upscale Westgate shopping mall in Nairobi.

"What Kenyans are witnessing at #Westgate is retributive justice for crimes committed by their military, albeit minuscule in nature," HMS Press tweeted. A short time later, the group posted another tweet: "Since our last contact, the Mujahideen inside the mall confirmed to @HMS_Press that they killed over 100 Kenyan kuffar & battle is ongoing."

In the end, more than 60 people were killed and an additional 175 wounded. Twitter took down those accounts that day, marking one of the first times the company removed material posted by a terrorist organization. But al-Shabab quickly created new Twitter accounts under different names — illustrating both the utility of the platform and the difficulty of policing it.

The attack and how it played out in real time inspired terrorists around the world.

"We must make every effort to reach out to Muslims both through new media like Facebook and Twitter," Adam Gadahn, an American-born al-Qaeda propagandist, proclaimed in a 2013 interview. (In January, he was killed in a U.S. strike.)

The Islamic State has gone on to make Twitter one of its most important tools.

FBI Director James B. Comey testified to Congress this month about how the Islamic State is reaching out through Twitter to about 21,000 English-language followers. The group's message, he said, is, "Come to the so-called caliphate and live the life of some sort of glory or something; and if you can't come, kill somebody where you are; kill somebody in uniform; kill anybody; if you can cut their head off, great; videotape it; do it, do it, do it." He described it as "a devil on their shoulder all day long, saying: Kill, kill, kill, kill."

Comey also said that Twitter has become "particularly aggressive at shutting down and trying to stop ISIL-related sites. I think it led ISIL to threaten to kill their CEO, which helped them understand the problem in a better way."

Others are not convinced.

"Twitter is providing a communication device, a loudspeaker for ISIS," said Mark Wallace, a former U.S. ambassador who now runs the Counter Extremism Project, a nonprofit group that tracks terrorists and attempts to disrupt their online activities. "If you are promoting violence and a call to violence, you are providing material support. Twitter should be part of the solution. If not, they are part of the problem."

At a Constitution Project dinner in April honoring Twitter for its leadership on First Amendment issues, Colin Crowell, the firm's head of global public policy, acknowledged that Twitter has hosted "painful content" and content reflecting "terrorism, government repression" on its site. But, he said, "it is also a place where people can find . . . information, conversation and where empathy can be shared."

The "key thing," he said, "for us at Twitter is to recognize our role as the provider of this open platform for free expression . . . to recognize that that speech is not our own."

It is "precisely because it's not our own content that we feel we have a duty to respect and to defend those voices on the platform," Crowell said. "The platform of any debate is neutral. The platform doesn't take sides."

In August 2014, the Islamic State uploaded a video on YouTube and other sites showing the beheading of American journalist James Foley.

A succession of other videotaped beheadings of Americans and Britons followed — Steven Sotloff, Peter Kassig, David Haines, Alan Henning — as well as the immolation of the Jordanian pilot Muath al-Kaseasbeh and the mass killings of Syrians, Kurds and Coptic Christians, among others.

Each slaying became a carefully orchestrated and slickly produced event.

"Pure evil," President Obama called Kassig's beheading.

For Facebook, the killings marked a turning point. The company made it easier for its 1.4 billion users — the largest user base in the world — to report content from suspected terrorist groups, and it began to aggressively remove their posts. The company also deployed teams of people around the world to review content that had been flagged as terrorist-related to determine whether the posts were in fact from terrorist groups in violation of Facebook's terms of service.

Facebook has banned terror-related content from its pages for more than five years. In March, the company updated its community standards, explicitly prohibiting posts that praise or celebrate terrorist organizations and their leaders.

Bickert, Facebook's policy chief, said posts flagged by users are examined by "operations teams" of content reviewers stationed in four offices around the world.

"We want to make sure we're keeping our community safe, and we're not a tool for propaganda," Bickert said. "On the other hand, we can see that people are . . . talking about ISIS and are concerned about ISIS, in part, because they've seen this imagery and it makes it very real to people. So none of these issues are easy."

‘Good luck’

France's interior minister was still reeling from the Jan. 7 terror attack on the Paris offices of the satirical newspaper Charlie Hebdo when he attended a White House counterterrorism summit in February.

The Islamic State and al-Qaeda had turned the Paris attack that left 12 dead into a propaganda coup. The groups boasted about the killings on social media, transmitting images that included the fatal shooting of a police officer as he lay wounded on a sidewalk, raising his arm in surrender.

While in Washington, the French official, Bernard Cazeneuve, had lunch with then-U.S. Attorney General Eric H. Holder Jr. Cazeneuve told Holder that he was planning to meet with executives of social-media companies in Silicon Valley the following day, hoping to persuade them to stop terrorists from using their sites for propaganda, recruitment and operational planning.

According to a French official, Cazeneuve asked Holder whether he had any advice before he left for California.

"Good luck," the attorney general said.

Cazeneuve arrived in California on Feb. 20 and met with executives of several social-media companies, including Facebook, Twitter and YouTube.

"We needed to have the help of the companies," said a French official who spoke on the condition of anonymity because he was not authorized to discuss the trip on the record. "How could we work [together] much faster and quicker?"

The official said the meeting with Facebook went well. The company's vice president vowed that Facebook would continue to take down terror-related content from the site.

At Google, the French officials met with public policy and legal executives, who said they had been removing terror-related posts and would continue to do so; YouTube users flag about 100,000 posts each day that are suspected of being in violation of the company's terms of service.

Google officials also noted that the airing of the Charlie Hebdo video on YouTube was the subject of intense debate inside the company. In the end, company officials decided to leave the video up, on the grounds that it was newsworthy and had become part of the historical record. The video has since been deleted from YouTube's channel in France at the request of French officials.

The French minister's meeting with Twitter did not go well.

"It was our most difficult meeting," the French official said. "The minister showed pictures of the Paris attack that were sent out on Twitter, including the execution of the police officer," he recalled. "He was very graphic in his explanation. They had a lengthy explanation that it was not easy. We argued that child pornography is being taken down. They said their algorithms were not as easy to set up to find jihadi information. You need a bunch of people to review the material."

The meeting ended "with no specific commitments" from Twitter, the official said.

The Twitter official said the firm does not comment on private meetings with government officials. "We have a strong working relationship with French law enforcement that predates the Charlie Hebdo attack," he said.

In April, an Islamic State supporter in Somalia called for a Charlie Hebdo-style attack in the United States. The post inspired two men to try to attack a Garland, Tex., event where cartoonists were drawing the prophet Muhammad, according to Rita Katz, executive director of the SITE Intelligence Group, which tracks terrorists' online communications.

The men were gunned down by security teams before they could open fire, but Katz said the attack could have ended very differently.

"Once you start using Twitter, you start to understand how powerful it is, and that is why ISIS is taking advantage of it," Katz said. "Twitter must understand that they have to be responsible for the kind of information that they disseminate."

No quick fixes

Confronting the Islamic State online and removing its material is a constant challenge, computer scientists say. Lawmakers, government officials and terrorism experts frequently cite social-media companies' efforts to rid their sites of child pornography. If they can remove that content, why can't they screen out tweets and posts from terror groups?

From a computer science standpoint, solving the child pornography problem was relatively straightforward. The National Center for Missing & Exploited Children maintains a database of thousands of photographs of child pornography, images that are frequently downloaded by pedophiles and traded over the Internet. Using software called Microsoft PhotoDNA, companies scan those images and identify them by unique digital markers.

Every time a new image is uploaded onto a site, a company can run it against the database, which compares the digital markers. Anything that matches is deleted and, by federal law, reported to the national center and then to law enforcement agencies.

Many social-media companies, including Twitter and Facebook, rely on the software, which can recognize images in still photos, but not videos.
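In rough terms, the workflow the computer scientists describe looks like the sketch below. PhotoDNA itself is proprietary, so this is only a minimal illustration that substitutes the open-source Python "imagehash" perceptual-hashing library; the file names and the set of known fingerprints are hypothetical stand-ins for a curated database such as the national center's.

```python
# A minimal sketch of the hash-and-match workflow described above.
# Assumes Pillow and ImageHash are installed (pip install pillow imagehash).
# All file names below are hypothetical.

from PIL import Image
import imagehash

# Fingerprints of images already identified as prohibited content.
# In practice these would come from a maintained database; here they
# are computed from local files purely for illustration.
known_hashes = {
    imagehash.phash(Image.open(path))
    for path in ["known_image_1.jpg", "known_image_2.jpg"]
}

def matches_known_image(upload_path, max_distance=4):
    """Return True if an uploaded image matches a known fingerprint.

    Perceptual hashes of near-identical images differ by only a few
    bits, so we compare Hamming distance rather than exact equality.
    """
    candidate = imagehash.phash(Image.open(upload_path))
    return any(candidate - known <= max_distance for known in known_hashes)

if matches_known_image("new_upload.jpg"):
    # Delete the post and, where the law requires it, file a report.
    print("Match found: remove and report")
```

The design mirrors the limitation Farid notes below: the system can only recognize copies of images that have already been fingerprinted, not genuinely new material.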

Flagging terror-related content is more complex — but not impossible, computer scientists say.

Hany Farid, a Dartmouth computer science professor who co-developed Microsoft PhotoDNA, said the software is licensed to the national center solely to identify images of child pornography. But he said the software could be used to flag terror-related propaganda. For example, the software could identify a photograph of Foley, the American journalist, allowing companies to catch images of his beheading before they appear on their sites.

"The technology is extremely powerful, but it's also limited," Farid said. "You can only find images that you've already found before."

Social-media companies also could download images of the Islamic State's black flag, an image frequently displayed on the group's propaganda posts and communiques, and create "hash values," or digital fingerprints, of the images to search for them online, computer scientists say.

But while social-media companies could use such techniques to detect every post with an Islamic State flag, not all of those posts would necessarily have come from the terrorist group. A journalist could have tweeted out a link containing material from the Islamic State, or a government agency or think tank could have issued a report about the group that contains an image of the flag.
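The flag-fingerprinting idea, and its false-positive problem, can be sketched the same way. Again, this assumes the same open-source perceptual-hash library rather than any tool the companies actually use, and the reference and post file names are hypothetical; a hit only means the post contains a near-copy of an already-known flag image, not that the post came from the group.

```python
# A sketch of flag-image fingerprinting. Matches are queued for human
# review rather than auto-deleted, because a journalist or think tank
# may legitimately post the same image. File names are hypothetical.

from PIL import Image
import imagehash

flag_hash = imagehash.phash(Image.open("reference_flag.jpg"))

def candidate_flag_posts(image_paths, max_distance=6):
    """Yield images whose fingerprints are close to the reference flag.

    A looser threshold catches more re-encoded or cropped copies, but
    raises the false-positive rate that companies are wary of.
    """
    for path in image_paths:
        if imagehash.phash(Image.open(path)) - flag_hash <= max_distance:
            yield path  # flag for human review, not automatic removal

for path in candidate_flag_posts(["post_a.jpg", "post_b.jpg"]):
    print("Review:", path)
```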

The sheer volume of the content on social-media sites also poses a challenge, computer scientists said. Twitter has 302 million active users who send out 500 million tweets a day. YouTube has more than 1 billion users. Every minute of every day, they upload more than 300 hours of video.

"There is a long history of government asking technology companies to do things they can't do. They say America has put a man on the moon. Why can't the companies do this?" said Christopher Soghoian, principal technologist and senior policy analyst for the American Civil Liberties Union. "People treat computers like magic boxes. There is no silver bullet here. Companies are going to be reluctant to roll out technology that is going to have a high rate of false positives."

Whac-a-Mole

As the more established social-media companies become more aggressive in monitoring and removing terror-related content, groups such as the Islamic State are also migrating to lesser-known sites, where they can share their messages and videos. The sites include Instagram, Tumblr and SoundCloud, according to terror experts.

One of the sites, the nonprofit Internet Archive in San Francisco, has been around for nearly 20 years.

The archive was founded in 1996 to provide the public with free access to millions of documents, videos, clips and Web pages — almost anything that has been on the Web. It is probably best known for its Wayback Machine. So far, it has captured and stored nearly 150 billion Web pages.

In the past year, the Islamic State has created several accounts on the archive and has been using the site to host video and audio productions, online magazines and radio broadcasts, according to terrorism experts.

Internet Archive's office manager, Chris Butler, told The Post that his organization is removing videos of beheadings and executions whenever it becomes aware of them, either during routine maintenance of the site or after outside complaints.

But unlike sites such as Facebook and Twitter, the archive does not have a flagging mechanism. Butler said the group is working on a system that will enable users to help identify and report problematic content.

"We do our best with a very small team and no lawyers on staff, and have nowhere near the budget of larger commercial sites handling similar quantities of content to us, like YouTube, Twitter and Facebook," Butler said.

Twitter has recently stepped up efforts to remove terrorist accounts. In April, it took down 10,000 accounts over two days. That has led security researchers such as Daniel Cuthbert to lament the loss of what he saw as a valuable source of intelligence.

Cuthbert, chief operating officer of SensePost, a cybersecurity firm, supports removal of videos of beheadings and other content that "glorifies ISIS." But he said he has lost a window into conversations between Islamic State members, supporters and potential recruits.

"I no longer have the ability to see who the key people are in ISIS when it comes to a social-media campaign, and how they're tweeting, who they're tweeting to, and how many are British nationals who may be getting groomed," said Cuthbert, who is based in London.

After Twitter conducted the mass takedown, Cuthbert requested access to Twitter's "firehose" — its entire stream of tweets. But a Twitter employee denied his request, citing concerns that he was sharing the material with law enforcement.

"We have certain sensitivities with use cases that look at individuals in an investigative manner, especially when insights from that investigation are directly delivered to law enforcement or government agencies to be acted upon," the employee said in an e-mail to Cuthbert, which he shared with The Post.

The FBI's Comey told reporters "there's actually a discussion within the counterterrorism community" as to whether it is better to shut the accounts down or keep them up so they can be tracked for intelligence purposes. "I can see the pros and cons on both sides. But it's an issue that's live," he said.

Counterterrorism officials say the constantly evolving social-media landscape is providing more places for groups such as the Islamic State to hide in cyberspace. Finding and shutting down sites and accounts is starting to resemble a carnival game of Whac-a-Mole, they say. As soon as one site or account is taken down, another pops up. As soon as one platform starts aggressively monitoring terrorist content, militants migrate to another.

Worse, investigators and terrorism analysts fear that the Islamic State and other terrorist groups are moving beyond public-facing social-media platforms for recruitment, increasingly relying on encrypted sites where their communications can continue largely undetected.

Comey recently said he is concerned that the Islamic State will use Twitter or another popular social-media platform to make contact with followers before "steering them off of Twitter to an encrypted form of communication."

John D. Cohen is a former top intelligence official at the Department of Homeland Security. He said counterintelligence officials have traditionally searched for the proverbial needle in a haystack when trying to identify terrorists and their plots. The explosion of social-media sites, he said, has complicated the search beyond compare.

"The haystack is the entire country now," Cohen said. "Anywhere there's a troubled soul on the Internet and a potential Twitter follower, that haystack extends. We're looking for needles. But here's the hard part: Increasingly, the needles are invisible to us."

JONATHAN KALAN/ASSOCIATED PRESS

A woman runs for cover in Nairobi's Westgate mall, which came under attack on Sept. 21, 2013. The Somali terror group al-Shabab attracted worldwide attention when it live-tweeted its attack.

BRITTANY GREESON/THE WASHINGTON POST

U.S. technology companies "are being exploited by terrorists," says Assistant Attorney General for National Security John P. Carlin, who was interviewed last month at the Justice Department.

YOUTUBE

Google, which owns YouTube, has said it won't provide a distribution channel for Islamic State videos, but some material remains on the site.
