Facebook auto-generates videos celebrating extremist images


The animated video begins with a photo of the black flags of jihad. Seconds later, it flashes highlights of a year of social media posts: plaques of anti-Semitic verses, talk of retribution and a photo of two men carrying more jihadi flags while they burn the stars and stripes.

It wasn’t produced by extremists; it was created by Facebook. In a clever bit of self-promotion, the social media giant takes a year of a user’s content and auto-generates a celebratory video. In this case, the user called himself “Abdel-Rahim Moussa, the Caliphate.”

“Thanks for being here, from Facebook,” the video concludes in a cartoon bubble before flashing the company’s famous “thumbs up.”

Facebook likes to give the impression that it’s staying ahead of extremists by taking down their posts, often before users even see them. But a confidential whistleblower’s complaint to the Securities and Exchange Commission obtained by The Associated Press alleges the social media company has exaggerated its success. Even worse, it shows that the company is inadvertently making use of propaganda by militant groups to auto-generate videos and pages that could be used for networking by extremists.

According to the complaint, over a five-month period last year, researchers monitored pages by users who affiliated themselves with groups the U.S. State Department has designated as terrorist organizations. In that period, 38% of the posts with prominent symbols of extremist groups were removed. In its own review, the AP found that as of this month, much of the banned content cited in the study — an execution video, images of severed heads, propaganda honoring martyred militants — slipped through the algorithmic web and remained easy to find on Facebook.

The complaint is landing as Facebook tries to stay ahead of a growing array of criticism over its privacy practices and its ability to keep hate speech, live-streamed murders and suicides off its service. In the face of criticism, CEO Mark Zuckerberg has spoken of his pride in the company’s ability to weed out violent posts automatically through artificial intelligence. During an earnings call last month, for instance, he repeated a carefully worded formulation that Facebook has been employing.

“In areas like terrorism, for al-Qaida and ISIS-related content, now 99 percent of the content that we take down in the category our systems flag proactively before anyone sees it,” he said. Then he added: “That’s what really good looks like.”

Zuckerberg did not offer an estimate of how much of the total prohibited material is being removed.

The research behind the SEC complaint is aimed at spotlighting glaring flaws in the company’s approach. Last year, researchers began monitoring users who explicitly identified themselves as members of extremist groups. It wasn’t hard to document. Some of these people even list the extremist groups as their employers. One profile, heralded by the black flag of an al-Qaida-affiliated group, listed its owner’s employer, perhaps facetiously, as Facebook. The profile that included the auto-generated video with the flag burning also had a video of al-Qaida leader Ayman al-Zawahiri urging jihadi groups not to fight among themselves.

While the study is far from comprehensive — in part because Facebook rarely makes much of its data publicly available — researchers involved in the project say the ease of identifying these profiles using a basic keyword search, and the fact that so few of them have been removed, suggest that Facebook’s claims that its systems catch most extremist content are not accurate.

“I mean, that’s just stretching the imagination to beyond incredulity,” says Amr Al Azm, one of the researchers involved in the project. “If a small group of researchers can find hundreds of pages of content by simple searches, why can’t a giant company with all its resources do it?”

Al Azm, a professor of history and anthropology at Shawnee State University in Ohio, has also directed a group in Syria documenting the looting and smuggling of antiquities.

Facebook concedes that its systems are not perfect, but says it’s making improvements.

“After making heavy investments, we are detecting and removing terrorism content at a far higher success rate than even two years ago,” the company said in a statement. “We don’t claim to find everything and we remain vigilant in our efforts against terrorist groups around the world.”

Reacting to the AP’s reporting, Rep. Bennie Thompson, D-Miss., the chairman of the House Homeland Security Committee, expressed frustration that Facebook has made so little progress on blocking content despite reassurances he received from the company.

“This is yet another deeply worrisome example of Facebook’s inability to manage its own platforms — and the extent to which it needs to clean up its act,” he said. “Facebook must not only rid its platforms of terrorist and extremist content, but it also needs to be able to prevent it from being amplified.”

But as a stark indication of how easily users can evade Facebook, one page from a user called “Nawan al-Farancsa” has a header whose white lettering against a black background says in English “The Islamic State.” The banner is punctuated with a photo of an explosive mushroom cloud rising from a city.

The profile should have caught the attention of Facebook — as well as counter-intelligence agencies. It was created in June 2018 and lists the user as coming from Chechnya, once a militant hotspot. It says he lived in Heidelberg, Germany, and studied at a university in Indonesia. Some of the user’s friends also posted militant content.

The page, still up in recent days, apparently escaped Facebook’s systems because of an obvious and long-running evasion of moderation that Facebook should be adept at recognizing: the letters were not searchable text but embedded in a graphic block. But the company says its technology scans audio, video and text — including when it is embedded — for images that reflect violence, weapons or logos of prohibited groups.

The social networking giant has endured a rough two years beginning in 2016, when Russia’s use of social media to meddle in the U.S. presidential election came into focus. Zuckerberg initially downplayed the role Facebook played in the influence operation by Russian intelligence, but the company later apologized.

Facebook says it now employs 30,000 people who work on its safety and security practices, reviewing potentially harmful material and anything else that might not belong on the site. Still, the company is putting a lot of its faith in artificial intelligence and its systems’ ability to eventually weed out bad stuff without the help of humans. The new research suggests that goal is a long way away, and some critics allege that the company is not making a sincere effort.

When the material isn’t removed, it’s treated the same as anything else posted by Facebook’s 2.4 billion users — celebrated in animated videos, linked and categorized and recommended by algorithms.

Amr Al Azm, a professor of Middle East History and Anthropology at Shawnee State University, in his office in Portsmouth, Ohio. (Associated Press)
