The News-Times

Facebook auto-generates videos celebrating extremist images


The animated video begins with a photo of the black flags of jihad. Seconds later, it flashes highlights of a year of social media posts: plaques of anti-Semitic verses, talk of retribution and a photo of two men carrying more jihadi flags while they burn the stars and stripes.

It wasn’t produced by extremists; it was created by Facebook. In a clever bit of self-promotion, the social media giant takes a year of a user’s content and auto-generates a celebratory video. In this case, the user called himself “Abdel-Rahim Moussa, the Caliphate.”

“Thanks for being here, from Facebook,” the video concludes in a cartoon bubble before flashing the company’s famous “thumbs up.”

Facebook likes to give the impression it’s staying ahead of extremists by taking down their posts, often before users even see them. But a confidential whistleblower’s complaint to the Securities and Exchange Commission obtained by The Associated Press alleges the social media company has exaggerated its success. Even worse, it shows that the company is inadvertently making use of propaganda by militant groups to auto-generate videos and pages that could be used for networking by extremists.

According to the complaint, over a five-month period last year, researchers monitored pages by users who affiliated themselves with groups the U.S. State Department has designated as terrorist organizations. In that period, 38% of the posts with prominent symbols of extremist groups were removed. In its own review, the AP found that as of this month, much of the banned content cited in the study — an execution video, images of severed heads, propaganda honoring martyred militants — slipped through the algorithmic web and remained easy to find on Facebook.

The complaint is landing as Facebook tries to stay ahead of a growing array of criticism over its privacy practices and its ability to keep hate speech, live-streamed murders and suicides off its service. In the face of criticism, CEO Mark Zuckerberg has spoken of his pride in the company’s ability to weed out violent posts automatically through artificial intelligence. During an earnings call last month, for instance, he repeated a carefully worded formulation that Facebook has been employing.

“In areas like terrorism, for al-Qaida and ISIS-related content, now 99 percent of the content that we take down in the category our systems flag proactively before anyone sees it,” he said. Then he added: “That’s what really good looks like.”

Zuckerberg did not offer an estimate of how much of the total prohibited material is being removed.

The research behind the SEC complaint is aimed at spotlighting glaring flaws in the company’s approach. Last year, researchers began monitoring users who explicitly identified themselves as members of extremist groups. It wasn’t hard to document. Some of these people even list the extremist groups as their employers. One profile, heralded by the black flag of an al-Qaida affiliated group, listed the user’s employer, perhaps facetiously, as Facebook. The profile that included the auto-generated video with the flag burning also had a video of al-Qaida leader Ayman al-Zawahiri urging jihadi groups not to fight among themselves.

While the study is far from comprehensive — in part because Facebook rarely makes much of its data publicly available — researchers involved in the project say the ease of identifying these profiles using a basic keyword search and the fact that so few of them have been removed suggest that Facebook’s claims that its systems catch most extremist content are not accurate.

“I mean, that’s just stretching the imagination to beyond incredulity,” says Amr Al Azm, one of the researchers involved in the project. “If a small group of researchers can find hundreds of pages of content by simple searches, why can’t a giant company with all its resources do it?”

Al Azm, a professor of history and anthropology at Shawnee State University in Ohio, has also directed a group in Syria documenting the looting and smuggling of antiquities.

Facebook concedes that its systems are not perfect, but says it’s making improvements.

“After making heavy investments, we are detecting and removing terrorism content at a far higher success rate than even two years ago,” the company said in a statement. “We don’t claim to find everything and we remain vigilant in our efforts against terrorist groups around the world.”

Reacting to the AP’s reporting, Rep. Bennie Thompson, D-Miss., the chairman of the House Homeland Security Committee, expressed frustration that Facebook has made so little progress on blocking content despite reassurances he received from the company.

“This is yet another deeply worrisome example of Facebook’s inability to manage its own platforms — and the extent to which it needs to clean up its act,” he said. “Facebook must not only rid its platforms of terrorist and extremist content, but it also needs to be able to prevent it from being amplified.”

But as a stark indication of how easily users can evade Facebook, one page from a user called “Nawan al-Farancsa” has a header whose white lettering against a black background says in English “The Islamic State.” The banner is punctuated with a photo of an explosive mushroom cloud rising from a city.

The profile should have caught the attention of Facebook — as well as counter-intelligence agencies. It was created in June 2018 and lists the user as coming from Chechnya, once a militant hotspot. It says he lived in Heidelberg, Germany, and studied at a university in Indonesia. Some of the user’s friends also posted militant content.

The page, still up in recent days, apparently escaped Facebook’s systems because of an obvious and long-running evasion of moderation that Facebook should be adept at recognizing: The letters were not searchable text but embedded in a graphic block. But the company says its technology scans audio, video and text — including when it is embedded — for images that reflect violence, weapons or logos of prohibited groups.

The social networking giant has endured a rough two years beginning in 2016, when Russia’s use of social media to meddle with the U.S. presidential elections came into focus. Zuckerberg initially downplayed the role Facebook played in the influence operation by Russian intelligence, but the company later apologized.

Facebook says it now employs 30,000 people who work on its safety and security practices, reviewing potentially harmful material and anything else that might not belong on the site. Still, the company is putting a lot of its faith in artificial intelligence and its systems’ ability to eventually weed out bad stuff without the help of humans. The new research suggests that goal is a long way away and some critics allege that the company is not making a sincere effort.

When the material isn’t removed, it’s treated the same as anything else posted by Facebook’s 2.4 billion users — celebrated in animated videos, linked and categorized and recommended by algorithms.

Associated Press: Amr Al Azm, a professor of Middle East History and Anthropology at Shawnee State University, in his office in Portsmouth, Ohio.
