Facebook works to block terror propaganda
Social network using algorithms to flag, delete terrorist activity
With attacks on Western targets increasing pressure on Facebook, the giant social network says it’s making a new push to crack down on terrorist activity by using sophisticated algorithms to mine words, images and videos to root out and remove extremists’ propaganda and messages.
Artificial intelligence can’t do the job alone, so Facebook says it has amassed a team of 150, including counterterrorism experts, who are dedicated to tracking and taking down propaganda and other materials.
It’s also collaborating with fellow technology companies and consulting with researchers to keep up with the ever-changing social media tactics of the Islamic State and other terror groups.
“Just as terrorist propaganda has changed over the years, so have our enforcement efforts. We are now really focused on using technology to find this content so that we can remove it before people are seeing it,” says Monika Bickert, a former federal prosecutor who runs global policy management, the team that decides what can be posted on Facebook.
“We want Facebook to be a very hostile environment for terrorists, and we are doing everything we can to keep terror propaganda off Facebook.”
Sharp criticism from European officials, advertiser boycotts and lawsuits from family members of people killed in terrorist attacks are pushing Facebook, Google, Microsoft and Twitter to find more effective ways to banish terrorist activity.
New digital fingerprinting technologies, which generate unique identifiers called “hashes” for known extremist videos, are helping flag and intercept those videos before they are posted. But these new tools can’t yet keep terrorists from gathering on Facebook to recruit and communicate with followers.
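The matching step behind this kind of fingerprinting can be sketched roughly as follows. Facebook has not published its actual algorithms, which use perceptual fingerprints that survive re-encoding; this illustration substitutes a plain cryptographic hash, and the function names and database are hypothetical.

```python
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """Compute a digital fingerprint ("hash") of a video file.

    A real system would use a perceptual hash that tolerates
    re-encoding and cropping; SHA-256 stands in for illustration.
    """
    return hashlib.sha256(video_bytes).hexdigest()

def should_block_upload(video_bytes: bytes, known_hashes: set) -> bool:
    """Intercept an upload whose fingerprint matches a video
    already identified as extremist propaganda."""
    return fingerprint(video_bytes) in known_hashes
```

In this scheme, once one copy of a video is identified and hashed, every later upload of the identical file can be stopped before it ever appears on the network.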
In the wake of the London attacks, British Prime Minister Theresa May has accused Facebook and other companies of not doing enough to crack down on terrorist activity. This week, May said she and French President Emmanuel Macron were working on a plan that would make Internet companies legally liable for extremist materials on their services.
“They want to hear that social media companies are taking this seriously. We are taking it seriously,” Bickert said. “The measures they are talking about, we are already doing.”
For years, Facebook has balanced concerns about restricting free speech against its efforts to eradicate terrorist propaganda.
About a year ago, it intensified efforts to combat terrorism, resulting in the removal of a great deal of that activity from its platform, says Seamus Hughes, deputy director of the program on extremism at George Washington University.
“Facebook at some point in the last year planted a flag in the ground and said: Not on our platform,” Hughes said.
Even as Facebook makes progress on one terrain, new battlefields emerge.
Researchers such as Hughes say much of the terrorist activity that has left Facebook has migrated to encrypted messaging services such as Telegram and Facebook-owned WhatsApp.
Facebook Live, the real-time streaming service, also presents a new challenge. And terrorists are still lurking out of sight on Facebook in private groups.
Artificial intelligence is already improving the ability to stop the spread of terrorist content on Facebook, such as flagging and intercepting known terrorist videos before they can be uploaded, says Brian Fishman, lead policy manager for counterterrorism at Facebook and the author of The Master Plan: ISIS, al-Qaeda, and the Jihadi Strategy for Final Victory.
Artificial intelligence is also being used to analyze text that has been removed for supporting or praising terrorist organizations such as the Islamic State and al-Qaeda, as well as their affiliates, to detect other content that may be terrorist propaganda.
That same technology is being used to ferret out private groups that support terrorism.
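Facebook has not disclosed how this text analysis works. As a rough illustration only, a system built on previously removed posts might flag new text that closely resembles them for human review; the toy bag-of-words similarity below is an assumption for the sketch, not Facebook’s method.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Represent text as a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors (0.0 to 1.0)."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def flag_for_review(post: str, removed_posts: list, threshold: float = 0.5) -> bool:
    """Flag a post if it closely resembles text previously removed
    for supporting terrorist organizations."""
    vec = vectorize(post)
    return any(cosine_similarity(vec, vectorize(r)) >= threshold
               for r in removed_posts)
```

The point of the sketch is the workflow the article describes: removed content becomes training signal, and flagged matches go to human moderators rather than being deleted automatically.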
Facebook says it now finds more than half of the accounts removed from the social network for terrorist activity itself.
But artificial intelligence has its limits, so Facebook also relies on human content moderators, who are needed, for example, to distinguish between an image in a news article about terrorism and terrorist propaganda.
“There is no switch you can flip. There is no ‘Find the Terrorist’ button,” Fishman said.
As in the offline world, terrorists tend to operate in clusters, so Facebook uses pages, groups, posts or profiles it has identified as supporting terrorism to find other accounts and content that do the same.
Facebook is also getting better at keeping these terrorists and their sympathizers from setting up new fake accounts, so that it is not locked in an endless game of Whac-a-Mole, with accounts created as quickly as they can be deleted, he said.
Facebook CEO Mark Zuckerberg wrote about the use of artificial intelligence to police content among the “billions of posts, comments and messages across our services each day” in his nearly 6,000-word community letter in February.
“Artificial intelligence can help provide a better approach,” he wrote. “We are researching systems that can look at photos and videos to flag content our team should review. This is still very early in development, but we have started to have it look at some content, and it already generates about one-third of all reports to the team that reviews content for our community.”
Zuckerberg also underscored the importance of “protecting individual security and liberty.”