What the big firms are doing
Major web companies say they’re taking measures to prevent the spread of extremist material
In some cases, companies have been moved to act not only by terrorist atrocities but also by commercial pressure. Google, for example, lost advertising revenue after companies and the UK government pulled their adverts because they were running alongside terror videos.
Google recently promised to invest in AI tools to spot and remove content that breaches its terms, as well as boosting staff numbers for YouTube’s Trusted Flagger programme and adding a new warning screen for dubious content. “The uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now,” Google said in a blog post.
Facebook said it was also deploying AI to remove content before members saw it, but admitted the effort was narrowly focused. “We are currently focusing our most cutting-edge techniques to combat terrorist content about ISIS, al-Qaeda and their affiliates, and we expect to expand to other terrorist organisations in due course,” Facebook stated.
Facebook said it was using a combination of AI tools, content review staff and counterterrorism experts to weed out material and accounts. Its arsenal includes:

IMAGE MATCHING Looking for known terrorist photos or videos and preventing re-uploads;
LANGUAGE UNDERSTANDING Experimenting with AI to understand terror-related text;
TERRORIST CLUSTERS Identifying pages, groups and profiles supporting terrorism, and employing algorithms to “fan out” from them to find related material;
RECIDIVISM AI tools to remove fake accounts created by repeat offenders;
CROSS-PLATFORM COLLABORATION Working with other platforms to develop removal systems;
REPORT AND REVIEW 3,000 staff to review reports of inappropriate material;
REAL-WORLD SECURITY SPECIALISTS 150 counterterrorism staff.
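The image-matching idea above can be illustrated with a toy sketch: compute a perceptual hash of each banned image, then block any upload whose hash is within a small Hamming distance of a known one. This is a simplified, hypothetical stand-in for the industrial-grade systems platforms actually use (such as Microsoft's PhotoDNA); the "average hash", the threshold and the `UploadFilter` class are illustrative assumptions, not Facebook's implementation.

```python
# Toy perceptual-hash filter: blocks re-uploads of banned images,
# including slightly altered copies. Illustrative only.

def average_hash(pixels):
    """Hash a grayscale image (list of rows of 0-255 ints):
    each bit records whether a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

class UploadFilter:
    """Keeps hashes of known banned images; near-duplicates
    (hashes within `threshold` bits) are rejected at upload time."""
    def __init__(self, threshold=2):
        self.known = []
        self.threshold = threshold

    def ban(self, pixels):
        self.known.append(average_hash(pixels))

    def allow_upload(self, pixels):
        h = average_hash(pixels)
        return all(hamming(h, k) > self.threshold for k in self.known)

# A banned 4x4 image, a slightly altered re-upload, and an unrelated image
banned = [[10, 200, 10, 200]] * 4
variant = [[10, 200, 10, 200]] * 3 + [[10, 200, 10, 190]]
benign = [[200, 10, 200, 10]] * 4

f = UploadFilter()
f.ban(banned)
print(f.allow_upload(variant))  # near-duplicate of a banned image: blocked
print(f.allow_upload(benign))   # unrelated image: allowed
```

The key property is robustness to small edits: because the hash depends on coarse brightness structure rather than exact bytes, trivially re-encoded or lightly modified copies still match the blocklist.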