PC Pro

What the big firms are doing

Major web companies say they’re taking measures to prevent the spread of extremist material


In some cases, companies have been moved to act not only by terrorist atrocities but also by commercial pressure. Google, for example, lost advertising revenue after companies and the UK government pulled their adverts because they were being run alongside terror videos.

Google recently promised to invest in AI tools to spot and remove content that breaches its terms, as well as boosting staff numbers for YouTube’s Trusted Flagger programme and adding a new warning screen for dubious content. “The uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now,” Google said in a blog post.

Facebook said it was also deploying AI to remove content before it was seen by members, but admitted its efforts were narrowly focused. “We are currently focusing our most cutting-edge techniques to combat terrorist content about ISIS, al-Qaeda and their affiliates, and we expect to expand to other terrorist organisations in due course,” Facebook stated.

Facebook said it was using a combination of AI tools, content review staff and counterterrorism experts to weed out material and accounts, with an arsenal including:

IMAGE MATCHING Looks for terrorist photos or videos and prevents re-uploads.

LANGUAGE UNDERSTANDING Experimenting with AI to understand terror-related text.

TERRORIST CLUSTERS Identifying pages, groups and profiles supporting terrorism, and employing algorithms to “fan out” to identify related material.

RECIDIVISM AI tools to remove fake accounts created by repeat offenders.

CROSS-PLATFORM COLLABORATION Working with other platforms to develop removal systems.

REPORT AND REVIEW 3,000 staff to review reports of inappropriate material.

REAL-WORLD SECURITY SPECIALISTS 150 counterterrorism staff.
