PC Probe: Terror clampdown – what more can net firms do?
Politicians are demanding action against extremist content. Stewart Mitchell investigates whether that might backfire
Deflecting attention or rightly apportioning blame? That was the debate after the prime minister told internet firms that they must do more to tackle extremism in the aftermath of the recent terror attacks on the UK.
The extent to which any of the recent terrorists were radicalised online is unclear, but there’s no doubt the internet is increasingly a vehicle for terrorist activity. According to figures from the international policy group Counter Extremism Project (CEP), searches for the dead al-Qaeda operative Anwar al-Awlaki alone yielded 80,300 pieces of extremist content as of 5 June 2017, up from 61,900 results in December 2015.
“Despite YouTube’s pledge to remove hateful material, CEP has instead found Awlaki content to be increasingly available on the platform,” said Steven Cohen, director of CEP. “ISIS changed the landscape of extremism by being the first group to fully exploit the digital world to propagandise, radicalise and recruit new members.
“The power of the web to radicalise and inspire attacks has been shown time and time again, as has the inability of internet and social media companies to effectively combat it.”
While all parties acknowledge that there’s a problem, some experts believe it is wrong to assert that access to extremist material is solely responsible for radicalising terrorists. “The suggestion that the internet intermediaries could solve it if they wanted to is completely misplaced,” said Paul Bernal, a lecturer in IT, human rights and media law at the University of East Anglia. “It’s not like they don’t try.”
“To push the blame onto the internet when there are so many contributory factors is an illusion,” Bernal added. “It deflects from the criticism of police cuts.”
Filter and flag
There are two key tactics that security professionals and technology companies use to reduce the spread of radicalising material. The first, and easiest to implement, is removing and filtering material that could be used to radicalise people – those videos glorifying atrocities and preachers calling for action.
By identifying and fingerprinting content, and listing addresses or domains that push such content, filters can block flagged material; but that works best on private networks, such as in schools. “Schools have to abide by the Prevent Duty, which says anyone in your care, you should be preventing from being radicalised,” said Claire Stead, online safety expert with Smoothwall, which provides network filters for schools. “You have to have appropriate filtering and monitoring in place to make sure that people can’t see things that are going to radicalise them.”
The traffic management – fed by lists of extremist sites provided by the police and other sources – may protect children at school, but that protection often evaporates as soon as the home bell rings. “Kids get a good service at school, where they can visit some sites and get blocked from others, but they go home and can get on everything,” said Stead. “There aren’t off-the-shelf filters for phones and the parental filters at home are a block – [but] they don’t block places that are generally safe but could be used for bad reasons. Social networks should be doing more.”
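The blocklist-driven filtering Stead describes can be sketched very simply. The snippet below is a toy illustration only – the domains are invented, and real products such as Smoothwall’s work against police-supplied feeds with far more sophisticated matching:

```python
# Toy sketch of a domain-blocklist filter of the kind used on school
# networks. The blocklist entries here are hypothetical stand-ins for
# the police-supplied feeds mentioned in the article.
from urllib.parse import urlparse

BLOCKLIST = {"extremist-example.org", "flagged-example.net"}  # invented entries

def is_blocked(url: str) -> bool:
    """Return True if the URL's host, or any parent domain, is flagged."""
    host = urlparse(url).hostname or ""
    # Check every suffix of the hostname, so a flagged domain's
    # subdomains (video.extremist-example.org) are caught too.
    parts = host.split(".")
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

print(is_blocked("https://video.extremist-example.org/clip"))  # True
print(is_blocked("https://www.example.com/news"))              # False
```

The weakness Stead points to is visible in the sketch: the filter only sees traffic on the network it runs on, so the same pupil’s phone at home is untouched.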
Work is being carried out to improve network-level filtering, which aims to block access to certain materials in the same way that child abuse images are flagged and blocked. CEP, for example, has developed a tool based on “robust hashing” technologies, similar to the PhotoDNA algorithms that are currently used to identify and fingerprint child abuse images.
According to CEP, its eGLYPH technology could be deployed on web platforms to detect content and flag for removal at the point of upload. This could also be attached to a web crawler to actively scrape the internet for content, with takedown notices submitted to companies, hosts or carriers. “eGLYPH can efficiently identify known extremist content,” said CEP’s Cohen. “Notably, it can detect video and audio files as well as images.”
“Companies already work to take down this horrific content, and many already deploy ‘robust hashing’ to address child pornography and copyright infringement online,” said Cohen.
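PhotoDNA and eGLYPH are proprietary, but the principle behind “robust hashing” can be shown with a toy average hash: near-identical files map to nearby fingerprints, so a re-encoded copy of known content still matches, while unrelated content doesn’t. Everything below is a simplified illustration, not the real algorithms:

```python
# Toy "robust hash": an average hash over a 64-value grayscale thumbnail.
# Real systems (PhotoDNA, eGLYPH) are far more sophisticated; this only
# demonstrates that small re-encoding changes leave the fingerprint intact.

def average_hash(pixels):
    """pixels: 64 grayscale values (an 8x8 thumbnail). Returns a 64-bit int."""
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

original   = [10] * 32 + [200] * 32   # stand-in for a known flagged image
re_encoded = [12] * 32 + [198] * 32   # same image after light re-compression
unrelated  = [10, 200] * 32           # a different image entirely

h0 = average_hash(original)
print(hamming(h0, average_hash(re_encoded)))  # 0 - still matches
print(hamming(h0, average_hash(unrelated)))   # 32 - clearly no match
```

A platform comparing upload fingerprints against a database of known hashes could flag the re-encoded copy at the point of upload, which is what CEP proposes eGLYPH be used for.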
The problem with filtering and flagging, however, is that terror groups’ tactics extend beyond posting images and videos online. There’s also communication via social networks and systems such as Snapchat, as well as the dark web, all of which remain beyond reach.
ISIS changed the landscape of extremism by being the first group to fully exploit the digital world to propagandise
Direct intervention
The second key approach to stopping radicalisation is to address these issues by infiltrating the groups and finding the people who are doing the grooming. “That’s much harder and much more expensive, but instinct and some of the evidence says that’s what’s far more likely to be effective,” said Bernal. “The dark web isn’t found through Google and Facebook and that’s where the problem is – it emphasises where you have to put the human intelligence, human experts going in there and doing it.”
Ironically, the two different approaches counteract each other. If you take down all the extremist material, it doesn’t act as a honey trap to those susceptible to radicalisation, making them harder to find. “If we think the problem is people coming across this stuff by accident, then it’s better to block everything we can,” Bernal said. “But if we think that the problem is people in the process of being radicalised – having that protester turned into an actual terrorist – then we need to be looking at the human intensive approach.”
“Otherwise you can’t find the people who are visiting these sites and they are precisely the people you want to stop taking the next steps.”
More resources, not more laws
Although successive governments have used terror as justification for stronger surveillance laws, what might prove more effective is additional staff for the security services. “If the police are to take this problem more seriously they have to be given the resources,” said Bernal. “It doesn’t mean greater surveillance powers – they have that already – it means investing in more manpower and expertise and that all costs money. You can’t do it without more money.”
The Home Office hasn’t provided details of funding, but the government did say last year that it was setting aside £1.9 billion over five years to fight cybercrime, although it’s unclear how much of that fund will be dedicated to combating terrorism. “The UK government has said there’s going to be a vast amount given to cybersecurity strategy and some of that could be given to fighting extremism, because this is part of our cyber threat at the moment,” said Stead.