Western Mail

‘Action now as tech giants fail to block access to foul child images’

- RYAN HOOPER Press Association chief reporter newsdesk@walesonline.co.uk

TECHNOLOGY giants and social media firms have “failed” in attempts to prevent access to child sex abuse images, allowing for “an explosion in online-facilitated” crimes against children, an inquiry has found.

Industry leaders such as Microsoft, Facebook, Google and Apple have all struggled to get to grips with “the scale of the problem on their platforms and services”, and should “do more to identify the true scale of the different types of offending”, such as child grooming, the Independent Inquiry into Child Sexual Abuse report found.

It said regulation of the internet “was now required”, and called on the government to press industry leaders into a raft of action designed to limit abuse, including pre-screening images uploaded to the web and introducing new age-verification technology.

Professor Alexis Jay, inquiry chairwoman, said: “The serious threat of child sexual abuse facilitated by the internet is an urgent problem which cannot be overstated.

“Despite industry advances in technology to detect and combat online-facilitated abuse, the risk of immeasurable harm to children and their families shows no sign of diminishing.

“The panel and I hope this report and its recommendations lead internet companies, law enforcement and government to implement vital measures to prioritise the protection of children and prevent abuse facilitated online.”

The report is based on 14 days of public hearings held in January 2018 and May 2019, during which the Met – Britain’s biggest police force – said it had witnessed a 700% spike in the number of online child abuse cases referred to it by national investigators over three years.

It also heard that live-streaming websites were “enabling” paedophiles to widely share videos of child sexual abuse by failing to effectively combat the threat.

In its 114-page report, published yesterday, the inquiry made four recommendations to government:

■ to require industry to pre-screen material before it is uploaded to the internet to prevent access to known indecent images of children;

■ to press the WeProtect Global Alliance – a group comprising 97 governments, 25 technology companies and 30 civil society organisations – to take more action internationally to ensure that those countries hosting indecent images of children implement legislation and procedures to prevent access to such imagery;

■ to introduce legislation requiring providers of online services and social media platforms to implement more stringent age verification techniques on all relevant devices; and

■ to publish, without further delay, the interim code of practice in respect of child sexual abuse and exploitation as proposed by the Online Harms White Paper.

The government is currently working on new legislatio­n around online harms, including placing a statutory duty of care on tech companies to keep their users safe, overseen by an independen­t regulator.

Earlier this month the UK joined the US, Canada, Australia and New Zealand in launching the Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse, which detailed actions tech companies should take to protect younger users on their platforms.

The pledges range from stopping existing and new child sex abuse material appearing on platforms to taking steps to stop the live-streaming of abuse, and identifying and stopping grooming and predatory behaviour.

The proposals were endorsed by Facebook, Google, Microsoft, TikTok, Twitter and Snap.

But yesterday’s report identified that there were no evident barriers to pre-screening images.

It said: “Industry has failed to do all it can to prevent access to images of child sexual abuse.

“The time has come to stop access to such imagery by requiring industry to pre-screen material. No industry witness said that such a step was technologically impossible.”

It said there had “been an explosion in online-facilitated child sexual abuse” and that “law enforcement is struggling to keep pace”.

The report also found that indecent images could “be accessed all too easily”, saying that the child involved was re-victimised every time the image was viewed.

The report said: “The time has come for the Government to stop access to indecent images of children by requiring industry to pre-screen material.”

It added that while there was evidence of “the positive intentions by industry to tackle online-facilitated child sexual abuse and exploitation”, there was “a lack of a coherent long-term strategy on how this is to be achieved”.

The report concludes the latest strand of the inquiry, which has also focused on the role of the political establishment in dealing with allegations of child sexual abuse.

Susie Hargreaves, chief executive of the Internet Watch Foundation charity, which contributed evidence to the inquiry, welcomed the report. She said: “There is no longer any reason not to be decisive on taking action against the predators online.

“This report makes it abundantly clear there is no room for excuses.”

Andy Burrows, head of child safety online policy at the NSPCC, described the report as “a damning indictment of Big Tech’s failure to take seriously their duty to protect young people from child abuse, which has been facilitated on their platforms on a massive scale”.
