Yorkshire Post

AI software will identify ‘live threats’ for children


TECHNOLOGY is being developed that can block sexual or violent content as it is being filmed, shared or livestreamed, which could help safeguard hundreds of thousands of children.

A British start-up is using live-threat detection software, powered by artificial intelligence, to identify potentially harmful material as it is filmed or shared in real time.

It could be used on children’s phones to prevent them from creating, sending or receiving video or pictures involving nudity, sexual content and violence “before any damage is done”.

This capability is seen as key to safeguarding, given that 29 per cent of the child sexual abuse content acted on last year by the Internet Watch Foundation was self-generated, a proportion that is rising steeply.

Social media companies could use the technology to help prevent graphic content being uploaded and to interrupt livestreams, protecting users and minimising the exposure of moderators to potentially traumatising material, SafeToNet believes.

The company has already produced a device using similar AI which detects patterns on a phone’s keyboard to prevent sexting, bullying and other abuse.

This technology flagged up girls as young as nine who were being sent explicit texts during the coronavirus lockdown.

Chief executive Richard Pursey said the new technology, SafeToWatch, could help prevent grooming, sextortion and bullying.
