The Herald

Calls for a regulator to tackle deepfake AI tech

A REGULATOR should be appointed to oversee the development and use of artificial intelligence (AI) technology in order to tackle issues such as deepfake pornography, a charity has said.

Christian Action Research and Education (Care) said some uses of AI technology, such as the creation of deepfake pornographic images, sit in a legal “grey area”: creating sexually explicit deepfake images or videos of an adult using AI tools is not currently a crime, although a new law that would make it one is making its way through parliament.

The charity said a dedicated AI regulator could help resolve and police the issue.

It highlighted previous deepfake incidents involving politicians as one warning sign, as well as deepfake pornographic images of singer Taylor Swift, which circulated on social media.

The charity’s call comes as parliament prepares to debate legislation that would establish a central AI regulator, known as the AI Authority.

Currently, the UK Government has proposed that existing regulators take on the role of monitoring AI use within their own sectors.

In February, it announced that more than £100 million would be spent preparing the UK to regulate AI, including helping to upskill regulators in different sectors.

But Louise Davies, director of advocacy and policy at Care, said a centralised AI regulator was “essential”.

“One of the first things a regulator could do is ban so-called ‘nudification’ technology.

“Whilst the sharing of a deepfake sexual image may constitute an offence at present, it is not illegal to create images or videos using tools that are found easily online.

“We see no reason why individuals should be allowed to market tools that let others create sexual images of real people without consent.”
