The Daily Telegraph

Clegg admits Meta failing to detect fake videos amid online safety fears

- By Matthew Field

SIR NICK CLEGG admits Facebook is struggling to combat the spread of fake videos, after Joe Biden and Rishi Sunak were targeted by AI impersonators.

Meta’s head of global affairs said it would introduce new rules to try to halt the spread of synthetic videos that seek to manipulate users, but added it lacked the technology to stop them outright.

Sir Nick said while Meta had developed tools to detect images altered with AI, it could not yet replicate this for video or sound recordings.

The former deputy prime minister said Meta, which owns Facebook and Instagram, would tag synthetic, photo-realistic pictures with a label explaining the image was “imagined with AI”. He said other AI companies had started adding “invisible watermarks” to their images, which Meta was able to detect.

However, these identifying markers have yet to be added to video or audio, “so we can’t yet detect those signals and label this content from other companies”, Sir Nick said.

He added that the company’s efforts were “what’s technically possible right now but it’s not yet possible to identify all AI-generated content”.

Meta said it was working with AI companies to better detect manipulated videos, but meanwhile it would require users to proactively label videos that have been altered with AI, making it clear they are synthetic. Those that fail to do so could be penalised. Sir Nick said the company was being “open about the limits of what’s possible”.

The admission comes as the tech giant faces growing pressure to tackle false videos and images ahead of multiple elections and faces accusations it is not doing enough to protect children online.

On Monday, the company’s Oversight Board, which reviews its moderation decisions, criticised Meta’s policy on “manipulated media”, calling it “incoherent and confusing”. The board asked why Meta’s policy on digitally altered content only applies to videos, when fake audio clips have already been used to spread disinformation about elections. There are fears that AI-generated videos, which can look realistic, could be used to spread misinformation during elections.

The panel said Meta needed to “make revisions quickly, given the record number of elections in 2024”. A digitally altered video of US president Joe Biden, falsely implying he had groped a young woman, spread widely on Facebook last year. More recently, videos with doctored audio from Labour leader Sir Keir Starmer and London Mayor Sadiq Khan have been posted to social media.

Last month, it was reported that hundreds of adverts featuring deepfake footage of Mr Sunak had also been posted to Facebook, promoting an investment scam.

In January, Michelle Donelan, the Technology Secretary, said the Government would introduce “robust mechanisms” to tackle election-related deepfakes. But she warned: “Nobody has a silver bullet”.

Sir Nick Clegg, Meta’s head of global affairs, warned that it is not possible to identify all AI-generated content
