The Guardian (USA)

Elections in UK and US at risk from AI-driven disinformation, say experts

- Dan Milmo and Alex Hern

Next year’s elections in Britain and the US could be marked by a wave of AI-powered disinformation, experts have warned, as generated images, text and deepfake videos go viral at the behest of swarms of AI-powered propaganda bots.

Sam Altman, CEO of the ChatGPT creator, OpenAI, told a congressional hearing in Washington this week that the models behind the latest generation of AI technology could manipulate users.

“The general ability of these models to manipulate and persuade, to provide one-on-one interactive disinformation is a significant area of concern,” he said.

“Regulation would be quite wise: people need to know if they’re talking to an AI, or if content that they’re looking at is generated or not. The ability to really model … to predict humans, I think is going to require a combination of companies doing the right thing, regulation and public education.”

The prime minister, Rishi Sunak, said on Thursday the UK would lead on limiting the dangers of AI. Concerns over the technology have soared after breakthroughs in generative AI, where tools like ChatGPT and Midjourney produce convincing text, images and even voice on command.

Where earlier waves of propaganda bots relied on simple pre-written messages sent en masse, or buildings full of “paid trolls” to perform the manual work of engaging with other humans, ChatGPT and other technologies raise the prospect of interactive election interference at scale.

An AI trained to repeat talking points about Taiwan, climate breakdown or LGBT+ rights could tie up political opponents in fruitless arguments while convincing onlookers, across thousands of different social media accounts at once.

Prof Michael Wooldridge, director of foundation AI research at the UK’s Alan Turing Institute, said AI-powered disinformation was his main concern about the technology.

“Right now in terms of my worries for AI, it is number one on the list. We have elections coming up in the UK and the US and we know social media is an incredibly powerful conduit for misinformation. But we now know that generative AI can produce disinformation on an industrial scale,” he said.

Wooldridge said chatbots such as ChatGPT could produce tailored disinformation targeted at, for instance, a Conservative voter in the home counties, a Labour voter in a metropolitan area, or a Republican supporter in the midwest.

“It’s an afternoon’s work for somebody with a bit of programming experience to create fake identities and just start generating these fake news stories,” he said.

After fake pictures of Donald Trump being arrested in New York went viral in March, shortly before eye-catching AI-generated images of Pope Francis in a Balenciaga puffer jacket spread even further, others expressed concern about generated imagery being used to confuse and misinform. But, Altman told the US senators, those concerns could be overblown.

“Photoshop came on to the scene a long time ago and for a while people were really quite fooled by Photoshopped images – then pretty quickly developed an understanding that images might be Photoshopped.”

But as AI capabilities become more and more advanced, there are concerns it is becoming increasingly difficult to believe anything we encounter online, whether it is misinformation, where a falsehood is spread mistakenly, or disinformation, where a fake narrative is generated and distributed on purpose.

Voice cloning, for instance, came to prominence in January after the emergence of a doctored video of the US president, Joe Biden, in which footage of him talking about sending tanks to Ukraine was transformed via voice simulation technology into an attack on transgender people – and was shared on social media.

A tool developed by the US firm ElevenLabs was used to create the fake version. The viral nature of the clip helped spur other spoofs, including one of Bill Gates purportedly saying the Covid-19 vaccine causes Aids. ElevenLabs, which admitted in January it was seeing “an increasing number of voice cloning misuse cases”, has since toughened its safeguards against vexatious use of its technology.

Recorded Future, a US cybersecurity firm, said rogue actors could be found selling voice cloning services online, including the ability to clone voices of corporate executives and public figures.

Alexander Leslie, a Recorded Future analyst, said the technology would only improve and become more widely available in the run-up to the US presidential election, giving the tech industry and governments a window to act now.

“Without widespread education and awareness this could become a real threat vector as we head into the presidential election,” said Leslie.

A study by NewsGuard, a US organisation that monitors misinformation and disinformation, tested the model behind the latest version of ChatGPT by prompting it to generate 100 examples of false news narratives, out of approximately 1,300 commonly used fake news “fingerprints”.

NewsGuard found that it could generate all 100 examples as asked, including “Russia and its allies were not responsible for the crash of Malaysia Airlines flight MH17 in Ukraine”. A test of Google’s Bard chatbot found that it could produce 76 such narratives.

NewsGuard also announced on Friday that the number of AI-generated news and information websites it was aware of had more than doubled in two weeks to 125.

Steven Brill, NewsGuard’s co-CEO, said he was concerned that rogue actors could harness chatbot technology to mass-produce variations of fake stories. “The danger is someone using it deliberately to pump out these false narratives,” he said.

Photograph: Jim Lo Scalzo/EPA. Sam Altman, the CEO of OpenAI, told a congressional hearing that AI models could manipulate humans.
