The Guardian Australia

‘Disinformation on steroids’: is the US prepared for AI’s influence on the election?

- Rachel Leingang

The AI election is here.

Already this year, a robocall generated using artificial intelligence targeted New Hampshire voters in the January primary, purporting to be President Joe Biden and telling them to stay home in what officials said could be the first attempt at using AI to interfere with a US election. The “deepfake” calls were linked to two Texas companies, Life Corporation and Lingo Telecom.

It’s not clear whether the deepfake calls actually kept voters home, but that doesn’t really matter, said Lisa Gilbert, executive vice-president of Public Citizen, a group that has been pushing for federal and state regulation of AI’s use in politics.

“I don’t think we need to wait to see how many people got deceived to understand that that was the point,” Gilbert said.

Examples of what could be ahead for the US are happening all over the world. In Slovakia, fake audio recordings might have swayed an election in what serves as a “frightening harbinger of the sort of interference the United States will likely experience during the 2024 presidential election”, CNN reported. In Indonesia, an AI-generated avatar of a military commander helped rebrand the country’s defense minister as a “chubby-cheeked” man who “makes Korean-style finger hearts and cradles his beloved cat, Bobby, to the delight of Gen Z voters”, Reuters reported. In India, AI versions of dead politicians have been brought back to compliment elected officials, according to Al Jazeera.

But US regulations aren’t ready for the boom in fast-paced AI technology and how it could influence voters. Soon after the fake call in New Hampshire, the Federal Communications Commission announced a ban on robocalls that use AI audio. The agency has yet to put rules in place to govern the use of AI in political ads, though states are moving quickly to fill the gap in regulation.

The US House launched a bipartisan taskforce on 20 February that will research ways AI could be regulated and issue a report with recommendations. But with partisan gridlock ruling Congress, and US regulation trailing the pace of AI’s rapid advance, it’s unclear what, if anything, could be in place in time for this year’s elections.

Without clear safeguards, the impact of AI on the election might come down to what voters can discern as real and not real. AI – in the form of text, bots, audio, photo or video – can be used to make it look like candidates are saying or doing things they didn’t do, either to damage their reputations or mislead voters. It can be used to beef up disinformation campaigns, making imagery that looks real enough to create confusion for voters.

Audio content, in particular, can be even more manipulative: the technology for video isn’t as advanced yet, and recipients of AI-generated calls lose the contextual clues to fakery that they might find in a deepfake video. Experts also fear that AI-generated calls will mimic the voices of people a recipient knows in real life, giving them greater influence because the caller would seem like someone trusted. In what is commonly called the “grandparent” scam, callers can now use AI to clone a loved one’s voice and trick the target into sending money. That could theoretically be applied to politics and elections.

“It could come from your family member or your neighbor and it would sound exactly like them,” Gilbert said. “The ability to deceive from AI has put the problem of mis- and disinformation on steroids.”

There are also less misleading uses of the technology to underscore a message, like the recent creation of AI audio calls using the voices of kids killed in mass shootings, aimed at pushing lawmakers to act on gun violence. Some political campaigns even use AI to show alternate realities to make their points, like a Republican National Committee ad that used AI to create a fake future if Biden is re-elected. And some AI-generated imagery can seem innocuous at first, like the rampant faked images of people next to carved wooden dog sculptures popping up on Facebook, only to be used to spread nefarious content later on.

People wanting to influence elections no longer need to “handcraft artisanal election disinformation”, said Chester Wisniewski, a cybersecurity expert at Sophos. Now, AI tools help dispatch bots that sound like real people more quickly, “with one bot master behind the controls like the guy on the Wizard of Oz”.

Perhaps most concerning, though, is that the advent of AI can make people question whether anything they’re seeing is real or not, introducing a heavy dose of doubt at a time when the technologies themselves are still learning how to best mimic reality.

“There’s a difference between what AI might do and what AI is actually doing,” said Katie Harbath, who formerly worked in policy at Facebook and now writes about the intersection between technology and democracy. People will start to wonder, she said, “what if AI could do all this? Then maybe I shouldn’t be trusting everything that I’m seeing.”

Even without government regulation, companies that manage AI tools have announced and launched plans to limit the technology’s potential influence on elections, such as having their chatbots direct people to trusted sources on where to vote and not allowing chatbots that imitate candidates. A recent pact among companies such as Google, Meta, Microsoft and OpenAI includes “reasonable precautions” such as additional labeling of and education about AI-generated political content, though it wouldn’t ban the practice.

But bad actors often flout or skirt around government regulations and limitations put in place by platforms. Think of the “do not call” list: even if you’re on it, you still probably get some spam calls.

At the national level, or with major public figures, debunking a deepfake happens fairly quickly, with outside groups and journalists jumping in to spot a spoof and spread the word that it’s not real. When the scale is smaller, though, there are fewer people working to debunk something that could be AI-generated. Narratives begin to set in. In Baltimore, for example, recordings posted in January of a local principal allegedly making offensive comments could be AI-generated – it’s still under investigation.

In the absence of regulations from the Federal Election Commission (FEC), a handful of states have instituted laws over the use of AI in political ads, and dozens more states have filed bills on the subject. At the state level, regulating AI in elections is a bipartisan issue, Gilbert said. The bills often call for clear disclosures or disclaimers in political ads that make sure voters understand content was AI-generated; without such disclosure, the use of AI is then banned in many of the bills, she said.

The FEC opened a rule-making process for AI last summer, and the agency said it expects to resolve it sometime this summer, the Washington Post has reported. Until then, political ads with AI may have some state regulations to follow, but otherwise aren’t restricted by any AI-specific FEC rules.

“Hopefully we will be able to get something in place in time, so it’s not kind of a wild west,” Gilbert said. “But it’s closing in on that point, and we need to move really fast.”


Composite: The Guardian/Getty Images
