Los Angeles Times

Love deepfakes? Election 2024 shapes up as Year of Lies

Recent advances in AI allow for the creation of lifelike audio and images for campaigns.

- By David Klepper and Ali Swenson. Klepper and Swenson write for the Associated Press.

WASHINGTON — Computer engineers and tech-inclined political scientists have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video and audio that were realistic enough to fool voters and perhaps sway an election.

The synthetic images that emerged were often crude, unconvincing and costly to produce, especially when other kinds of misinformation were so inexpensive and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away.

No more. Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low.

The implications for the 2024 campaigns and elections are as large as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts or videos, it also could be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen.

“We’re not prepared for this,” warned A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. “To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it’s going to have a major impact.”

AI experts can quickly rattle off a number of alarming scenarios in which generative AI is used to create synthetic media for the purposes of confusing voters, slandering a candidate or even inciting violence.

Here are a few: automated robocall messages, in a candidate’s voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video clips showing someone giving a speech or interview they never gave; fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race.

“What if Elon Musk personally calls you and tells you to vote for a certain candidate?” said Oren Etzioni, the founding chief executive of the Allen Institute for AI, who stepped down last year to start the nonprofit AI2. “A lot of people would listen. But it’s not him.”

Former President Trump, who is running in 2024, has shared AI-generated content with his followers on social media. A manipulated video of CNN host Anderson Cooper that Trump shared on his Truth Social platform on Friday, which distorted Cooper’s reaction to the CNN town hall last week with Trump, was created using an AI voice-cloning tool.

A dystopian campaign ad released last month by the Republican National Committee offers another glimpse of this digitally manipulated future. The online ad, which came after President Biden announced his reelection campaign, starts with a strange, slightly warped image of Biden and the text “What if the weakest president we’ve ever had was re-elected?”

A series of AI-generated images follows: Taiwan under attack; boarded up storefronts in the United States as the economy crumbles; soldiers and armored military vehicles patrolling local streets as tattooed criminals and waves of immigrants create panic.

“An AI-generated look into the country’s possible future if Joe Biden is reelected in 2024,” reads the ad’s description from the RNC.

The RNC acknowledged its use of AI, but others, including nefarious political campaigns and foreign adversaries, will not, said Petko Stoyanov, global chief technology officer at Forcepoint, a cybersecurity company based in Austin, Texas. Stoyanov predicted that groups looking to meddle with U.S. democracy will employ AI and synthetic media as a way to erode trust.

“What happens if an international entity — a cybercriminal or a nation-state — impersonates someone? What is the impact? Do we have any recourse?” Stoyanov said. “We’re going to see a lot more misinformation from international sources.”

AI-generated political disinformation already has gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children supposedly learning satanism in libraries.

AI images appearing to show Trump’s mug shot also fooled some social media users even though the former president didn’t have one taken when he was booked and arraigned in a Manhattan criminal court for falsifying business records. Other AI-generated images showed Trump resisting arrest, though their creator was quick to acknowledge their origin.

Legislation that would require candidates to label campaign advertisements created with AI has been introduced in the House by Rep. Yvette Clarke (D-N.Y.), who has also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating the fact.

Some states have offered their own proposals for addressing concerns about deepfakes.

Clarke said her greatest fear is that generative AI could be used before the 2024 election to create video or audio that incites violence and turns Americans against one another.

“It’s important that we keep up with the technology,” Clarke told the Associated Press. “We’ve got to set up some guardrails. People can be deceived, and it only takes a split second. People are busy with their lives and they don’t have the time to check every piece of information. AI being weaponized, in a political season, it could be extremely disruptive.”

This month, a trade association for political consultants in Washington condemned the use of deepfakes in political advertising, calling them “a deception” with “no place in legitimate, ethical campaigns.”

Other forms of artificial intelligence have for years been a feature of political campaigning, using data and algorithms to automate tasks such as targeting voters on social media or tracking down donors. Campaign strategists and tech entrepreneurs hope the most recent innovations will offer some positives in 2024 too.

Mike Nellis, chief executive of the progressive digital agency Authentic, said he uses ChatGPT “every single day” and encourages his staff to use it too, as long as any content drafted with the tool is reviewed by human eyes afterward.

Nellis’ newest project, in partnership with Higher Ground Labs, is an AI tool called Quiller. It will write, send and evaluate the effectiveness of fundraising emails — all typically tedious tasks for campaigns.

“The idea is every Democratic strategist, every Democratic candidate will have a co-pilot in their pocket,” he said.

Photo: Elise Amendola / Associated Press. Advances in artificial intelligence and so-called deepfakes could generate misinformation that can sway elections. Above, a voting booth in Cambridge, Mass.
