The Post

Deepfake of Lake demonstrates coming chaos of AI in elections


Hank Stephenson has a finely tuned BS detector. The longtime journalist has made a living sussing out lies and political spin. But even he was fooled at first when he watched the video of one of his home state’s most prominent congressional candidates.

There was Kari Lake, the Republican Senate hopeful from Arizona, on his phone screen, speaking words written by a software engineer. Stephenson was watching a deepfake – an artificial intelligence-generated video produced by his news organisation, Arizona Agenda, to underscore the dangers of AI misinformation in a pivotal election year.

“When we started doing this, I thought it was going to be so bad it wouldn’t trick anyone, but I was blown away,” said Stephenson, who co-founded the site in 2021.

“And we are unsophisticated. If we can do this, then anyone with a real budget can do a good enough job that it’ll trick you, it’ll trick me, and that is scary.”

As a tight 2024 US presidential election draws ever nearer, experts and officials are increasingly sounding the alarm about the potentially devastating power of AI deepfakes, which they fear could further corrode the country’s sense of truth and destabilise the electorate.

There are signs that AI – and the fear surrounding it – is already having an impact on the race.

Late last year, former president Donald Trump falsely accused the producers of an advertisement that showed his well-documented public gaffes of trafficking in AI-generated content. Meanwhile, actual fake images of Trump and other political figures, designed both to boost and to bruise, have gone viral again and again, sowing chaos at a crucial point in the election cycle.

Now some officials are responding.

In recent months, the New Hampshire Justice Department announced that it was investigating a spoof robocall featuring an AI-generated voice of US President Joe Biden. Washington state has warned its voters to be on the lookout for deepfakes, and lawmakers from Oregon to Florida have passed bills restricting the use of such technology in campaign communications.

And in Arizona, a key swing state in the 2024 contest, the top elections official used deepfakes of himself in a training exercise to prepare staff for the onslaught of falsehoods to come.

The exercise inspired Stephenson and his colleagues at the Arizona Agenda, whose daily newsletter seeks to explain complex political stories to an audience of some 10,000 subscribers. They brainstormed ideas for about a week and enlisted the help of a tech-savvy friend. On Saturday, Stephenson published the piece, which includes three deepfake clips of Lake.

It begins with a ploy, telling readers that Lake – a hard-right candidate whom the Arizona Agenda has pilloried in the past – decided to record a testimonial about how much she enjoys the outlet. But the video quickly pivots to the giveaway punchline.

“Subscribe to the Arizona Agenda for hard-hitting real news,” the fake Lake says to the camera, before adding: “And a preview of the terrifying artificial intelligence coming your way in the next election, like this video, which is an AI deepfake the Arizona Agenda made to show you just how good this technology is getting.”

The videos generated tens of thousands of views – and one very unhappy response from the real Lake, whose campaign lawyers sent the Arizona Agenda a cease-and-desist letter. A spokesperson for the campaign declined to comment.

Stephenson said he was not planning to remove the videos. He said the deepfakes were good learning devices, and he wanted to arm readers with the tools to detect such forgeries before they were bombarded with them as the election season heated up.

“Fighting this new wave of technological disinformation this election cycle is on all of us,” Stephenson wrote in the article accompanying the clips. “Your best defence is knowing what’s out there – and using your critical thinking.”

Hany Farid, a professor at the University of California at Berkeley who studies digital propaganda and misinformation, said the Arizona Agenda videos were useful public service announcements that appeared carefully crafted to limit unintended consequences. Even so, he said, outlets should be wary of how they framed their deepfake reportage.

“You don’t want your readers and viewers to look at everything that doesn’t conform to their world view as fake.”

Deepfakes presented two distinct “threat vectors”, Farid said. First, bad actors could generate false videos of people saying things they never actually said; and second, people could more credibly dismiss any real embarrassing or incriminating footage as fake.

He said this dynamic has been especially apparent during Russia’s invasion of Ukraine, a conflict rife with misinformation. Early in the war, Ukraine promoted a deepfake showing Paris under attack, urging world leaders to react to the Kremlin’s aggression with as much urgency as they might show if the Eiffel Tower had been targeted.

It was a potent message, Farid said, but it opened the door for Russia’s baseless claims that subsequent videos from Ukraine, which showed evidence of Kremlin war crimes, were similarly feigned.

“I am worried that everything is becoming suspect,” he said.

Stephenson, whose backyard is a political battleground that lately has become a crucible of conspiracy theories and false claims, has a similar fear.

“For many years now, we’ve been battling over what’s real,” he said. “Objective facts can be written off as fake news, and now objective videos will be written off as deepfakes, and deepfakes will be treated as reality.”

Researchers like Farid are feverishly working on software that would allow journalists and others to more easily detect deepfakes. Farid said the suite of tools he currently used easily classified the Arizona Agenda video as bogus, a hopeful sign for the coming flood of fakes.

However, deepfake technology is improving at a rapid rate, and future fakes could be much harder to spot.

And even Stephenson’s admittedly sub-par deepfake managed to dupe a few people. A handful of paying Arizona Agenda readers unsubscribed. Most likely, Stephenson suspects, they thought Lake’s endorsement was real.

WASHINGTON POST: Kari Lake, a close ally of Donald Trump who is seeking the Senate seat in Arizona, was the target of a deepfake ad created by an online news site.
