Deepfake of Lake demonstrates coming chaos of AI in elections
Hank Stephenson has a finely tuned BS detector. The longtime journalist has made a living sussing out lies and political spin. But even he was fooled at first when he watched the video of one of his home state’s most prominent congressional candidates.
There was Kari Lake, the Republican Senate hopeful from Arizona, on his phone screen, speaking words written by a software engineer. Stephenson was watching a deepfake – an artificial intelligence-generated video produced by his news organisation, Arizona Agenda, to underscore the dangers of AI misinformation in a pivotal election year.
“When we started doing this, I thought it was going to be so bad it wouldn’t trick anyone, but I was blown away,” said Stephenson, who co-founded the site in 2021.
“And we are unsophisticated. If we can do this, then anyone with a real budget can do a good enough job that it’ll trick you, it’ll trick me, and that is scary.”
As a tight 2024 US presidential election draws ever nearer, experts and officials are increasingly sounding the alarm about the potentially devastating power of AI deepfakes, which they fear could further corrode the country’s sense of truth and destabilise the electorate.
There are signs that AI – and the fear surrounding it – is already having an impact on the race.
Late last year, former president Donald Trump falsely accused the producers of an advertisement that showed his well-documented public gaffes of trafficking in AI-generated content. Meanwhile, actual fake images of Trump and other political figures, designed both to boost and to bruise, have gone viral again and again, sowing chaos at a crucial point in the election cycle.
Now some officials are responding.
In recent months, the New Hampshire Justice Department announced that it was investigating a spoof robocall featuring an AI-generated voice of US President Joe Biden. Washington state has warned its voters to be on the lookout for deepfakes, and lawmakers from Oregon to Florida have passed bills restricting the use of such technology in campaign communications.
And in Arizona, a key swing state in the 2024 contest, the top elections official used deepfakes of himself in a training exercise to prepare staff for the onslaught of falsehoods to come.
The exercise inspired Stephenson and his colleagues at the Arizona Agenda, whose daily newsletter seeks to explain complex political stories to an audience of some 10,000 subscribers. They brainstormed ideas for about a week, and enlisted the help of a tech-savvy friend. On Saturday, Stephenson published the piece, which includes three deepfake clips of Lake.
It begins with a ploy, telling readers that Lake – a hard-right candidate whom the Arizona Agenda has pilloried in the past – decided to record a testimonial about how much she enjoys the outlet. But the video quickly pivots to the giveaway punchline.
“Subscribe to the Arizona Agenda for hard-hitting real news,” the fake Lake says to the camera, before adding: “And a preview of the terrifying artificial intelligence coming your way in the next election, like this video, which is an AI deepfake the Arizona Agenda made to show you just how good this technology is getting.”
The videos generated tens of thousands of views – and one very unhappy response from the real Lake, whose campaign lawyers sent the Arizona Agenda a cease-and-desist letter. A spokesperson for the campaign declined to comment.
Stephenson said he was not planning to remove the videos. He said the deepfakes were good learning devices, and he wanted to arm readers with the tools to detect such forgeries before they were bombarded with them as the election season heated up.
“Fighting this new wave of technological disinformation this election cycle is on all of us,” Stephenson wrote in the article accompanying the clips. “Your best defence is knowing what’s out there – and using your critical thinking.”
Hany Farid, a professor at the University of California at Berkeley who studies digital propaganda and misinformation, said the Arizona Agenda videos were useful public service announcements that appeared carefully crafted to limit unintended consequences. Even so, he said, outlets should be wary of how they framed their deepfake reportage.
“You don’t want your readers and viewers to look at everything that doesn’t conform to their world view as fake.”
Deepfakes presented two distinct “threat vectors”, Farid said. First, bad actors could generate false videos of people saying things they never actually said; and second, people could more credibly dismiss any real embarrassing or incriminating footage as fake.
He said this dynamic has been especially apparent during Russia’s invasion of Ukraine, a conflict rife with misinformation. Early in the war, Ukraine promoted a deepfake showing Paris under attack, urging world leaders to react to the Kremlin’s aggression with as much urgency as they might show if the Eiffel Tower had been targeted.
It was a potent message, Farid said, but it opened the door for Russia’s baseless claims that subsequent videos from Ukraine, which showed evidence of Kremlin war crimes, were similarly feigned.
“I am worried that everything is becoming suspect,” he said.
Stephenson, whose backyard is a political battleground that lately has become a crucible of conspiracy theories and false claims, has a similar fear.
“For many years now, we’ve been battling over what’s real,” he said. “Objective facts can be written off as fake news, and now objective videos will be written off as deepfakes, and deepfakes will be treated as reality.”
Researchers like Farid are feverishly working on software that would allow journalists and others to more easily detect deepfakes. Farid said the suite of tools he currently used easily classified the Arizona Agenda video as bogus, a hopeful sign for the coming flood of fakes.
However, deepfake technology is improving at a rapid rate, and future fakes could be much harder to spot.
And even Stephenson’s admittedly sub-par deepfake managed to dupe a few people. A handful of paying Arizona Agenda readers unsubscribed. Most likely, Stephenson suspects, they thought Lake’s endorsement was real.