AI is destabilizing ‘the concept of truth itself’
New technology playing a dirty role in politics
Experts in artificial intelligence have long warned that AI-generated content could muddy the waters of perceived reality. Weeks into a pivotal election year, such AI confusion is on the rise.
Politicians around the globe have been swatting away potentially damning pieces of evidence — grainy video footage of hotel trysts, voice recordings criticizing political opponents — by dismissing them as AI-generated fakes. At the same time, AI deep-fakes are being used to spread misinformation.
On Monday, the New Hampshire Justice Department said it was investigating robocalls featuring what appeared to be an AI-generated voice that sounded like President Biden telling voters to skip the Tuesday primary, the first notable use of AI for voter suppression this campaign cycle.
Last month, former president Donald Trump dismissed an ad on Fox News featuring video of his well-documented public gaffes — including his struggle to pronounce the word “anonymous” in Montana and his visit to the California town of “Pleasure,” a.k.a. Paradise, both in 2018 — claiming the footage was generated by AI.
“The perverts and losers at the failed and once disbanded Lincoln Project, and others, are using A.I. (Artificial Intelligence) in their Fake television commercials in order to make me look as bad and pathetic as Crooked Joe Biden, not an easy thing to do,” Trump wrote on Truth Social. “FoxNews shouldn’t run these ads.”
The Lincoln Project, a political action committee formed by moderate Republicans to oppose Trump, swiftly denied the claim; the ad featured incidents during Trump’s presidency that were widely covered at the time and witnessed in real life by many independent observers.
Still, AI creates a “liar’s dividend,” said Hany Farid, a professor at the University of California at Berkeley who studies digital propaganda and misinformation. “When you actually do catch a police officer or politician saying something awful, they have plausible deniability” in the age of AI.
AI “destabilizes the concept of truth itself,” added Libby Lange, an analyst at the misinformation tracking organization Graphika. “If everything could be fake, and if everyone’s claiming everything is fake or manipulated in some way, there’s really no sense of ground truth. Politically motivated actors, especially, can take whatever interpretation they choose.”
Trump is not alone in seizing this advantage. Around the world, AI is becoming a common scapegoat for politicians trying to fend off damaging allegations. Late last year, a grainy video surfaced of a ruling-party Taiwanese politician entering a hotel with a woman, suggesting he was having an affair. Commentators and other politicians quickly came to his defense, saying the footage was AI-generated, though it remains unclear whether it actually was.
In April, a 26-second voice recording was leaked in which a politician in the Indian state of Tamil Nadu appeared to accuse his own party of illegally amassing $3.6 billion. The politician denied the recording’s veracity, calling it “machine generated”; experts have said they are unsure whether the audio is real or fake.
AI companies have generally said their tools shouldn’t be used in political campaigns, but enforcement has been spotty. On Friday, OpenAI banned a developer from using its tools after the developer built a bot mimicking long-shot Democratic presidential candidate Dean Phillips. Phillips’s campaign had supported the bot, but after The Washington Post reported on it, OpenAI deemed that it broke rules against use of its tech for campaigns.
AI-related confusion is also swirling beyond politics. Last week, social media users began circulating an audio clip they claimed was a Baltimore County, Md., school principal on a racist tirade. The union that represents the principal has said the audio is AI-generated.
Several signs do point to that conclusion, including the uniform cadence of the speech and indications of splicing, said Farid, who analyzed the audio. But without knowing where it came from or in what context it was recorded, he said, it’s impossible to say for sure.
These claims hold weight because AI deepfakes are more common now and better at replicating a person’s voice and appearance. Deepfakes regularly go viral on X, Facebook and other social platforms. Meanwhile, the tools and methods for identifying an AI-created piece of media are not keeping pace with rapid advances in AI’s ability to generate such content.