The Boston Globe

AI is destabilizing ‘the concept of truth itself’

New technology playing a dirty role in politics

- By Pranshu Verma and Gerrit De Vynck

Experts in artificial intelligence have long warned that AI-generated content could muddy the waters of perceived reality. Weeks into a pivotal election year, such AI confusion is on the rise.

Politicians around the globe have been swatting away potentially damning pieces of evidence — grainy video footage of hotel trysts, voice recordings criticizing political opponents — by dismissing them as AI-generated fakes. At the same time, AI deepfakes are being used to spread misinformation.

On Monday, the New Hampshire Justice Department said it was investigating robocalls featuring what appeared to be an AI-generated voice that sounded like President Biden telling voters to skip the Tuesday primary, the first notable use of AI for voter suppression this campaign cycle.

Last month, former president Donald Trump dismissed an ad on Fox News featuring video of his well-documented public gaffes — including his struggle to pronounce the word “anonymous” in Montana and his visit to the California town of “Pleasure,” a.k.a. Paradise, both in 2018 — claiming the footage was generated by AI.

“The perverts and losers at the failed and once disbanded Lincoln Project, and others, are using A.I. (Artificial Intelligence) in their Fake television commercials in order to make me look as bad and pathetic as Crooked Joe Biden, not an easy thing to do,” Trump wrote on Truth Social. “FoxNews shouldn’t run these ads.”

The Lincoln Project, a political action committee formed by moderate Republicans to oppose Trump, swiftly denied the claim; the ad featured incidents during Trump’s presidency that were widely covered at the time and witnessed in real life by many independent observers.

Still, AI creates a “liar’s dividend,” said Hany Farid, a professor at the University of California, Berkeley, who studies digital propaganda and misinformation. “When you actually do catch a police officer or politician saying something awful, they have plausible deniability” in the age of AI.

AI “destabilizes the concept of truth itself,” added Libby Lange, an analyst at the misinformation tracking organization Graphika. “If everything could be fake, and if everyone’s claiming everything is fake or manipulated in some way, there’s really no sense of ground truth. Politically motivated actors, especially, can take whatever interpretation they choose.”

Trump is not alone in seizing this advantage. Around the world, AI is becoming a common scapegoat for politicians trying to fend off damaging allegations. Late last year, a grainy video surfaced of a ruling-party Taiwanese politician entering a hotel with a woman, indicating he was having an affair. Commentators and other politicians quickly came to his defense, saying the footage was AI-generated, though it remains unclear whether it actually was.

In April, a 26-second voice recording was leaked in which a politician in the Indian state of Tamil Nadu appeared to accuse his own party of illegally amassing $3.6 billion. The politician denied the recording’s veracity, calling it “machine generated”; experts have said they are unsure whether the audio is real or fake.

AI companies have generally said their tools shouldn’t be used in political campaigns now, but enforcement has been spotty. On Friday, OpenAI banned a developer from using its tools after the developer built a bot mimicking long-shot Democratic presidential candidate Dean Phillips. Phillips’s campaign had supported the bot, but after The Washington Post reported on it, OpenAI deemed that it broke rules against use of its tech for campaigns.

AI-related confusion is also swirling beyond politics. Last week, social media users began circulating an audio clip they claimed was a Baltimore County, Md., school principal on a racist tirade. The union that represents the principal has said the audio is AI-generated.

Several signs do point to that conclusion, including the uniform cadence of the speech and indications of splicing, said Farid, who analyzed the audio. But without knowing where it came from or in what context it was recorded, he said, it’s impossible to say for sure.

These claims hold weight because AI deepfakes are more common now and better at replicating a person’s voice and appearance. Deepfakes regularly go viral on X, Facebook, and other social platforms. Meanwhile, the tools and methods to identify an AI-created piece of media are not keeping up with rapid advances in AI’s ability to generate such content.

MICHAEL DWYER/ASSOCIATED PRESS — OpenAI banned a developer from using its tools after the developer built a bot mimicking a Democratic presidential hopeful.
