AI gives sleazy pols an alibi — that’s not me, it’s my avatar
Ron DeSantis did more than just lose to Donald Trump. His unfortunate campaign created a deepfake, AI-generated counterfeit Trump that the real guy has turned to his lasting advantage.
Thanks Ron. Politics will never be the same.
Never Back Down (the DeSantis super PAC, lately minus the never) posted two mendacious videos on social media last summer featuring an avatar who looked and sounded like the actual Trump. Except this version hugged Anthony Fauci and dissed Kim Reynolds, the pro-DeSantis governor of Iowa. Both tableaus were phony.
Obviously, the fictional antics of Deep Fake Donald didn’t faze the 56,260 Iowans who gave him 51% of the caucus vote. Supporters of the corporeal Trump have come to expect a gush of contradictory bombast from their hero. It’s all part of the show. (I doubt the likes of ChatGPT have enough computing power to out-demagogue the genuine article.)
The loss sent DeSantis slouching home, but the unintended consequences of his deception are still around. Instead of crippling his candidacy, the deepfakes have had the perverse effect of immunizing Trump against future recorded revelations. It doesn’t matter whether the video or audio clips are authentic; Trump can claim they’re high-tech fakery.
On Truth Social last month, Trump responded to an attack ad featuring a compilation of his verbal gaffes. “The perverts and losers at the failed and once disbanded Lincoln Project, and others, are using AI (artificial Intelligence) in their Fake television commercials in order to make me look as bad and pathetic as Crooked Joe Biden, not an easy thing to do.”
Except, the anti-Trump Republicans at the Lincoln Project created their ad with authentic recordings of Trump mangling words like a confused old man. According to the Washington Post, “The ad featured incidents during Trump’s presidency that were widely covered at the time and witnessed in real life by many independent observers.”
A dizzying cascade of advances in AI technology is revolutionizing science, medicine, engineering, policing, writing, teaching, art, climatology, business practices, military defense, homework, dissertations and (thankfully) wedding toasts.
In May, an executive with the business services giant PricewaterhouseCoopers explained the company’s billion-dollar investment in AI technology: “We are at a tipping point in business and society where AI will revolutionize how we work, live and interact at scale.”
But there’s a shady side to generative AI, which along with the fabulous has enabled an onslaught of unseemly and illicit applications. Suddenly, we’re targeted by phone scams with perfect imitations of relatives’ voices. AI has provided bad actors a lightning-fast mechanism for stealing identities, hacking retirement accounts, plagiarizing art, disseminating disinformation and turning innocents into porn actors. AI transforms political operatives into dirty tricksters. (Just last weekend, a computer-generated, sounds-just-like Joe Biden robocall urged New Hampshire Democrats not to vote in last Tuesday’s primary — not at all what the real Joe wanted.)
On January 19, the Florida Bar issued ethical guidelines dealing with the “escalating threat” of AI intrusions into lawyers’ confidential client data. The Bar was also concerned about lawyers depending on AI to compose legal briefs. “Every place I go, a judge comes up to me and says a lawyer filed a brief using ChatGPT and there were errors in it,” Bar President Scott Westheimer told the Florida Bar News.
And now, with the most consequential election in memory approaching, along comes this superpowered technology that can fabricate videos of incriminating incidents that never happened.
Therein lies a 21st century conundrum: If AI can insert anyone’s voice and image into a counterfeit scenario, then anyone accurately recorded in a real-life situation can yell “Fake. That ain’t me. It’s a computerized lie.”
AI technology has given all those dishonorable politicians, rogue cops, skulking criminals, despicable racists and cheating husbands caught on tape an all-purpose alibi.
In September, a University of South Florida/Florida Atlantic University poll found that while Floridians were evenly split on whether AI will improve our lives, 75% were worried that AI posed a risk to human safety and 54% were worried that the technology would eliminate their jobs.
Next time, the pollsters should ask if Floridians were worried about AI corrupting their elections.
Legislation wending through the Florida House and Senate would, at least, require a disclaimer with any political advertisement generated with AI. Violators, if they can be traced, would face a first-degree misdemeanor charge.
That seems a meager solution to a crisis created by the most disruptive new technology since the Internet. Not that legislators or anyone else knows where generative AI is taking us.
Someone ought to ask ChatGPT. Remember to ask nicely, because there’s no telling what incriminating scenario awaits your avatar.