Sun Sentinel Broward Edition

AI gives sleazy pols an alibi — that’s not me, it’s my avatar

Fred Grimm, a longtime resident of Fort Lauderdale, has worked as a journalist in South Florida since 1976. Reach him by email at leogrimm@gmail.com or on Twitter @grimm_fred.

Ron DeSantis did more than just lose to Donald Trump. His unfortunate campaign created a deepfake, AI-generated counterfeit Trump that the real guy has turned to his lasting advantage.

Thanks Ron. Politics will never be the same.

Never Back Down (the DeSantis super PAC, lately minus the never) posted two mendacious videos on social media last summer featuring an avatar who looked and sounded like the actual Trump. Except this version hugged Anthony Fauci and dissed Kim Reynolds, the pro-DeSantis governor of Iowa. Both tableaus were phony.

Obviously, the fictional antics of Deep Fake Donald didn’t faze the 56,260 Iowans who gave him 51% of the caucus vote. Supporters of the corporeal Trump have come to expect a gush of contradictory bombast from their hero. It’s all part of the show. (I doubt the likes of ChatGPT have enough computing power to out-demagogue the genuine article.)

The loss sent DeSantis slouching home, but the unintended consequences of his deception are still around. Instead of crippling his candidacy, the deepfakes have had the perverse effect of immunizing Trump against future recorded revelations. Doesn’t matter if the video or audio clips are authentic, Trump can claim they’re high-tech fakery.

On Truth Social last month, Trump responded to an attack ad featuring a compilation of his verbal gaffes. “The perverts and losers at the failed and once disbanded Lincoln Project, and others, are using AI (artificial Intelligence) in their Fake television commercials in order to make me look as bad and pathetic as Crooked Joe Biden, not an easy thing to do.”

Except, the anti-Trump Republicans at the Lincoln Project created their ad with authentic recordings of Trump mangling words like a confused old man. According to the Washington Post, “The ad featured incidents during Trump’s presidency that were widely covered at the time and witnessed in real life by many independent observers.”

A dizzying cascade of advances in AI technology is revolutionizing science, medicine, engineering, policing, writing, teaching, art, climatology, business practices, military defense, homework, dissertations and (thankfully) wedding toasts.

In May, an executive with the business services giant PricewaterhouseCoopers explained the company’s billion-dollar investment in AI technology: “We are at a tipping point in business and society where AI will revolutionize how we work, live and interact at scale.”

But there’s a shady side to generative AI, which along with the fabulous has enabled an onslaught of unseemly and illicit applications. Suddenly, we’re targeted by phone scams with perfect imitations of relatives’ voices. AI has given bad actors a lightning-fast mechanism for stealing identities, hacking retirement accounts, plagiarizing art, disseminating disinformation and turning innocents into porn actors. AI transforms political operatives into dirty tricksters. (Just last weekend, a computer-generated, sounds-just-like Joe Biden robocall urged New Hampshire Democrats not to vote in last Tuesday’s primary — not at all what the real Joe wanted.)

On January 19, the Florida Bar issued ethical guidelines addressing the “escalating threat” of AI intrusions into lawyers’ confidential client data. The Bar was also concerned about lawyers depending on AI to compose legal briefs. “Every place I go, a judge comes up to me and says a lawyer filed a brief using ChatGPT and there were errors in it,” Bar President Scott Westheimer told the Florida Bar News.

And now, with the most consequential election in memory approaching, along comes this superpowered technology that can fabricate videos of incriminating incidents that never happened.

Therein lies a 21st-century conundrum: If AI can insert anyone’s voice and image into a counterfeit scenario, then anyone accurately recorded in a real-life situation can yell, “Fake. That ain’t me. It’s a computerized lie.”

AI technology has given all those dishonorable politicians, rogue cops, skulking criminals, despicable racists and cheating husbands caught on tape an all-purpose alibi.

In September, a University of South Florida/Florida Atlantic University poll found that while Floridians were evenly split on whether AI will improve our lives, 75% were worried that AI posed a risk to human safety and 54% were worried that the technology would eliminate their jobs.

Next time, the pollsters should ask if Floridians were worried about AI corrupting their elections.

Legislation wending through the Florida House and Senate would, at least, require a disclaimer on any political advertisement generated with AI. Violators, if they can be traced, would face a first-degree misdemeanor charge.

That seems a meager solution to a crisis created by the most disruptive new technology since the Internet. Not that legislator­s or anyone else knows where generative AI is taking us.

Someone ought to ask ChatGPT. Remember to ask nicely, because there’s no telling what incriminating scenario awaits your avatar.
