The Jerusalem Post

Will AI deepfakes and robocalls upset the US presidential election in November?

By JEFFREY FLEISHMAN

In the analog days of the 1970s, long before hackers, trolls, and edgelords, an audiocassette company came up with an advertising slogan that posed a trick question: “Is it live or is it Memorex?” The message toyed with reality, suggesting there was no difference in sound quality between a live performance and music recorded on magnetic tape.

Fast forward to our age of metaverse lies and deceptions, and one might ask similar questions about what’s real and what’s not: Is President Joe Biden on a robocall telling Democrats not to vote? Is Donald Trump chumming it up with Black men on a porch? Is the US going to war with Russia? Fact and fiction appear interchangeable in an election year when AI-generated content is targeting voters in ways that were once unimaginable.

American politics is accustomed to chicanery – opponents of Thomas Jefferson warned the public in 1800 that he would burn their Bibles if elected – but artificial intelligence is bending reality into a video game world of avatars and deepfakes designed to sow confusion and chaos. The ability of AI programs to produce and scale disinformation with swiftness and breadth is the weapon of lone wolf provocateurs and intelligence agencies in Russia, China, and North Korea.

“Truth itself will be hard to decipher. Powerful, easy-to-access new tools will be available to candidates, conspiracy theorists, foreign states, and online trolls who want to deceive voters and undermine trust in our elections,” said Drew Liebert, director of the California Initiative for Technology and Democracy, or CITED, which seeks legislation to limit disinformation.

“Imagine a fake robocall [from] Gov. Newsom goes out to millions of Californians on the eve of election day telling them that their voting location has changed.”

The threat comes as a polarized electorate is still feeling the aftereffects of a pandemic that turned many Americans inward and increased reliance on the Internet. The peddling of disinformation has accelerated as mistrust of institutions grows and truths are distorted by campaigns and social media that thrive on conflict.

Americans are both susceptible to and suspicious of AI, not only its potential to exploit divisive issues such as race and immigration, but also its science fiction-like wizardry to steal jobs and reorder the way we live.

Russia orchestrated a wave of hacking and deceptions in attempts to upset the US election in 2016. The bots of disinformation were a force in January when China unsuccessfully meddled in Taiwan’s election by creating fake news anchors. A recent threat analysis by Microsoft said a network of Chinese-sponsored operatives, known as Spamouflage, is using AI content and social media accounts to “gather intelligence and precision on key voting demographics ahead of the US presidential election.”

One Chinese disinformation ploy, according to the Microsoft report, claimed the US government deliberately set the wildfires in Maui in 2023 to “test a military grade ‘weather weapon.’”

A new survey by the Polarization Research Lab pointed to the fears Americans have over artificial intelligence: 65% worry about personal privacy violations, 49.8% expect AI to negatively affect the safety of elections, and 40% believe AI might harm national security. A poll in November by UC Berkeley found that 84% of California voters were concerned about the dangers of misinformation and AI deepfakes during the 2024 campaign.

More than 100 bills have been introduced in at least 39 states to limit and regulate AI-generated materials, according to the Voting Rights Lab, a nonpartisan organization that tracks election-related legislation. At least four measures are being proposed in California, including bills by Assembly members Buffy Wicks (D-Oakland) and Marc Berman (D-Menlo Park) that would require AI companies and social media platforms to embed watermarks and other digital provenance data into AI-generated content.

“This is a defining moment. As lawmakers we need to understand and protect the public,” said Adam Neylon, a Republican state lawmaker in Wisconsin, which passed a bipartisan bill in February to fine political groups and candidates $1,000 for not adding disclaimers to AI campaign ads. “So many people are distrustful of institutions. That has eroded along with the fragmentation of the media and social media. You put AI into that mix and that could be a real problem.”

SINCE CHATGPT was launched in 2022, AI has been met with fascination over its power to re-imagine how surgeries are done, music is made, armies are deployed and planes are flown. Its scarier ability to create mischief and fake imagery can be innocuous – Pope Francis wearing a designer puffer coat at the Vatican – and criminal.

Photographs of children have been manipulated into pornography. Experts warn of driverless cars being turned into weapons, increasing cyberattacks on power grids and financial institutions, and the threat of nuclear catastrophe.

The sophistication of political deception coincides with the mistrust many Americans – those who believe conspiracy theorists such as Rep. Marjorie Taylor Greene (R-Ga.) – hold in the integrity of elections. The January 6, 2021, insurrection at the Capitol was the result of a misinformation campaign that rallied radicals online and threatened the nation’s democracy over false claims that the 2020 election was stolen from Trump.

Those fantasies have intensified among many of the former president’s followers and are fertile ground for AI subterfuge.

A recently released Global Risks Report by the World Economic Forum warned that disinformation that undermines newly elected governments can result in unrest such as violent protests, hate crimes, civil confrontation, and terrorism.

But AI-generated content so far has not disrupted this year’s elections worldwide, including in Pakistan and Bangladesh. Political lies are competing for attention in a much larger thrum of social media noise that encompasses everything from Beyoncé’s latest album to the strange things cats do.

Deepfakes and other deceptions, including manipulated images of Trump serving breakfast at a Waffle House and Elon Musk hawking cryptocurrency, are quickly unmasked and discredited. And disinformation may be less likely to sway voters in the US, where years of partisan politics have hardened sentiments and loyalties.

“An astonishingly few people are undecided in who they support,” said Justin Levitt, a constitutional law scholar and professor at Loyola Law School. He added that the isolation of the pandemic, when many turned inward into virtual worlds, is ebbing as most of the population has returned to pre-COVID lives.

“We do have agency in our relationships,” he said, which lessens the likelihood that large-scale disinformation campaigns will succeed. “Our connections to one another will reduce the impact.”

The nonprofit TrueMedia.org offers tools for journalists and others working to identify AI-generated lies. Its website lists a number of deepfakes, including Trump being arrested by a swarm of New York City police officers, a photograph of President Biden dressed in army fatigues that was posted during last year’s Hamas attack on Israel, and a video of Manhattan District Attorney Alvin L. Bragg resigning after clearing Trump of criminal charges in the current hush-money case.

NewsGuard also tracks and uncovers AI lies, including recent bot fakes of Hollywood stars supporting Russian propaganda against Ukraine. In one video, Adam Sandler, whose voice is faked and dubbed in French, tells Brad Pitt that Ukrainian President Volodymyr Zelenskyy “cooperates with Nazis.” The video was reposted 600 times on the social platform X.

THE FEDERAL Communications Commission recently outlawed AI-generated robocalls, and Congress is pressing tech and social media companies to stem the tide of deception.

In February, Meta, Google, TikTok, OpenAI, and other corporations pledged to take “reasonable precautions” by attaching disclaimers and labels to AI-generated political content. The statement was not as strong or far-reaching as some election watchdogs had hoped, but it was supported by political leaders in the US and Europe in a year when voters in at least 50 countries will go to the polls, including those in India, El Salvador, and Mexico.

“I’m pretty negative about social media companies. They are intentionally not doing anything to stop it,” said Hafiz Malik, professor of electrical and computer engineering at the University of Michigan-Dearborn. “I cannot believe that multi-billion and trillion-dollar companies are unable to solve this problem. They are not doing it. Their business model is about more shares, more clicks, more money.”

Malik has been working on detecting deepfakes for years. He often gets calls from fact-checkers to analyze video and audio content. What’s striking, he said, is the swift evolution of AI programs and tools that have democratized disinformation. Until a few years ago, he said, only state-sponsored enterprises could generate such content. Attackers today are much more sophisticated and aware. They are adding noise or distortion to content to make deepfakes harder to detect on platforms such as X and Facebook.

But artificial intelligence has limitations in replicating candidates. The technology, he said, cannot exactly capture a person’s speech patterns, intonations, facial tics, and emotions. “They can come off as flat and monotone,” added Malik, who has examined political content from the US, Nigeria, South Africa, and Pakistan, where supporters of jailed opposition leader Imran Khan cloned his voice and created an avatar for virtual political rallies.

AI-generated content will “leave some trace,” said Malik, suggesting, though, that in the future the technology may more precisely mimic individuals.

“Things that were impossible a few years back are possible now,” he said. “The scale of disinformation is unimaginable. The cost of production and dissemination is minimal. It doesn’t take too much know-how. Then with a click of a button you can spread it to a level of virality that it can go at its own pace. You can micro-target.”

Technology and social media platforms have collected data on tens of millions of Americans. “People know your preferences down to your footwear,” said former US attorney Barbara McQuade, author of Attack from Within: How Disinformation Is Sabotaging America. Such personal details allow trolls, hackers, and others producing AI-generated disinformation to focus on specific groups or strategic voting districts in swing states in the hours immediately before polling begins.

“That’s where the most serious damage can be done,” McQuade said. The fake Biden robocall telling people to not vote in New Hampshire, she said, “was inconsequential because it was an uncontested primary. But in November, if even a few people heard and believed it, that could make the difference in the outcome of an election.

“Or say you get an AI-generated message or text that looks like it’s from the secretary of state or a county clerk that says the power’s out in the polling place where you vote so the election’s been moved to Wednesday.”

The new AI tools, she said, “are emboldening people because the risk of getting caught is slight and you can have a real impact on an election.”

In 2022, Russia used a deepfake in a ploy to end its war with Ukraine. Hackers uploaded an AI-manipulated video showing Ukrainian President Volodymyr Zelensky ordering his forces to surrender.

That same year, Cara Hunter was running for a legislative seat in Northern Ireland when a sexually explicit video purporting to show her went viral. The AI-generated clip did not cost her the election – she won by a narrow margin – but its consequences were profound.

“When I say this has been the most horrific and stressful time of my entire life I am not exaggerating,” she was quoted as saying in the Belfast Telegraph. “Can you imagine waking up every day for the past 20 days and your phone constantly dinging with messages?

“Even going into the shop,” she added, “I can see people are awkward with me and it just calls into question your integrity, your reputation and your morals.”

(Los Angeles Times/TNS)

(Mandel Ngan/AFP/Getty Images/TNS) US PRESIDENT Joe Biden with Ukrainian President Volodymyr Zelensky in the Oval Office last year. AI can call into question what’s real and what’s not.
(Bing Guan/Reuters) A POLLING station in Racine, Wisconsin, on November 3, 2020.
