The Hamilton Spectator

Digital Deception, Easier for Anyone

A.I. has brought powerful new tools for video manipulation.

- By STUART A. THOMPSON

It would not be completely out of character for Joe Rogan, the comedian turned podcaster, to endorse a “libido-boosting” coffee brand for men.

But when a video circulating on TikTok recently showed Mr. Rogan and his guest, Andrew Huberman, promoting the coffee, some observant viewers were shocked — including Dr. Huberman, a neuroscientist.

“Yep that’s fake,” Dr. Huberman wrote on Twitter after seeing the ad, in which he appears to praise the coffee’s testosterone-boosting potential, even though he never did.

The ad was one of a growing number of fake videos on social media made with technology powered by artificial intelligence. Experts said Mr. Rogan’s voice appeared to have been synthesized using A.I. tools. Dr. Huberman’s comments were taken from an unrelated interview.

Making realistic fake videos, often called deepfakes, once required elaborate software to put one person’s face onto another’s. But now, many of the tools to create them are available to everyday consumers — even on smartphone apps, and often for little to no money.

The new altered videos — mostly, so far, the work of meme-makers and marketers — have gone viral on social media sites. They work by cloning celebrity voices, altering mouth movements to match alternative audio and writing persuasive dialogue.

The videos, and the accessible technology behind them, have some A.I. researchers fretting about their dangers, and have raised concerns over whether social media companies are prepared to moderate the growing digital fakery.

Disinformation watchdogs are also preparing for a wave of digital fakes that could deceive viewers or make it harder to know what is true online.

“What’s different is that everybody can do it now,” said Britt Paris, an assistant professor of information science at Rutgers University in New Jersey. “It’s not just people with sophisticated computational technology and fairly sophisticated computational know-how. Instead, it’s a free app.”

Reams of manipulated content have circulated on TikTok and elsewhere for years, typically using tricks like careful editing or the swapping of one audio clip for another.

Graphika, a research firm that studies disinformation, spotted deepfakes of fictional news anchors that pro-China bot accounts distributed late last year, in the first known example of the technology’s being used for state-aligned influence campaigns. But several new tools offer similar technology to everyday internet users, giving comedians and partisans the chance to make their own convincing spoofs.

Last month, a fake video circulated showing President Joseph R. Biden Jr. declaring a U.S. draft for the war between Russia and Ukraine. The video was produced by the team behind “Human Events Daily,” a podcast run by Jack Posobiec, a right-wing influencer known for spreading conspiracy theories.

In a segment explaining the video, Mr. Posobiec said his team had created it using A.I. technology. A tweet about the video from The Patriot Oasis, a conservative account, used a breaking news label without indicating the video was fake. The tweet was viewed more than eight million times.

Many of the video clips featuring synthesized voices appeared to use technology from ElevenLabs, an American start-up. In November, the company debuted a speech-cloning tool. It attracted attention last month after users of 4chan, a message board known for racist content, used the tool to create a recording of an anti-Semitic text in a voice that mimicked the actor Emma Watson. ElevenLabs said on Twitter that it would introduce new safeguards and provide a new A.I.-detection tool. But 4chan users said they would create their own version of the technology, posting demos that sound similar to audio produced by ElevenLabs.

A viral fake video posted on Twitter by Elon Musk, the site’s owner, showed him having a profanity-laced conversation with Mr. Rogan and Jordan Peterson, a Canadian men’s rights activist. In another on YouTube, Mr. Rogan appeared to interview Prime Minister Justin Trudeau of Canada.

“The production of such fakes should be a crime with a mandatory ten-year sentence,” Mr. Peterson said in a tweet about fake videos featuring his voice. “This tech is dangerous beyond belief.”

A spokeswoman for YouTube said the video of Mr. Rogan and Mr. Trudeau did not violate the platform’s policies because it “provides sufficient context.” (The creator had described it as a “fake video.”) The company said its misinformation policies banned content that was doctored in a misleading way, which is similar to other social media companies’ policies.

Regulators have been slow to respond. One American law from 2019 required government agencies to notify Congress if deepfakes targeted elections in the United States.

“We cannot wait for two years until laws are passed,” said Ravit Dotan, a researcher who runs the Collaborative A.I. Responsibility Lab at the University of Pittsburgh in Pennsylvania. “By then, the damage could be too much. We have an election coming up here in the U.S. It’s going to be an issue.”

Apps that can put new words in a celebrity’s mouth.

AARON FERNANDEZ
