USA TODAY US Edition

Deepfake ads getting easier for scammers

AI tools used for bogus celebrity endorsements

- Joedy McCreary

MrBeast became the biggest YouTuber in the world partly because of his elaborate giveaways.

He once handed out thousands of free Thanksgiving turkeys and left a waitress a $10,000 tip for two glasses of water. So when a video appeared to show him offering newly released iPhones to thousands of people for the low price of $2, it seemed like one of his typical stunts.

One problem: It wasn’t really him. That video, he said, was the work of someone who used artificial intelligence to replicate his likeness without his permission.

“Are social media platforms ready to handle the rise of AI deepfakes?” wrote MrBeast, whose real name is Jimmy Donaldson, in a post on X, formerly Twitter. “This is a serious problem.”

Welcome to the world of deepfake advertising, where the products might be real, but their endorsements are anything but. It’s where videos appearing to show celebrities plugging everything from dental plans to cookware are in fact just AI-generated fabrications that use technology to alter voices, appearances and actions.

Of course, fake celebrity endorsements have been around for about as long as celebrities themselves. What has changed is the quality of the tools used to create them. Instead of merely claiming that a celebrity endorses a product, scammers can now fabricate a video that appears to prove it, bilking unsuspecting consumers.

With a few clicks and a little bit of know-how, a savvy scammer can generate audio, video and still images that are increasingly difficult to identify as fabrications – even if deepfake advertising is still in its relative infancy.

“It’s not huge as of yet, but I think there’s still a lot of potential for it to become a lot bigger because of the technology, which is getting better and better,” said Colin Campbell, an associate professor of marketing at the University of San Diego who has published research about AI-generated ads.

Tom Hanks, Gayle King among celebrities targeted in AI scams

There is no shortage of nefarious uses for AI technology.

An artificially generated robocall used President Joe Biden’s voice to urge voters in New Hampshire to sit out the primary election in that state. And fabricated sexually explicit images of pop star Taylor Swift circulated online last month, leading to increased calls for regulation.

On Friday, a host of major technology companies signed a pact to work to prevent AI tools from being used to disrupt elections.

But the technology is also being used to reach more directly into people’s pocketbooks with fabricated product endorsements.

“It places the burden on people who are bombarded with information to then be the arbiters of … protecting their financial selves, on top of everything else,” said Britt Paris, an assistant professor at Rutgers University who studies AI-generated content. “The people that make these technologies available, the people that are really profiting off of deepfake technologies … they don’t really care about everyday people. They care about getting scale and getting profit as soon as they can.”


Actor Tom Hanks and broadcaster Gayle King are among those who have said their voices and images were altered without their consent and attached to unauthorized giveaways, promotions and endorsements.

“We’re at a new crossroads here, a new nexus of what types of things are possible in terms of using someone’s likeness,” Paris said.

Similar endorsement claims have been debunked by USA TODAY, including those asserting Kelly Clarkson endorsed weight-loss keto gummies and an Indian billionaire promoted a trading program. The video appearing to show Clarkson was viewed more than 48,000 times.

Yet they keep popping up, in part because they’re so easy to create.

A USA TODAY search of Meta’s ad library revealed multiple videos that appeared to be AI-generated fabrications. They claim to show Elon Musk giving away gold bars and Jennifer Aniston and Jennifer Lopez offering liquid Botox kits.

“Any time that someone can not pay an actor or a celebrity to appear in their advertisements, they’ll probably do it, right?” Paris said. “These smaller scammer companies ... will definitely use the tools at their disposal to eke out whatever money they can from people.”

‘Software’s pretty easy to use’

Creators of those fake endorsements typically follow a straightforward process, experts say.

They start with a text-to-speech program that generates audio from a written script. Other programs can use a small sample of authentic audio from a given celebrity to recreate the voice, sometimes with as little as a minute of real audio, said Siwei Lyu, a digital media forensics expert at the University at Buffalo.

Other programs generate lip movements that match the spoken words in the audio track; that synthetic footage is then overlaid onto the person’s mouth, Lyu said.

“All the software’s pretty easy to use,” Lyu said.

Those videos are also easy to produce in bulk and tailor to specific audiences, leading to another problem: Videos that don’t spread widely can be tougher to find – and tougher to police. For example, there were 63 versions of the purported Lopez and Aniston ad in the Meta ad library. Many were active for only a day or two, accumulating a few hundred views before they were deleted and replaced by new ones.

“In most cases, they don’t go everywhere,” Campbell said. “So you can just target certain groups of consumers, and only those people will see them. So it becomes harder to detect these, especially if they’re targeting people who are less educated or just less aware of what might actually be happening.”

For the moment, it’s still possible to spot clues with the naked eye that those AI-generated videos are not real. Teeth and tongues are difficult to artificially recreate, Lyu said. Sometimes, a fake video is too perfect and leaves out pauses, breaths or other imperfections of human speech.

But the technology has come so far in such a short period of time that a fabricated video may be indistinguishable from an authentic one in as little as “a couple of years,” Campbell said.

“The video tools are not as good as the image-based stuff,” he said. “But video is essentially just a bunch of images put together, right? So it’s just a matter of processing power and getting more experience with it.”

Protections: Think critically, use online AI detection tools

Social media users have a few tactics at their disposal to protect themselves. Some were identified by the Better Business Bureau in a warning issued in April 2023.

The main one: Think critically. “Tom Hanks, it would seem sort of strange that he might be selling dental insurance,” Paris said. “If it doesn’t pass the smell test, based on what you know about that particular celebrity, it’s probably not worth getting too worked up about, and certainly not sharing. At least, not believing it until you go in and do a little legwork, background research.”

Companies typically don’t limit their legitimate ads to a single social media platform. A real video posted to Facebook, for example, likely would show up on Instagram, TikTok and YouTube, too.

There are also several online detectors capable of determining, with varying degrees of accuracy, whether an image is authentic or AI-generated.

Social media users not yet familiar with those tools and tips still have some time – but maybe not a lot of it – to get themselves up to speed.

“The fake commercials are, I’ll say, a threat,” Lyu said. “But not truly a danger for everyone – yet.”
