Santa Fe New Mexican

AI hustlers stole women’s faces to put in ads — and the law can’t help

By Nitasha Tiku and Pranshu Verma

Michel Janse was on her honeymoon when she found out she had been cloned.

The 27-year-old content creator was with her husband in a rented cabin in snowy Maine when messages from her followers began trickling in, warning that a YouTube commercial was using her likeness to promote erectile dysfunction supplements.

The commercial showed Janse — a Christian social media influencer who posts about travel, home decor and wedding planning — in her real bedroom, wearing her real clothes but describing a nonexisten­t partner with sexual health problems.

“Michael spent years having a lot of difficulty maintaining an erection and having a very small member,” her doppelgänger says in the ad.

Scammers appeared to have stolen and manipulated her most popular video, likely using a new wave of artificial intelligence tools that make it easier to create realistic deepfakes, a catchall term for media altered or created with AI.

With just a few seconds of footage, scammers can now combine video and audio using tools from companies like HeyGen and Eleven Labs to generate a synthetic version of a real person’s voice, swap out the sound on an existing video and animate the speaker’s lips — making the doctored result more believable.

Because it’s simpler and cheaper to base fake videos on real content, bad actors are scooping up videos on social media that match the demographic of a sales pitch, leading to what experts predict will be an explosion of ads made with stolen identities.

Celebrities like Taylor Swift, Kelly Clarkson, Tom Hanks and YouTube star MrBeast have had their likenesses used in the past six months to hawk deceptive diet supplements, dental plan promotions and iPhone giveaways. But as these tools proliferate, those with a more modest social media presence are facing a similar type of identity theft — finding their faces and words twisted by AI to push often offensive products and ideas.

Online criminals or state-sponsored disinformation programs are essentially “running a small business, where there’s a cost for each attack,” said Lucas Hansen, co-founder of the nonprofit CivAI, which raises awareness about the risks of AI. But given cheap promotional tools, “the volume is going to drastically increase.”

The technology requires just a small sample to work, said Ben Colman, CEO and co-founder of Reality Defender, which helps companies and governments detect deepfakes.

“If audio, video, or images exist publicly — even if just for a handful of seconds — it can be easily cloned, altered or outright fabricated to make it appear as if something entirely unique happened,” Colman wrote by text.

The videos are difficult to search for and can spread quickly — meaning victims are often unaware their likenesses are being used.

By the time Olga Loiek, a 20-year-old student at the University of Pennsylvania, discovered she had been cloned for an AI video, nearly 5,000 videos had spread across Chinese social media sites. For some of the videos, scammers used an AI-cloning tool from the company HeyGen, according to a recording of direct messages shared by Loiek with The Washington Post.

In December, Loiek saw a video featuring a girl who looked and sounded exactly like her. It was posted on Little Red Book, China’s version of Instagram, and the clone was speaking Mandarin, a language Loiek does not know.

In one video, Loiek, who was born and raised in Ukraine, saw her clone — named Natasha — stationed in front of an image of the Kremlin, saying “Russia was the best country in the world” and praising President Vladimir Putin. “I felt extremely violated,” Loiek said in an interview. “These are the things that I would obviously never do in my life.”
