Generative AI looks set to give advertising a credibility crisis
Deepfakes cloning real people are the latest in identity theft for ads
Advertising has always walked a thin line between embellishment and fabrication. In the new age of generative artificial intelligence, the latter is becoming easier. Making an online ad no longer requires the careful staging of well-lit photographs, because images can now be generated and enhanced in fantastical ways. Consumers need to sharpen their wits as we move from unnaturally juicy burgers to depictions of people and food that aren't physically plausible. An example is the bizarre pasta concoction that Instacart, a US-based grocery-delivery service, used in a recent marketing campaign.
Instacart has now deleted the Frankenstein’s monster of food and recipes that don’t (or probably shouldn’t) exist, which included fare like “watermelon popsicles with chocolate chips.” It appears to have been conjured with new image-generation tools. But it was not alone. Restaurants that sell food exclusively through delivery apps like DoorDash and Grubhub have also used images of unidentifiable breaded objects on their pasta, according to 404 Media.
Topping them was a recent Willy Wonka exhibition in Glasgow, Scotland, whose AI-generated posters suggested that ticket holders would stroll through a vivid world of ceiling-high lollipops and chocolate bars. They instead entered a bleak, grey warehouse scattered with some cheap props.
Generative AI has allowed for even more sinister marketing, something Olga Loiek found out the hard way last December. The 20-year-old student was dabbling in the art of being a YouTube influencer when she discovered dozens of video advertisements of her hawking candy on Chinese social media sites. Loiek doesn’t speak the language but her unauthorized likeness did.
A raft of other influencers and celebrities have been cloned to endorse everything from language apps to self-help courses, all without their permission. But it's surprising that Loiek, of all people, was picked to front a promotion. She was a relative greenhorn on YouTube, having posted only eight videos in the month before the deepfaked videos started cropping up. Loiek thinks her cloners might have been drawn to her "Slavic" looks to appeal to Chinese consumers who support Russia. "This audience might like my avatar… and in the end they're more likely to buy the product," she says. The deepfakes, which she says numbered in the hundreds, found their way to the Chinese Instagram-style platform Xiaohongshu and the video-sharing site BiliBili.
Loiek’s efforts to report the videos to both companies went nowhere. Scroll through Xiaohongshu long enough and you’ll find many other videos of suspiciously artificial influencer promotions. And the issue isn’t limited to Chinese apps. Last year, TikTok hosted an ad in which podcaster Joe Rogan and Stanford University neuroscientist Andrew Huberman were cloned to sell supplements for men.
History is littered with innovations that were exploited by unscrupulous marketers. The telephone opened the floodgates to robocalls, and e-mail to spam. Generative AI seems to have opened the door to a new era of fantasy typified by alien-looking shellfish.
It is bad enough for people like Loiek to have their identities stolen and publicized without permission. Now low-level fakery, like the inauthentic food, poses a new challenge for consumers.
One way to address the problem is to become more sceptical about ads on web-based platforms. Social media networks like TikTok and Instagram will need to improve their methods of detection, and regulators should step in.
The UK’s main advertising regulator banned two ads from L’Oreal in 2011 over complaints that it had used “excessive airbrushing” on its models. But that was the era of Photoshop. Now the Advertising Standards Authority (ASA) is carefully reviewing the use of generative AI, a spokesman for the regulator tells me, which could lead to new guidelines for advertisers this year. The technology shouldn’t be used, for example, to exaggerate a product’s efficacy, the spokesman said. The US Federal Trade Commission says it’s also “focusing intensely” on the problem.
Disclaimers could be a way to tackle the issue. In 2021, the Norwegian government amended its laws so that advertisers and influencers had to disclose their use of digitally altered images of people. The goal was to target unrealistic beauty standards, but similar forced disclaimers on AI-generated ads could increase public awareness of entirely conjured ‘photos’ or ‘videos.’
Of course, policymakers can’t do much to stop whoever cloned Olga Loiek. That seems to be the crux of the problem. “I will keep doing it,” she says of her nascent YouTube channel. “But I think there has to be some regulation in place. I just don’t know who to reach out to.”