Mint Hyderabad

Generative AI looks set to give advertising a credibility crisis

Deepfakes cloning real people are the latest form of identity theft in advertising

- PARMY OLSON is a Bloomberg Opinion columnist covering technology.

Advertising has always walked a thin line between embellishment and fabrication. In the new age of generative artificial intelligence, the latter is becoming easier. Making an online ad no longer requires careful staging of well-lit photographs because now they can be made and enhanced in fantastical ways. Consumers need to sharpen their wits as we move from unnaturally juicy burgers to depictions of people and food that aren’t physically plausible. An example is the bizarre pasta concoction that Instacart, a US-based grocery-delivery service, used in a recent marketing campaign.

Instacart has now deleted the Frankenstein’s monster of food and recipes that don’t (or probably shouldn’t) exist, which included fare like “watermelon popsicles with chocolate chips.” It appears to have been conjured with new image-generation tools. But it was not alone. Restaurants that sell food exclusively through delivery apps like DoorDash and Grubhub have also used images of unidentifiable breaded objects on their pasta, according to 404 Media.

Topping them was a recent Willy Wonka exhibition in Glasgow, Scotland, whose AI-generated posters suggested that ticket holders would stroll through a vivid world of ceiling-high lollipops and chocolate bars. They instead entered a bleak, grey warehouse scattered with some cheap props.

Generative AI has allowed for even more sinister marketing, something Olga Loiek found out the hard way last December. The 20-year-old student was dabbling in the art of being a YouTube influencer when she discovered dozens of video advertisements of her hawking candy on Chinese social media sites. Loiek doesn’t speak the language but her unauthorized likeness did.

A raft of other influencers and celebrities have been cloned to endorse everything from language apps to self-help courses, all without their permission. But it’s surprising that Loiek was picked to front a promotion too. She was a relative greenhorn on YouTube, having posted only eight videos in the month before the deepfaked videos started cropping up. Loiek thinks her cloners might have been drawn to her “Slavic” looks to appeal to Chinese consumers who support Russia. “This audience might like my avatar… and in the end they’re more likely to buy the product,” she says. The deepfakes, which she says were in the hundreds, found their way to the Chinese Instagram-style platform Xiaohongshu and the video-sharing site BiliBili.

Loiek’s efforts to report the videos to both companies went nowhere. Scroll through Xiaohongshu long enough and you’ll find many other videos of suspiciously artificial influencer promotions. And the issue isn’t limited to Chinese apps. Last year, TikTok hosted an ad in which podcaster Joe Rogan and Stanford University neuroscientist Andrew Huberman were cloned to sell supplements for men.

History is littered with innovations that were exploited by unscrupulous marketers. The telephone opened the floodgates to robocalls, and email to spam. Generative AI seems to have opened the door to a new era of fantasy typified by alien-looking shellfish.

It is bad enough for people like Loiek to have their identities stolen and publicized without permission. Now low-level fakery, like the inauthentic food, poses a new challenge for consumers.

One way to address the problem is to become more sceptical about ads on web-based platforms. Social media networks like TikTok and Instagram will need to improve their methods of detection, and regulators should step in.

The UK’s main advertising regulator banned two ads from L’Oréal in 2011 over complaints that it had used “excessive airbrushing” on its models. But that was the era of Photoshop. Now the Advertising Standards Authority (ASA) is carefully reviewing the use of generative AI, a spokesman for the regulator tells me, which could lead to new guidelines for advertisers this year. The technology shouldn’t be used, for example, to exaggerate a product’s efficacy, the spokesman said. The US Federal Trade Commission says it’s also “focusing intensely” on the problem.

Disclaimers could be a way to tackle the issue. In 2021, the Norwegian government amended its laws so that advertisers and influencers had to disclose their use of digitally altered images of people. The goal was to target unrealistic beauty standards, but similar forced disclaimers on AI-generated ads could increase public awareness of entirely conjured ‘photos’ or ‘videos.’

Of course, policymakers can’t do much to stop whoever cloned Olga Loiek. That seems to be the crux of the problem. “I will keep doing it,” she says of her nascent YouTube channel. “But I think there has to be some regulation in place. I just don’t know who to reach out to.”

[Photo: ISTOCKPHOTO] People may soon start rejecting all that they see as fake
