Business Day

Princesses to politics: AI fakes are everywhere

• Anyone whose image is sufficiently out there is now a target

- Kate Thompson Davy, a freelance journalist, is an impactAFRICA fellow and WanaData member.

A funny thing happened on my TikTok “For You” page this week: that legendary social media algorithm decided it would offer up a torrent of conspiracy theories centred on the British monarchy.

I am neither a royalist nor a tinfoil-hat-wearer, so this content was a surprising twist in my late-night doom scrolling. But said funny thing happens to encapsulate a far bigger tech trend, one that approaches with the broad reach and thundering power of an avalanche.

Yes, generative artificial intelligence (AI) fakery is everywhere, and its use in disinformation may just be the defining theme of 2024. Worse yet, despite being a well-documented threat — a predictable one, even — no one seems to have any real mitigations or workable solutions for it: not the technologists, not the activists, and not the media caught in the tussle.

Back briefly to the photo at hand: we’ve had Photoshop-type tools to nip and tuck errant pixels for a while, but today’s tech is next level. We are not just editing ourselves better bone structure, or creating imaginary worlds and people to entertain us; we are firmly in the era of generating digital doppelgängers who can impersonate real people.

That — if I may be so bold — is far scarier than whether Princess Kate is recovering from surgery or sinisterly stashed out of sight like granny’s colonial ivory.

Let’s refocus, because this is neither the publication nor the column for that debate, and the princess is not the point, I must stress. Instead, I want to get into the suggestion that even the softest-touch “royal correspondents”, and the consumers of related media that pairs well with crumpets, must now grapple with whether the content before them is digitally altered (to misrepresent reality) or even wholly conjured by the new generation of genAI.

In the wake of this storm in a Ceylon teacup, at least five photo agencies — which supply and distribute photographic news content — have retracted the image in question. The Associated Press issued an alarming-sounding “kill notification” on Sunday, citing source manipulation, and others quickly followed.

If you couldn’t give a fig for the House of Windsor, spare a thought for the reputation of a beleaguered news industry. Is it always perfect? Of course not. In the UK, for example, the implications of widespread phone-tapping and other poor editorial choices are still being felt in 2024.

Closer to home, we’ve seen how whole media houses can be co-opted for political expediency. But it is pretty hard to serve your necessary fourth-estate function when Google has diverted your primary revenue stream and your ability to fathom fact from fiction is wholly undermined.

This couldn’t come at a worse time for contemporary democracies either, as half the world gears up to vote in various elections. And the examples are piling up: in news from the past fortnight we’ve seen AI used to create fakes of recurring-nightmare-nominee Donald Trump seemingly campaigning to and interacting with black people — a pivotal demographic largely expected to vote blue.

A recent Centre for Countering Digital Hate (CCDH) study found the incidence of election-related deepfakes on X is rising by 130% a month, and in an early March report the CCDH demonstrates how popular, publicly available tools can be used to generate fakes that tap a deep well of divisive partisan issues — fakes showing US President Joe Biden in hospital, say, or welcoming migrants at the Mexican border.

WORKAROUNDS


Callum Hood, head of research at the CCDH, told media that “free, easily jailbroken AI tools” are available, and that the policies meant to prevent users from generating misleading images in the most popular genAI tools can be overcome with simple workarounds in the prompts you feed them — such as using the term “former president” instead of naming Trump. Hood called it “a deepfake crisis”.

Big players such as Google, Meta, X, Microsoft (OpenAI’s biggest investor) and Amazon have signed the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a voluntary agreement announced at the recent Munich Security Conference. But frankly, if their own built-in prevention tactics fall over so easily, can we see the creators of such tools effectively collaborating and sharing vital intellectual property in the cause of limiting their negative impact?

The hope remains that the financial incentives swing sufficiently to support such projects. Perhaps that will happen when it is not politicians being spoofed but our CEOs and purse-string-holders.

Hong Kong police told public broadcaster RTHK this month that a financial worker at a big firm had authorised a payment of $25m to fraudsters after a deepfake of the company’s CFO was used to mislead him. Anyone whose image is sufficiently out there is now a target, and that includes our corporate executives as much as our public servants and those aspiring to office.

What can you trust if nearly everything is fakeable? Not your own ears and eyes; certainly not your WhatsApp groups. Today, it is excited whispers from TikTok’s content creators about publicly funded families. Tomorrow, it’s the lunar landing, an “undiscovered” image that proves who “really killed JFK” — and, inevitably, fraud and social engineering that has the same impact in the boardroom as it does in the voting booth.

Popular tools: The deepfake crisis couldn’t come at a worse time for democracies as half the world gears up to vote in elections this year. /123RF/juliarstudio
KATE THOMPSON DAVY
