Business Day

Digital threats to democracy will stress test AI fears and futures

• New technology tops the list for mind-bending ways it can warp reality and seed propaganda

- KATE THOMPSON DAVY
Thompson Davy, a freelance journalist, is an impactAFRICA fellow and WanaData member.

The year 2024 has been dubbed a “super election year” (Statista), “the biggest election year in history” (The Economist) and “one of the most consequential election years” (MIT Technology Review). Our own such exercise in democracy is on the cards, and the US is gearing up for what looks to be a bitter contest. The UK is expected to announce a 2024 date, while hugely populous nations such as Indonesia, Mexico and India also head to the polls.

Depending on the source you’re using and how they choose to count elections — is a single-day vote to elect two branches of government one election or two? — about 2-billion voters will be making their X’s in about 60-70 elections this year. These national and regional polls will directly determine the immediate political future of more than 4-billion people — about half the global population.

And, in the face of conflict, outright war, intimidation and electoral fraud in this immense election year, experts polled by the World Economic Forum (WEF) recently ranked artificial intelligence (AI)-enabled fake news as the single biggest risk in 2024. “The World Economic Forum’s Global Risks Report 2024 ranked AI-derived misinformation and disinformation ahead of climate change, war and economic weakness,” CNBC reported.

It is not just the WEF wonks losing sleep over it. This week MIT Technology Review gave a rundown of what it believes to be the biggest technological threats to 2024’s bumper crop of elections. “Perhaps unsurprisingly, generative AI takes the top spot on our list,” it wrote, adding that “without a doubt, AI that generates text or images will turbocharge political misinformation.”

The publication points to Venezuela, where “state media outlets” recently “spread progovernment messages through AI-generated videos of news anchors from a nonexistent international English-language channel”.

It’s not just the democratically challenged fighting the scourge of deepfakes. We’ve already seen faked footage of US President Joe Biden seeming to make transphobic statements doing the rounds. No side is immune, as the MIT coverage clarifies; the faked images of Donald Trump hugging Anthony Fauci underline the point.

The same tools that will make campaigning more efficient — such as AI-powered robocalls to reach constituents — can be used to manipulate voters into believing candidates have made off-colour and off-brand comments, or worse.

MIT Tech Review further mentions the potential promise and threat of other technology-led tactics, such as the deployment of political microinfluencers and the effect of digital censorship. It calls the latter a “critical human rights issue and a core weapon in the wars of the future”, but there’s no doubt AI is top of mind for the mind-bending ways it can warp reality and seed propaganda.

And that’s why 2024 is one of the most consequential years for AI regulation too, not just the ballot box. It is the real-world stress test of all our AI anxieties.

Fellow columnist Johan Steyn covered some of this in his Business Day column last week, writing that “the danger lies not just in the consumption of false information but in the erosion of trust in legitimate sources of information. When people are constantly bombarded with AI-generated false content, scepticism grows, and the belief in factual, verified information diminishes.”

Though I largely agree with his concerns and warnings, there is an area where we depart, specifically in the implied causality and the solution. In terms of the former, I’m not convinced that fake news and misinformation have eroded our trust in legitimate sources more than our eroded trust has created a vacuum for misinformation to fill. At the very least, these are probably concurrent and overlapping issues, rather than linear.

I also worry that the news media has shot itself in the foot by competing with social media on speed rather than accuracy, and reporting the utterances of every pundit and celeb as though they carried any weight. Consider the difference between “Audits show less anti-Semitism on X than other apps, Musk says” (a headline on Reuters this week) and “Elon Musk claims X has less anti-Semitic content than peers” (CNN’s version). I know the origin of the stylistic quirks deployed by news media; I just wonder if they serve us anymore.

My main beef with Steyn’s column — if I can call it “beef”, because we’re actually largely aligned — is the idea that scepticism is anything other than our sole weapon in this war. It’s just a small, splintering shield, barely any protection against the disinformation bombardment, but with the tools deployed against us shifting faster than a coronavirus, we don’t have much else at hand than scepticism.

The evolution I’m talking about is astonishing and exponential. Outside my writing work I also do some public speaking. Last week I was updating my go-to presentation on generative AI before one such talk when I realised my slide on tips to spot AI-generated images — the kind we see cropping up in manipulative fake news — had dated itself out of usefulness in a handful of months.

In the face of a fearmongering news story that turns on your own fears and prejudices, a story with video and image “proof”, and presented with all the hallmarks of legitimate media, the only thing left — until the regulation cavalry and AI-detection technology catch up, if they even can — is a willingness to pause and question what we see in front of us.

The regulators are chasing the noble outcome of better oversight, stronger punishment for poor content moderation and irresponsible technology use. These are necessary — critical, even — but for now, while we wait for the wheels of legislation to turn, we can and must deploy that rare resource of critical thinking.

Bewildered voters: The biggest threat to elections this year is AI that generates text or images that turbocharge political misinformation, experts warn. /Reuters
