Digital threats to democracy will stress test AI fears and futures
• New technology tops the list for mind-bending ways it can warp reality and seed propaganda
The year 2024 has been dubbed a “super election year” (Statista), “the biggest election year in history” (The Economist) and “one of the most consequential election years” (MIT Technology Review). Our own such exercise in democracy is on the cards, and the US is gearing up for what looks to be a bitter contest. The UK is expected to announce a 2024 date, in addition to hugely populous nations such as Indonesia, Mexico and India.
Depending on the source you’re using and how they choose to count elections — is a single-day vote to elect two branches of government one election or two? — about 2-billion voters will be making their X’s in about 60 to 70 elections this year. These national and regional polls will directly determine the immediate political future of more than 4-billion people — about half the global population.
And, in the face of conflict, outright war, intimidation and electoral fraud in this immense election year, experts polled by the World Economic Forum (WEF) recently ranked artificial intelligence (AI)-enabled fake news as the single biggest risk in 2024. “The World Economic Forum’s Global Risks Report 2024 ranked AI-derived misinformation and disinformation ahead of climate change, war and economic weakness,” CNBC reported.
It is not just the WEF wonks losing sleep over it. This week MIT Technology Review gave a rundown of what it believes to be the biggest technological threats to 2024’s bumper crop of elections. “Perhaps unsurprisingly, generative AI takes the top spot on our list,” it wrote, adding that “without a doubt, AI that generates text or images will turbocharge political misinformation.”
The publication points to Venezuela, where “state media outlets” recently “spread pro-government messages through AI-generated videos of news anchors from a nonexistent international English-language channel”.
It’s not just the democratically challenged fighting the scourge of deepfakes. We’ve already seen faked footage of US President Joe Biden seeming to make transphobic statements doing the rounds. No side is immune, as the MIT coverage clarifies, as the example of faked images of Donald Trump hugging Anthony Fauci underlines.
The same tools that will make campaigning more efficient — such as AI-powered robocalls to reach constituents — can be used to manipulate voters into believing candidates have made off-colour and off-brand comments, or worse.
MIT Tech Review further mentions the potential promise and threat of other technology-led tactics, such as the deployment of political micro-influencers and the effect of digital censorship. It calls the latter a “critical human rights issue and a core weapon in the wars of the future”, but there’s no doubt AI is top of mind for the mind-bending ways it can warp reality and seed propaganda.
And that’s why 2024 is one of the most consequential years for AI regulation too, not just the ballot box. It is the real-world stress test of all our AI anxieties.
Fellow columnist Johan Steyn covered some of this in his Business Day column last week, writing that “the danger lies not just in the consumption of false information but in the erosion of trust in legitimate sources of information. When people are constantly bombarded with AI-generated false content, scepticism grows, and the belief in factual, verified information diminishes.”
Though I largely agree with his concerns and warnings, there is an area where we depart, specifically in the implied causality and the solution. In terms of the former, I’m not convinced that fake news and misinformation have eroded our trust in legitimate sources more than our eroded trust has created a vacuum for misinformation to fill. At the very least, these are probably concurrent and overlapping issues, rather than linear.
I also worry that the news media has shot itself in the foot by competing with social media on speed rather than accuracy, and reporting the utterances of every pundit and celeb as though they carried any weight. Consider the difference between “Audits show less anti-Semitism on X than other apps, Musk says” (a headline on Reuters this week) and “Elon Musk claims X has less anti-Semitic content than peers” (CNN’s version). I know the origin of the stylistic quirks deployed by news media; I just wonder if they serve us anymore.
My main beef with Steyn’s column — if I can call it “beef”, because we’re actually largely aligned — is the idea that scepticism is anything other than our sole weapon in this war. It’s just a small, splintering shield, barely any protection against the disinformation bombardment, but with the tools deployed against us shifting faster than a coronavirus, we have little at hand other than scepticism.
The evolution I’m talking about is astonishing and exponential. Outside my writing work I also do some public speaking. Last week I was updating my go-to presentation on generative AI before one such talk when I realised my slide on tips to spot AI-generated images — the kind we see cropping up in manipulative fake news — had dated itself out of usefulness in a handful of months.
In the face of a fearmongering news story that turns on your own fears and prejudices, a story with video and image “proof”, and presented with all the hallmarks of legitimate media, the only thing left — until the regulation cavalry and AI-detection technology catch up, if they even can — is a willingness to pause and question what we see in front of us.
The regulators are chasing the noble outcome of better oversight, stronger punishment for poor content moderation and irresponsible technology use. These are necessary — critical, even — but for now, while we wait for the wheels of legislation to turn, we can and must deploy that rare resource of critical thinking.