Threats from AI loom large over the polls and our democracy
As the second and final voter registration campaign comes to a close this weekend, our attention turns to President Cyril Ramaphosa, who is expected to announce the 2024 election date any day now. Whether we go to the polls in May or in the winter months, there is no question that our seventh general election will be markedly different from any other in the past 30 years.
In December, the Constitutional Court lowered the high electoral threshold that had posed a prohibitive barrier to independent candidates, who will contest seats in parliament and the provincial legislatures for the first time this year.
Speculation is also rife that, for the first time in the democratic era, the ANC will win less than 50% of the national vote in this year’s poll — a hypothesis reinforced by a raft of opinion polls of varying credibility. In addition, it is likely the ANC will lose its Gauteng and KwaZulu-Natal provincial majorities. If both of these possibilities come to pass, this will pave the way for coalition negotiations at both the national and provincial levels of government.
Politically, in a maturing democracy such as ours, these junctures are also moments of great vulnerability. As we adapt to changes in our electoral system and anticipate historic shifts in governance, we should also remain vigilant about the dangers to the poll posed by misinformation and organised disinformation.
This year’s World Economic Forum Global Risks Report ranks misinformation and disinformation as the world’s biggest socioeconomic and political risks over the next two years. The report, which analyses global risks over one-, two- and 10-year horizons, draws on an annual risk perception survey of 1,500 experts in government, academia, civil society, international organisations and the private sector.
The report outlines how the rapid proliferation of false information could “radically disrupt electoral processes in several economies over the next two years”, and points to “a growing distrust of information, [which] will deepen polarised views — a vicious cycle that could trigger civil unrest and possibly confrontation”.
In 2016, the Internet Research Agency — the infamous Russian troll farm created by Yevgeny Prigozhin, the assassinated Russian oligarch and former ally of Vladimir Putin, to infiltrate social media and spread misleading content that amplified social divisions in the run-up to that year’s US presidential election — was estimated to cost more than $1m (about R18.9m) a month to operate.
Today, creating deepfakes has become cheaper and easier owing to the advent of powerful and user-friendly AI tools and software. It takes just a few clicks of a mouse or taps of a smartphone screen for anyone, anywhere, to transform innocuous videos and images into harmful, malicious content.
Last week, the White House expressed alarm at the proliferation of sexually explicit “deepfake” images of American singer-songwriter Taylor Swift across multiple social media platforms. One image on X garnered 47-million views before the account publishing it was suspended.
The line between fact and fiction is becoming increasingly blurred. It is difficult to know what is real anymore, and malicious actors are willing and able to use powerful new tools to damage democratic discourse and amplify social divisions.
South Africa is not immune to this global menace. Our nation has already fought off powerful disinformation campaigns by state and non-state actors over the past decade. We have overcome the odious machinations of Bell Pottinger, Cambridge Analytica and the many lingering sources of disinformation that thrived at the height of the Covid-19 pandemic. The share of the South African population with access to the internet is set to reach 80.71% this year, deepening the vulnerability of the electorate to these online threats.
We must advocate stronger legislation to criminalise the creation and proliferation of deliberately misleading online content. Legislation such as the Disrupt Explicit Forged Images and Non-Consensual Edits (“Defiance”) Act, tabled in the US Congress in the wake of the deepfake images of Swift, proposes a right of civil action over “digital forgeries” that depict an identifiable person without their consent. In December last year, the European Commission, Council and Parliament reached agreement on the introduction of an AI Act — the world’s first comprehensive law on AI.
Addressing pupils at St John’s College in Johannesburg in 2003, former president Nelson Mandela famously said that “an educated, enlightened and informed population is one of the surest ways of promoting the health of a democracy”. That statement has never been truer. While policing and removing false information and its sources where they appear online is critical, that alone is not enough to stem the tide. An informed citizenry and electorate, empowered by the media and possessed of civic literacy, is the most important weapon in the fight against disinformation.