Business Day

Will guardrails be enough to keep deep fakes in check?

- Khadeeja Bassier is COO at Ninety One.

Picture this: it’s the day before we head to the polls. You watch a video that shows a prominent politician paying a bribe to an Eskom official. The bribe is paid to manipulate load-shedding and thereby irresponsibly increase the risk of national grid failure.

You have had it with a deteriorating state, and the video convinces you to vote differently from how you otherwise would have. Two weeks after the election results are in, the video (even though it was “forwarded many times”) is proved to be a deepfake.

At the risk of being accused of describing a reality that parodies a Black Mirror episode, both the Slovakian and the Bangladeshi elections have shown this is not a far-fetched outcome. The key question is whether our existing guardrails will suffice to combat this phenomenon.

Societal norms evolve over time. This evolution is largely driven by a constant renegotiation of the boundaries of what society will accept. And, as any toddler’s parent will tell you, establishing boundaries is by no means painless. I believe we are in the throes of one such boundary-defining moment in what looks to be a face-off between free markets, technopolies and democracy.

The Financial Times recently published a fascinating piece, “The rising threat to democracy of AI-powered disinformation”. The article highlighted the power of disinformation to nefariously influence 2024’s election outcomes, in a year in which more than half the world’s adult population will be voting. While disinformation has always been a reality, the FT argues that technological advancements will materially amplify the threat. Specifically, what generative AI’s popularity has taught us is that AI is incredibly good at lying convincingly, and at doing so at scale.

The key to defeating disinformation is detecting the fake news and then correcting it within the time window where it matters. A culture of posts “going viral” means it is that much harder to pull a story back even once the information is disproved. Think about how little a retraction achieves once an op-ed from a respected source has circulated. The problem becomes even more acute when you believe you have seen or heard something from the horse’s mouth.

To understand the extent of free-market boundary renegotiation required, it is worth reminding ourselves of two news stories affecting public and private sector norms.

In January tech billionaire Elon Musk was reported to be a user of illegal drugs. Putting aside personal views on recreational drug use, such use is in direct contravention of his SpaceX contract. What has been interesting about this story, together with his purchase of X, is that it has provoked the question of “too big to fail”. If the accusations are true, can the US really afford to withdraw multibillion-dollar contracts over what some may view as bad behaviour?

Sam Altman’s departure from, and return to, OpenAI caused a (temporary) ruckus, upending notions of acceptable corporate governance and giving us pause about what employee power really means. The board thought it was the ultimate arbiter of the company’s strategic direction, when in fact Microsoft, despite not holding a board seat, could engineer Altman’s reinstatement through a combination of its own job offer and the threat of an employee-backed exodus to follow, a threat that by all accounts was a bluff, but a strong enough one to ensure Altman returned to the helm.


If the rules of engagement are shifting so radically, can we genuinely rely on these selfsame companies to be the hall monitors of our collective democratic destinies? Perhaps, instead, we should accept that democracies are not only about elections, and that elections are often more about personality and charisma than facts and policy.

And there’s the rub. What we have seen with generative AI is that it relies on adjacency for predictive purposes (how close words sit together, enabling the model to predict the next one) rather than on understanding sequence, cause and logic. This lends itself better to convincing deep fakery in an election climate than to replicating personality. Revisiting the thought experiment above: if deep fakery becomes a globally widespread phenomenon, society will have to wrestle with what that means for our democratic institutions.
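For readers curious what “adjacency for predictive purposes” looks like in practice, here is a deliberately tiny sketch in Python, using an invented nine-word corpus, of the counting idea that, scaled up enormously and replaced with learned statistics, underlies next-word prediction. It is a toy illustration, not how any production model is actually built:

```python
from collections import Counter, defaultdict

# Invented toy corpus, purely for illustration.
corpus = "the grid fails the grid holds the grid fails".split()

# Count which word follows which (pure adjacency), with no notion
# of cause, logic or long-range sequence.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word):
    """Return the most frequently observed next word."""
    return following[word].most_common(1)[0][0]

print(predict("grid"))  # "fails" follows "grid" twice, "holds" once
```

The point of the toy: the model outputs whatever tends to come next, with no regard for whether it is true, which is precisely why such systems lie so fluently.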

The knee-jerk response would be to regulate the hell out of the models, but more nuance is required, because this is a philosophical reckoning rather than a scientific one. As tempting as it is, we should not delude ourselves into believing societies are simply the sum of our data. Either way, all we can say for certain is that 2024 is going to be a generationally defining year.
