
Deepfakes threaten democracy

By Dr Sajjad Bagheri, lecturer in computer science, University of the West of Scotland

CHANNEL 4 News presenter Cathy Newman has revealed her image was used in a sexually explicit “deepfake” video. “It was violating,” she said, explaining the video displayed her face grafted onto another woman’s body. She spoke out as the creation of sexually explicit deepfake images is to be made a criminal offence in England and Wales. There are existing protections in Scots law to deal with deepfake pornography.

Rapid advances in artificial intelligence (AI) technology have led to serious debate and concern, with the main point of worry typically being what it will mean for jobs and the workplace. While this is a valid concern, there is a more imminent AI-related threat poised to cause havoc in the election year of 2024: deepfakes.

With at least 64 nations around the world heading to the polls, there’s a real possibility that deepfakes will be responsible for manipulating democracy at a level we simply have not seen before.

Deepfakes are manipulated images, video or audio created using AI, generally to present someone as doing or saying something that they did not.

In their infancy, deepfakes were easy to identify, and therefore easy to ignore. That is no longer the case. In 2024, deepfakes are more advanced than ever, to the point that some slip past even the most advanced AI-detection tools. When the ethical guardrails are no longer working, there has to be cause for concern.

The believability of these deepfakes has led to several of them going viral this year before being debunked in publications such as The Herald. AI-generated images of Taylor Swift became so widespread on X that the social media platform was forced to block users from searching for the pop star.

Fake audio purporting to be of Sir Keir Starmer berating a member of staff was viewed around 1.5 million times on X within days, but even condemnations from opposition politicians weren’t enough to convince the platform to take it down.

Even today the clip is easily found – threatening to rear its head and kick up a second viral storm in the future. Even though this audio has been debunked, it’s still being shared by people who either don’t want to believe the truth, or don’t care to.

Part of the problem is that deepfakes are worryingly easy to make, requiring almost no technical knowledge or skill, meaning the barrier to entry is low. Plenty of free generation tools exist, and even paid-for software is affordable.

You may have noticed a flood of AI-generated music covers on platforms such as YouTube in recent months, with the vocals of celebrities such as Freddie Mercury and Frank Sinatra being used to cover songs released long after their deaths.

These exist because they’re easy to make. This ease of use has also meant that some of the most damaging political deepfakes have been traced back to individual users as opposed to rogue nations. Anyone who wishes to cause chaos can attempt to do so at the click of a button.

We’ve seen, in recent years, how damaging misinformation can be. Conspiracies such as QAnon in the US have moved from the shadowy corners of the internet to mainstream social media sites and then on to the real world, culminating in shocking events such as the storming of the Capitol.

Imagine how much more dangerous this misinformation will be when it’s accompanied by fake audio, images and video – making the conspiracy seem even more real to those who stumble across it. It’s said that the camera never lies, but this phrase has never been less true.

A solution to the threat posed by deepfakes is urgently required, yet there is no easy fix. The technology used to create deepfakes is outpacing the solutions; it’s a constant process of playing catch-up.

One solution that has been touted is the introduction of stricter verification processes for images uploaded to social media, with more human involvement in approving potentially contentious images.

However, it is important that in blocking fake media we don’t end up blocking genuine media. Platforms such as X, formerly Twitter, have been invaluable tools in allowing breaking news to play out in front of us, and their ability to do so should not be hindered.

In recent UK elections, many seats have been marginal, with a handful of votes enough to swing some results. Deepfakes could end up being difference-makers.

It is vital that public awareness of deepfakes and general media literacy is heightened as we approach the next vote, to ensure people know that seeing is not necessarily believing. Given that a fit-for-purpose technological solution does not yet exist, deepfakes should be a key topic in public discourse.

Stepping away from politics, it’s worth noting the destructive potential of this technology on an individual level.

A disturbing trend has emerged of AI-generated adult images in which faces are superimposed onto other bodies. Research from Home Security Heroes has shown that a staggering 99% of those affected by this form of deepfake on one website were female, Cathy Newman among them.

It is inevitable that deepfakes will become a dangerous weapon in the toolkit of a cyberbully. They will also be used to harass and cause serious upset – they are an emerging threat that we must take extremely seriously. As we approach the election, we will certainly see more deepfakes. People will find new ways of using this technology in destructive ways, and public perception of some candidates may be damaged.

It’s important that we talk about this emerging problem, and do what we can to ensure that it is not normalised or ignored. This applies to us as voters, but also to the media, politicians and their parties, who must be responsible about what they report and share on platforms.

In the coming months and years we will all have to approach images, audio and video that do not come from a reputable news source with a degree of cynicism, which can be a challenge if what we’re being presented with backs up our existing feelings towards a person or subject.

For the foreseeable future, deepfakes are going to be part of political campaigns, and that’s going to be difficult to control. But how we collectively deal with them will be a shared endeavour.

Cathy Newman was the victim of a sexually explicit ‘deepfake’ video
