Deepfakes are no longer just shallow jokes — they are a huge threat
• Manipulated audio and video clips could cause havoc socially, politically and economically
Last week in this column I mentioned deepfakes, almost in passing in my little love letter to Reddit, but a few recent conversations I’ve had about these suggest they are not as well known or understood as I’d thought. And they really should be.
I did a quick survey of friends and family. My digital media lecturer friend knew what I was on about — no surprise there — but the majority didn’t. That most people — smart, educated people from a range of cities and professions — hadn’t heard of these is almost as scary as the existence of deepfakes themselves.
They are potentially a huge threat — socially, politically, and economically — and I’m not overstating here.
Let’s back up a step. Deepfakes can take the form of audio or text, but the term is mostly associated with altered (faked) images and video. The “deep” in deepfakes comes from deep learning, an area of machine learning, which in turn falls under the broad artificial intelligence (AI) banner. So essentially it’s like computer-generated imagery (CGI) using AI.
This name, by the way, is generally accepted to have originated from a Redditor whose username was Deepfakes.
About two years ago Deepfakes (the user) began sharing fake pornography on Reddit, or more specifically modified versions of real porn. Using open-source software, he managed to replace the faces of adult film stars with those of more mainstream (or at least family-friendly) actors.
Some other users began to run with the idea. Someone created an app you could use to generate these. And just like that, we could make almost anyone do almost anything.
Deepfakes 101: there are two computer systems at play. There is a generative or synthesising one that makes the fake media. Then there is a detection or discriminatory system that assesses how realistic the media appears. They go back and forth, each improving against the other, until the outcome is a fake that passes muster.
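For the curious, that back-and-forth is what machine-learning people call a generative adversarial network (GAN). Here is a toy sketch of the idea in plain Python with NumPy — everything in it (the one-dimensional “real data”, the linear generator, the logistic discriminator, all the variable names) is a deliberately simplified illustration, not any actual deepfake tool:

```python
import numpy as np

# Toy 1-D adversarial loop: a linear "generator" learns to mimic samples
# drawn from N(4, 0.5), while a logistic "discriminator" learns to tell
# real samples from fakes. Purely illustrative, not a real deepfake system.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: fake = a*z + b, with noise z ~ N(0, 1)
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), its guess that x is real
w, c = 0.0, 0.0
lr = 0.03

for step in range(3000):
    real = rng.normal(4.0, 0.5, size=64)
    z = rng.normal(size=64)
    fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - dr) * real) - np.mean(df * fake))
    c += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator step: push D(fake) toward 1 (fool the discriminator)
    df = sigmoid(w * fake + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

# After training, the generator's fakes should cluster near the real mean
fakes = a * rng.normal(size=1000) + b
print("fake mean:", round(float(np.mean(fakes)), 2))
```

Real deepfake systems play exactly this game, just with deep neural networks and images of faces instead of a handful of numbers — which is why the fakes keep getting better: every improvement in the detector trains a better forger.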
Some of the outputs of this are pretty innocuous and even legitimate. For example, it can be used to “de-age” an actor so they can play the past and present forms of their characters. This wouldn’t raise many eyebrows, beyond acknowledging that it is pretty cool tech.
There are also funny fakes. My favourite example is that some bright creative sparks have used deepfake technology to insert Nicolas Cage into a delightful range of incongruous roles, such as Maria in The Sound of Music or Yoda in Star Wars. There is a “mega mix” version of these on YouTube that is totally worth two minutes of your time. The affectionate name for these is “derpfakes”.
Then there are the less innocuous ones, the ethically ambiguous and the downright revolting. We’ve seen examples in advertising of raising celebs from the grave to dance around or endorse your product. Am I really the only one who finds that deeply unsettling?
And there is the case of Indian journalist Rana Ayyub, who had her face transposed onto a short clip from a sex film as a threat from some mouth-breather who didn’t like her strong stance on the Kathua gang rape incident. And earlier in October, actress Bella Thorne was the victim of another deepfake porn clip.
Are you sensing a theme here? A new report from Deeptracelabs.com found that something like 96% of deepfakes are porn-related: what some are calling “non-consensual porn”.
But they don’t have to be. Imagine the damage a fake CEO statement could create. Even if later proved fake, the rate at which the markets move makes stock manipulation more than possible.
Initially deepfakes were pretty clunky, easily spotted by a vaguely discerning eye. But the rate at which these have improved is alarming.
The podcast Make Me Smart With Kai and Molly recently featured an interview with Hany Farid, a professor at the University of California Berkeley’s School of Information. The professor is an expert in this space and helps US legislators and politicians understand the technology and its potential (mis)uses. He boasts a joint appointment in electrical engineering and computer science, with research focuses of digital forensics and image analysis.
So when he says that six months ago he could spot them, but that they are getting harder and harder to detect visually, that is cause for alarm. Farid and many others around the world are developing computational techniques to identify these, anticipating an end game where the human eye simply won’t be able to tell.
He makes a great point in the interview: these videos have even more power because of the adversarial nature of politics today. People are willing and ready to fall for this, to jump on anything that makes the other side look bad, he warns. This plays into confirmation bias and short-circuits our scepticism. To counter this, he says we must rethink how we interact with each other online. Are you ready to give US President Donald Trump the benefit of the doubt?
If you want to learn more about deepfakes, I highly recommend listening to the Farid podcast episode I mentioned above. It is a short and non-technical discussion that serves as a great primer on this topic. A second podcast worth a listen is the episode of Stuff You Should Know called “Will deepfakes ruin the world?” There is also an eye-opening TED Talk by law professor Danielle Citron you can find on YouTube.