Business Day

Deepfakes no longer just shallow jokes — they are a huge threat

• Manipulated audio and video clips could cause havoc socially, politically and economically

- KATE THOMPSON FERREIRA ● Thompson Ferreira is a freelance journalist, impactAFRICA fellow and WanaData member.

Last week in this column I mentioned deepfakes, almost in passing in my little love letter to Reddit, but a few recent conversations I’ve had about these suggest they are not as well known or understood as I’d thought. And they really should be.

I did a quick survey of friends and family. My digital media lecturer friend knew what I was on about — no surprise there — but the majority didn’t. That most people — smart, educated people from a range of cities and professions — hadn’t heard of these is almost as scary as the existence of deepfakes themselves.

They are potentially a huge threat — socially, politically, and economically — and I’m not overstating here.

Let’s back up a step. Deepfakes can take the form of audio or text, but the term is mostly associated with altered (faked) images and video. The “deep” in deepfakes comes from deep learning, a branch of machine learning, which in turn falls under the broad artificial intelligence (AI) banner. So essentially it’s like computer-generated imagery (CGI) made with AI.

This name, by the way, is generally accepted to have originated from a Redditor whose username was Deepfakes.

About two years ago Deepfakes (the user) began sharing fake pornography on Reddit, or more specifically modified versions of real porn. Using open-source software, he managed to replace the faces of adult film stars with those of more mainstream (or at least family-friendly) actors.

Some other users began to run with the idea. Someone created an app you could use to generate these. And just like that, we could make almost anyone do almost anything.

Deepfakes 101: there are two computer systems at play. There is a generative or synthesising one that makes the fake media. Then there is a detecting or discriminating system that assesses how realistic the media appears. They go back and forth until the outcome is a fake that passes muster.
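To picture that back-and-forth, here is a minimal sketch of the two-system loop, known in machine learning as a generative adversarial network (GAN). Everything in it is an illustrative assumption on my part: the column names no tools, so the choice of Python with PyTorch, the tiny network sizes and the random stand-in “images” are placeholders, not how any real deepfake app is built.

```python
# Minimal GAN sketch (illustrative only): a generator makes fakes,
# a discriminator judges them, and the two train against each other.
import torch
import torch.nn as nn

latent_dim, image_dim, batch = 64, 784, 32  # toy sizes, not from the article

# Generative/synthesising system: turns random noise into a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, image_dim), nn.Tanh())

# Detecting/discriminating system: scores how real a sample looks (1 = real).
D = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

# Placeholder "real media": random tensors standing in for face images.
real = torch.rand(batch, image_dim) * 2 - 1

for step in range(1000):
    # 1. Train the discriminator to tell real from fake.
    fake = G(torch.randn(batch, latent_dim)).detach()  # detach: freeze G here
    d_loss = (loss_fn(D(real), torch.ones(batch, 1)) +
              loss_fn(D(fake), torch.zeros(batch, 1)))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2. Train the generator to fool the discriminator into saying "real".
    g_loss = loss_fn(D(G(torch.randn(batch, latent_dim))), torch.ones(batch, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

The back-and-forth the column describes is the loop itself: every gain the discriminator makes in spotting fakes pushes the generator to produce fakes that pass muster, and vice versa.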

Some of the outputs of this are pretty innocuous and even legitimate. For example, it can be used to “de-age” an actor so they can play the past and present forms of their characters. This wouldn’t raise many eyebrows except that it is pretty cool tech.

There are also funny fakes. My favourite example is that some bright creative sparks have used deepfake technology to insert Nicolas Cage into a delightful range of incongruous roles, such as Maria in The Sound of Music or Yoda in Star Wars. There is a “mega mix” version of these on YouTube that is totally worth two minutes of your time. The affectionate name for these is “derpfakes”.

Then there are the less innocuous ones, the ethically ambiguous and the downright revolting. We’ve seen examples in advertising of raising celebs from the grave to dance around or endorse your product. Am I really the only one who finds that deeply unsettling?

And there is the case of Indian journalist Rana Ayyub, who had her face transposed onto a short clip from a sex film as an unsettling threat from some mouth-breather who didn’t like her strong stance on the Kathua gang rape incident. And earlier in October, actress Bella Thorne was the victim of another pornographic deepfake.

Are you sensing a theme here? A new report from Deeptracelabs.com found that something like 96% of deepfakes are porn-related: what some are calling “nonconsensual porn”.

But they don’t have to be. Imagine the damage a fake CEO statement could create. Even if the statement is later proved fake, the speed at which markets move makes stock manipulation more than possible.

Initially deepfakes were pretty clunky, easily spotted by a vaguely discerning eye. But the rate at which these have improved is alarming.

The podcast Make Me Smart with Kai and Molly recently featured an interview with Hany Farid, a professor at the University of California, Berkeley’s School of Information. The professor is an expert in this space and helps US legislators and politicians understand the technology and its potential (mis)uses. He boasts a joint appointment in electrical engineering and computer science, with research focusing on digital forensics and image analysis.

So when he says that six months ago he could spot them by eye, but that they are getting harder and harder to identify visually, that is cause for alarm. Farid and many others around the world are developing computational techniques to identify these, anticipating an end game in which the human eye simply won’t be able to tell.

He makes a great point in the interview: these videos have even more power because of the adversarial nature of politics today. People are willing and ready to fall for this, to jump on anything that makes the other side look bad, he warns. This plays into confirmation bias and short-circuits our scepticism. To counter this, he says we must rethink how we interact with each other online. Are you ready to give US President Donald Trump the benefit of the doubt?

If you want to learn more about deepfakes, I highly recommend listening to the Farid podcast episode I mentioned above. It is a short and non-technical discussion that serves as a great primer on this topic. A second podcast worth a listen is the episode of Stuff You Should Know called “Will deepfakes ruin the world?” There is also an eye-opening TED Talk by law professor Danielle Citron you can find on YouTube.

From clunky to cancerous: an example of a deepfake video manipulated using artificial intelligence by Carnegie Mellon University researchers. /AFP
