Business Day

With AI, what you see is not at all what you think you’re getting

• It’s not just celebrities who are concerned about the dangers of face-swapping technology

- SYLVIA McKEOWN. McKeown is a gadget and tech trend writer

They say seeing is believing. But what if everything you were seeing wasn’t real and even machines were having a hard time figuring out if it was?

Clips have surfaced online in which Steve Buscemi’s face is superimposed on Sharon Stone’s as she utters the iconic words and stretches her legs over the chair in Basic Instinct. It looks awkward yet eerily realistic. But of course the use of this technology is not as innocent and amusing as it seems.

It all started with a Reddit user called Deepfakes, who was obsessed with wanting to watch porn featuring Wonder Woman actress Gal Gadot. So he made his own by creating software that would essentially superimpose her face seamlessly onto that of a porn actress. The realism he was able to achieve caused the video and the tech to become an overnight sensation.

He then uploaded the tech for free and it went on to “face swap” numerous celebrities’ faces onto those of porn stars. Two months later easy-to-use apps, such as FakeApp, started popping up. They would do the swapping for you if you had a reasonably powerful computer.

In August, research unveiled at the computer graphics conference Siggraph showed how far the technology has evolved: a source actor can now control a realistic virtual avatar to create human portrait videos in real time.

Reddit went on to ban the subreddit “/r/deepfakes” for violating its rules (“specifically our policy against involuntary pornography”), but the problem remains.

The craze has found new homes elsewhere on dedicated sites such as mrdeepfakes.com or cfake. According to reviews of the site, “the people who provide the smut here have a high-quality set-up and the pictures look so realistic you will find yourself wondering if it’s real or not”.

But it’s not just celebrities who have cause to be concerned; almost anyone can become anyone in any moving image. All the artificial intelligence (AI) needs is a host of facial images to use as a starting reference before it builds a 3D rendering of a face that can be “puppeted” to perform realistic facial movements. Thanks to the willingness of the selfie generation to upload countless images on free-to-see social media sites, the fodder is thick.
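For readers curious how the “puppeting” works under the hood: the original deepfake software trained one shared encoder on both people’s faces, plus one decoder per identity. The sketch below is a conceptual toy in plain numpy — the dimensions and weights are illustrative and untrained, so it shows only the data flow, not a working model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: a "face" here is just a flat vector. Real deepfake models
# work on aligned face crops and use convolutional networks.
FACE_DIM, LATENT_DIM = 256, 32

# One shared encoder learns pose and expression common to both people...
W_enc = rng.normal(scale=0.1, size=(LATENT_DIM, FACE_DIM))
# ...and a separate decoder per identity renders that pose in one
# person's likeness.
W_dec_a = rng.normal(scale=0.1, size=(FACE_DIM, LATENT_DIM))  # person A
W_dec_b = rng.normal(scale=0.1, size=(FACE_DIM, LATENT_DIM))  # person B

def encode(face):
    # Compress a face to identity-agnostic features (pose, expression).
    return np.tanh(W_enc @ face)

def swap_a_to_b(face_a):
    """Encode a frame of person A, then decode with person B's decoder:
    the output is B's face "puppeted" by A's pose and expression.
    (Random weights here, so this illustrates the pipeline only.)"""
    return W_dec_b @ encode(face_a)

frame_a = rng.normal(size=FACE_DIM)
swapped = swap_a_to_b(frame_a)
print(swapped.shape)  # prints (256,)
```

This shared-encoder design is why the software needs “a host of facial images” of the target: the more reference frames, the better each decoder learns that person’s likeness.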

Tech site Motherboard asked Deepfakes, whose username has now become the catchall term for AI-assisted face swapping, whether he considered the ethical implications of this technology; whether consent, revenge porn and blackmail entered his mind while developing the algorithm. “Every technology can be used with bad motivations. It’s impossible to stop that,” said the man who turned people into porn stars without their consent.

He likened it to the technology that recreated Paul Walker’s post-mortem performance in Furious 7. “The main difference is how easy it is to do that by everyone. I don’t think it’s a bad thing for more average people to engage in machine-learning research.”

Setting aside the devastating effects of revenge porn, which disproportionately affects women and will no doubt be used in heinous, life-altering ways, the really scary side to this technology is that it could be used on an entirely different kind of celebrity. Already there are deepfake videos of Donald Trump doing all kinds of things. As it stands, most of those things are innocent, like having him replace the monster in Young Frankenstein, the 1974 Mel Brooks classic.

But Buzzfeed has shown that, using Jordan Peele as a source actor, it was able to create a believable replica of former president Barack Obama calling Trump a “total and complete dips**t”. It urged that “moving forward we need to be more vigilant with what we trust from the internet”.

The potential for this tech to cause an international incident is high, especially when paired with on-the-rise voice replication software. Some governments are already putting detection protocols in place. The US Defence Advanced Research Projects Agency (Darpa) has set aside $28m to fund new tech that can identify and debunk manipulated deepfake videos.

Siwei Lyu, professor and Darpa-funded researcher at the University at Albany, told Vice News: “We’re playing a cat-and-mouse game. We detect deepfake generated videos based on the fact that those faces on the videos do not blink. As of June our detection accuracy is 100% but the battle is ongoing. I’m expecting the quality of the fake videos to get better and better at an exponential speed.”
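Lyu’s blink cue can be sketched with the standard eye-aspect-ratio (EAR) measure used in blink detection research: track six landmarks around each eye and watch for the ratio collapsing when the lid closes. A minimal Python sketch follows — the threshold, frame counts and landmark coordinates are illustrative, and a real detector would extract landmarks from video with a face tracker.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks, p1..p6, with p1/p4 the horizontal
    corners and p2, p3 / p6, p5 the upper/lower lid points.
    The ratio collapses towards zero when the eyelid closes."""
    p1, p2, p3, p4, p5, p6 = eye
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # two vertical lid distances over the horizontal eye width
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count runs of consecutive frames with EAR below the threshold;
    each sufficiently long run is one blink. A long clip with zero
    blinks is the red flag Lyu's team looks for."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# A synthetic per-frame EAR trace with two dips below the threshold.
trace = [0.32, 0.31, 0.08, 0.07, 0.30, 0.09, 0.08, 0.07, 0.31]
print(count_blinks(trace))  # prints 2

# A flat trace with no dips over many frames: zero blinks, a
# suspicious clip by this heuristic.
print(count_blinks([0.3] * 100))  # prints 0
```

The weakness Lyu concedes is built into the heuristic: it only works while generated faces fail to blink, which is exactly what the next paragraph anticipates.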

One can only imagine that since this segment went to air, programmers are already teaching the AI how to blink.

The problem is that it’s all well and good that researchers and computers can detect the fakes, but ordinary people who see those videos online may still be fooled. Rampant fake written news on Facebook has already contributed to incidents such as a gunman opening fire at a pizza parlour falsely rumoured to host a paedophile ring, and is believed to have swayed an entire election in the US.

What will happen when the alternativ­e facts start speaking for themselves?

THE SCARY SIDE TO THIS TECHNOLOGY IS IT COULD BE USED ON AN ENTIRELY DIFFERENT KIND OF CELEBRITY

Breaking it: The Buzzfeed app on a smartphone. The media company has been able to create a foul-mouthed replica of former president Barack Obama. Picture: Bloomberg
