BBC Science Focus

11. DEEPFAKE WARFARE

An arms race will pit AIs against each other to discover what’s real and what’s not


Deepfake videos have exploded online over the past two years. They use artificial intelligence (AI) to swap one person’s image in a photo or video for another’s. Deeptrace, a company set up to combat them, says that in just the eight months between April and December 2019, the number of deepfakes rocketed by 70% to 17,000.

Most deepfakes, about 96%, are pornography, in which a celebrity’s face replaces the original performer’s. In its 2019 report, The State Of Deepfakes, Deeptrace says the top four dedicated deepfake porn sites generated 134,364,438 views.

As recently as five years ago, realistic video manipulation required expensive software and a lot of skill, so it was primarily the preserve of film studios. Now, freely available AI algorithms that have learned to create highly realistic fakes can do all the technical work. All anyone needs is a laptop with a graphics processing unit (GPU).

The AI behind the fakes has been getting more sophisticated too. “The technology is really much better than last year,” says Associate Professor Luisa Verdoliva, part of the Image Processing Research Group at the University of Naples in Italy. “If you watch YouTube deepfake videos from this year compared to last year, they are much better.”

Now there are huge efforts within universities and business start-ups to combat deepfakes by perfecting AI-based detection systems and turning AI on itself. In September 2019, Facebook, Microsoft, the University of Oxford and several other universities teamed up to launch the Deepfake Detection Challenge with the aim of supercharging research. They pooled together a huge resource of deepfake videos for researchers to pit their detection systems against. Facebook even stumped up $10 million for awards and prizes.

Verdoliva is part of the challenge’s advisory panel and is doing her own detection research. Her approach is to use AI to spot tell-tale signs – imperceptible to the human eye – that images have been meddled with. Every camera, including those in smartphones, leaves invisible patterns in the pixels when it processes a photo. Different models leave different patterns. “If a photo is manipulated using deep learning, the image doesn’t share these characteristics,” says Verdoliva. So, when these invisible markings have vanished, chances are it’s a deepfake.
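To get a feel for the idea, the toy Python sketch below mimics this kind of camera-fingerprint forensics. It is a loose illustration, not Verdoliva’s actual system: the “sensor pattern” is synthetic random noise, the denoiser is a crude high-pass filter, and all the numbers are made up. The point is only to show the principle – an image that genuinely passed through the camera correlates with the camera’s noise fingerprint, while a synthetic image does not.

```python
import numpy as np

def noise_residual(img):
    # Crude high-pass filter: subtract the mean of the four neighbours
    # to isolate noise-like detail (a stand-in for a real denoiser).
    neighbours = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                  np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    return img - neighbours

def fingerprint_correlation(img, fingerprint):
    # Normalised correlation between an image's noise residual and a
    # camera's reference fingerprint; values near zero suggest the
    # camera's pattern is absent - one possible hint of manipulation.
    r = noise_residual(img).ravel()
    f = fingerprint.ravel()
    r = (r - r.mean()) / (r.std() + 1e-9)
    f = (f - f.mean()) / (f.std() + 1e-9)
    return float(np.mean(r * f))

# Toy data: a synthetic "sensor pattern" stands in for the invisible
# marks a real camera leaves; all values here are illustrative.
rng = np.random.default_rng(0)
pattern = rng.normal(0, 5, (64, 64))     # pretend camera fingerprint
scene = rng.normal(128, 20, (64, 64))    # pretend photographed scene

genuine = scene + pattern                # photo carrying the pattern
fake = rng.normal(128, 20, (64, 64))     # synthetic image, no pattern

fingerprint = noise_residual(pattern)    # reference for this "camera"
corr_genuine = fingerprint_correlation(genuine, fingerprint)
corr_fake = fingerprint_correlation(fake, fingerprint)
print(f"genuine: {corr_genuine:.3f}, fake: {corr_fake:.3f}")
```

Run on this toy data, the genuine image shows a clearly higher correlation with the fingerprint than the fake does. Real forensic systems build the fingerprint by averaging residuals over many photos from the same camera and use far better denoisers, but the underlying test is the same.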

Other researchers are using different detection techniques. While many of them can spot deepfakes generated in a similar way to those in their training data, the real challenge is to develop a detection system that can spot deepfakes created using entirely different techniques.

The extent to which deepfakes will infiltrate our lives in the next few years will depend on how this AI arms race plays out. Right now, the detectors are playing catch-up.

