New Straits Times

What to do with deepfakes?


THEY say seeing is believing. Well, if a photograph is considered convincing evidence of some wrongdoing, imagine how damning a video clip can be. There was a time when it was hard to even make a realistic-looking fake photograph. Today, it’s relatively easy to make a realistic-looking video.

A good example is the “deepfake” video that comedian and film director Jordan Peele made of Barack Obama saying things the real President Obama never would. If you’ve not seen it yet, just go to YouTube, input the relevant keywords and you’ll easily find it.

What was striking about this video, made last year, was how realistic it was. While not flawless, at first glance it could really be mistaken for the real thing. If you look carefully, you’ll see some distortions around the mouth that give away the fact it’s a simulation.

But what a difference a year makes, especially when it comes to technology. A company called Brandalism recently put together a flawless deepfake of Kim Kardashian talking about a villainous organisation called Spectre (from the James Bond movies, perhaps) and mocking her fans.

That video has been taken down from YouTube but at the time of writing, it can still be found on Instagram. Unlike the Obama deepfake, this one holds up as realistic even after many views. I’ve watched it many times and I can’t honestly say I could tell it was fake.

The reason is technology. The Obama deepfake was made by Peele’s production company using a mix of Adobe After Effects and a very common face-swapping app called FakeApp. The Kardashian video used much more sophisticated software called CannyAI, which was developed by an advertising agency called Canny.

All deepfake software uses AI algorithms to manipulate a person’s mouth movements: by studying footage from existing real videos, it generates a fake video in which the person’s mouth movements appear to match the fake speech (an impersonator supplies the fake voice). I guess the difference between the Obama deepfake, which was good, and the Kardashian deepfake, which was flawless, was the quality of the AI algorithm.

While deepfake videos like the Obama and Kardashian examples are humorous and meant to entertain, one can easily imagine how deepfake technology could be used for nefarious purposes, especially in politics.

How does it work?

Is it really so easy to make a realistic deepfake video? If you want to make something really convincing, so much so that it would fool most people, the answer is that it’s not so easy. At least for now.

The core process involved in creating deepfake videos is something called generative adversarial networks (or GANs), whereby two machine learning models are pitted against each other. One model creates the video forgeries while the other model attempts to detect the forgeries. The forger continues to create fakes until the other model isn’t able to detect the forgery anymore. For this to work well, you’d need a lot of original source material. That’s why politicians and celebrities are easy targets. There are lots of videos of them around.
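That forger-versus-detector loop can be sketched as a deliberately simplified toy, assuming nothing beyond Python’s standard library. This is a hypothetical illustration, not real deepfake software: actual GANs pit two neural networks against each other over video frames and train them by gradient descent, whereas here the “forger” and the “detector” are each just a single number.

```python
import random

REAL_MEAN = 5.0  # stands in for the "real footage" the forger tries to imitate


def fake_sample(forger_mean):
    # The forger's current output: noisy samples around its learned mean.
    return random.gauss(forger_mean, 1.0)


def looks_fake(x, threshold):
    # Crude detector: anything below the threshold gets flagged as a forgery.
    return x < threshold


def adversarial_training(rounds=800, step=0.01, seed=1):
    random.seed(seed)
    forger_mean = 0.0  # the forger starts out nothing like the real data
    for _ in range(rounds):
        # Detector adapts: split the difference between fake and real output.
        threshold = (forger_mean + REAL_MEAN) / 2
        # Forger adapts: each time it gets caught, shift toward what passes.
        if looks_fake(fake_sample(forger_mean), threshold):
            forger_mean += step
    return forger_mean


print(adversarial_training())  # the forger's output drifts toward REAL_MEAN
```

The point of the toy is the pressure, not the numbers: the forger improves precisely because the detector keeps catching it. Scaled up to neural networks and images, that same adversarial pressure is what pushes GAN forgeries toward realism.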

There’s also a limit to what the algorithms can do. They work best when someone is stationary and talking to the camera. It’s much harder to make a convincing deepfake of someone in an action scene. Even a speaker who simply moves their hands a lot while talking poses a problem for the algorithm.

So, it’s actually not so easy to make a very realistic deepfake, but it is getting easier and the technology is improving all the time. This has a lot of implications, especially for politics. You can easily imagine political opponents posting deepfakes of their rivals saying something bad or embarrassing.

Hany Farid, a professor of computer science at Dartmouth College, has estimated that within two years the technology will reach a point where the human brain will not be able to detect a deepfake. When that happens, not only will it be easy to fool people into believing a deepfake video is real, it will also be possible for politicians caught in a scandal to claim that a real video is fake.

Of course, when the human eye isn’t able to tell whether a video is fake or not, it’s still possible to rely on digital forensics. And just as deepfake technology is improving, so are the forensics for authenticating videos.

Detecting deepfakes

In order to learn how to detect deepfakes, you first have to know how to make them.

The United States’ Defense Advanced Research Projects Agency (DARPA) is collaborating with researchers at the University of Colorado in Denver to try to create convincing deepfake videos, which will later be used by other researchers who are developing technology to determine what’s real and what’s fake.

Other notable academic institutions like Carnegie Mellon, the University of Washington, Stanford University and the Max Planck Institute for Informatics all have researchers experimenting with deepfake technology.

One would imagine digital forensics will always be one step ahead of the forgers, but a deepfake doesn’t have to be flawless to damage a political campaign. That’s because people tend to be gullible when something confirms their biases.

A person who dislikes a certain politician will believe a scandalous video to be true. On the other hand, a person who supports that politician will view the very same video with suspicion. It’s human nature. We’re all susceptible to confirmation bias.

You can see this phenomenon even with fake news articles that don’t involve photographs or videos at all. Most fake news articles can easily be debunked by checking the source and doing a bit of fact-checking via Google. Yet many people fall for fake news and, worse still, spread it through social media because such news confirms their biases.

For those of us who’d like to think that we’re not so biased and can view things objectively, the way to protect against being duped by a deepfake video is the same way you’d vet a dubious news story. Check the source. Is it from a trusted establishment like CNN, the New York Times or Reuters?

Or did it originate from someone’s blog or social media page? Also, do a bit of Googling. Has the video been widely reported on? Sometimes a bit of due diligence and common sense are just as effective as digital forensics.

Jordan Peele next to the deepfake Obama he created.
Future Proof: Oon Yeoh is a consultant with experience in print, online and mobile media. Reach him at oonyeoh@gmail.com.
