San Francisco Chronicle

Doctored video could worsen fake news era

- By Benny Evangelista

One video appears to show “Wonder Woman” star Gal Gadot performing in a pornographic scene.

Another depicts what the love child of President Trump and German Chancellor Angela Merkel might look like.

These “deepfake” videos — sometimes disturbing, sometimes entertaining creations of reality-distorting, face-swapping technology — are proliferating.

And in a social media-crazed world where people have trouble discerning what is and isn’t fake news, some computer scientists worry that such videos herald the escalation of a larger existential threat to the fabric of democracy, especially if used for malevolent purposes. In coming years, it may be hard to tell whether a video is real or fake.

“I’m worried about the death by a thousand cuts to our sense of reality as it gets easier and easier to mimic it, and the impact that will have in neutering checks on actual crime and corruption, even at the highest levels,” said Aviv Ovadya, chief technologist for the University of Michigan’s Center for Social Media Responsibility.

“This is a way that democracies fail.”

Granted, the sky remains firmly in place even though a few doctored celebrity porn videos began appearing late last year on the San Francisco social news site Reddit, as first reported by the tech news site Motherboard.

But the development demonstrated that media-altering technologies are no longer solely in the hands of professionals at movie visual effects studios. Now that people can create fake videos on their home computers, anyone can, in effect, turn legitimate photos, audio recordings and videos into false, potentially damaging instruments of propaganda and social discord.

What if, for example, a video surfaces showing the president in bed with Russian prostitutes, or another politician shouting a racial epithet?

“You’re going to have trouble trusting people on the phone, you’re going to have trouble trusting video,” said Jack Clark, strategy and communications director for OpenAI, a nonprofit San Francisco artificial intelligence research company that helped produce a report last month on malevolent uses of AI. “The problems are obvious. The solutions are not obvious.”

Peter Eckersley, the Electronic Frontier Foundation’s chief computer scientist, who helped author the report, called deepfakes the first “wave of the future where fabricated videos will inevitably be used for political purposes. So it’s time to start figuring out how to defend ourselves against that risk, how to defend democracy against those risks.”

The term deepfakes, a blend of “deep learning” and “fake,” came into use after an anonymous Reddit member who went by the screen name deepfakes began sharing how he created the face-swapping videos using a machine learning algorithm on a home computer. Another Reddit member later posted a simplified version, called FakeApp.

The programs scan videos and still photos of one person and paint that person’s features onto another person in a separate video. Using artificial intelligence technology, the programs can replace faces down to the movements of eyes, mouths and heads.

It’s an evolution of the way Adobe Photoshop, created nearly 30 years ago, can alter still images. In fact, one popular online pastime predating deepfakes is a series of memes and GIFs depicting actor Nicolas Cage’s face Photoshopped into everything from Harry Potter to Michelangelo’s “The Creation of Adam.” Deepfake videos took the meme to a new level, with Cage becoming Lois Lane, Luke Skywalker and Forrest Gump.

The technology was used to place the faces of celebrities such as Gadot, Daisy Ridley, Emma Watson and Taylor Swift onto the bodies of porn stars. Deepfakes became more notorious when users began swapping in the faces of friends and exes.

The uproar prompted Reddit, Twitter and other sites, including Pornhub, Discord and Gfycat, to ban the offending content and the discussion groups that had formed around deepfakes.

The bans haven’t stopped the technology. The program can be downloaded from a site called FakeApp, while a website called the Deepfake Society, which curates the best of those videos, has drawn more than 1 million views since it launched in February. That site doesn’t allow pornography but has videos like one showing Trump and North Korean leader Kim Jong Un as each other.

Reached through the contact box on the site, a man who said he was from Los Angeles called back but declined to give his name, fearing the stigma surrounding deepfake pornography could jeopardize his web programming job. He said he is a conservative Republican and started the site because he sees deepfakes as entertaining, especially one depicting Trump as the bully Biff Tannen in “Back to the Future Part II.”

But the malicious implications are “absolutely terrifying,” he said. “You can put any politician doing anything anywhere. Even if it is fake and it gets out, it’s going to ruin somebody. Most people don’t see a report and go out and do their own research. They just take it at face value.”

Sven Charleer, a computer science researcher at KU Leuven university in Belgium, said critics are overreacting to the technology, which can also be used for good purposes. To demonstrate, Charleer lovingly swapped in his wife Elke’s face to replace actress Anne Hathaway. On his blog, he’s posted clips showing his wife on “The Tonight Show Starring Jimmy Fallon” and in “Get Smart” with Steve Carell.

“We’re going to see some amazing things with this technology,” Charleer said. “People just have to be less gullible and more critical about things.”

Nevertheless, deepfakes raise issues that might require a change in laws, said Andrew Keen, a former Silicon Valley entrepreneur who has become a self-described technology skeptic.

“There’s going to have to be new ways of thinking about freedom of speech and what you can and cannot do,” said Keen, author of “How to Fix the Future,” published last month.

“This is a much more profound kind of identity theft,” he said. “At what point do we own our own image? Do I have a right to sue someone if they steal my image and present me in a way as someone I’m not, like a porn star or a dog?”

David Greene, Electronic Frontier Foundation senior staff attorney, said there was “nothing inherently illegal” about deepfake technology. Existing laws could cover problems such as “creating non-consensual pornography and false accounts of events,” but writing new laws could threaten “beneficial and benign uses” such as political commentary and parody, he said.

Melanie Howard, an advanced media and technology lawyer for Loeb & Loeb LLP, said legal reform wasn’t enough and suggested that technologists develop “countermeasures to expose forgeries and fakes in these forms of media.”

But the EFF’s Eckersley called such technological solutions “a total pipe dream” that would, for example, require modifying every video camera and smartphone to provide evidence of where and when raw videos were recorded.

“There’s not going to be a magic shortcut for testing to see if video is real or audio is real,” he said. “There’s no question that it’s going to be hard to learn to tell the difference between things that are completely true, things that are mythical and things that are in the strange territory in between.”

FakeApp: A screenshot from a tutorial on FakeApp.com, a program that can scan videos and still photos of one person and paint that person’s features onto another person in a separate video.
