New Straits Times

HERE COME THE FAKE VIDEOS

A community of hobbyists has begun experimenting with more powerful tools to create realistic face swaps and leave few traces of manipulation, writes

- KEVIN ROOSE

THE scene opened on a room with a red sofa, a potted plant and the kind of bland modern art you’d see on a therapist’s wall.

In the room was Michelle Obama, or someone who looked exactly like her.

Wearing a low-cut top with a black bra visible underneath, she posed for the camera and flashed her unmistakable smile.

The video, which appeared on the online forum Reddit, was what’s known as a “deepfake” — an ultra-realistic fake video made with artificial intelligence software.

It was created using a programme called FakeApp, which superimposed Obama’s face onto the body of a pornographic film actress.

The hybrid was uncanny — if you didn’t know better, you might have thought it was really her.

Until recently, realistic computer-generated video was a laborious pursuit available only to big-budget Hollywood productions or cutting-edge researchers.

But, in recent months, a community of hobbyists has begun experimenting with more powerful tools, including FakeApp — a programme that was built by an anonymous developer using open-source software written by Google.

FakeApp makes it free and relatively easy to create realistic face swaps and leave few traces of manipulation.

Since a version of the app appeared on Reddit in January, it has been downloaded more than 120,000 times, according to its creator.

Deepfakes are one of the newest forms of digital media manipulation, and one of the most obviously mischief-prone.

It’s not hard to imagine this technology being used to smear politicians, create counterfeit revenge porn or frame people for crimes.

Lawmakers have already begun to worry about how deepfakes could be used for political sabotage and propaganda.

Some users on Reddit defended deepfakes and blamed the media for over-hyping their potential for harm.

Others moved their videos to alternative platforms, rightly anticipating that Reddit would crack down under its rules against nonconsensual pornography.

And, a few expressed moral qualms about putting the technology into the world.

After lurking for several weeks in Reddit’s deepfake community, I decided to see how easy it was to create a (safe for work, nonpornographic) deepfake using my own face.

I downloaded FakeApp and enlisted two technical experts to help me — Mark McKeague, a colleague in The New York Times’s research and development department, and a deepfake creator, known only as Derpfakes.

Because of the controversial nature of deepfakes, Derpfakes would not give his real name.

What I learned is that making a deepfake isn’t simple.

But, it’s not rocket science, either. Picking the right source data is crucial.

Short video clips are easier to manipulate than long clips, and scenes shot at a single angle produce better results than scenes with multiple angles.

Genetics also help. The more the faces resemble each other, the better.

I’m a brown-haired white man with a short beard, so Mark and I decided to try several other brown-haired, stubbled white guys.

We started with Ryan Gosling. I also sent Derpfakes, my outsourced Reddit expert, several video options to choose from.

Next, we took several hundred photos of my face, and gathered images of Gosling’s face using a clip from a recent TV appearance.

FakeApp uses these images to train the deep learning model and teach it to emulate our facial expressions.

To get the broadest photo set possible, I twisted my head at different angles, making as many different faces as I could.

Mark then used a programme to crop those images down, isolating just our faces, and manually deleted blurred or badly cropped photos.

He then fed the frames into FakeApp. In all, we used 417 photos of me, and 1,113 of Gosling.

When the images were ready, Mark pressed “start” on FakeApp, and the training began.

His computer screen filled with images of my face and Gosling’s face, as the programme tried to identify patterns and similarities.

About eight hours later, after our model had been sufficiently trained, Mark used FakeApp to finish putting my face on Gosling’s body.
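The training-then-swapping step described above is commonly explained as an autoencoder with one shared encoder and a separate decoder for each face: the encoder learns features common to both people, and the swap comes from decoding one person’s features with the other person’s decoder. The sketch below is only a toy illustration of that idea, using tiny linear networks and random arrays in place of the cropped photos — none of the names, sizes or maths here come from FakeApp itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two sets of cropped face photos, flattened to
# 64-pixel vectors (417 of the author, 1,113 of Gosling, as in the article).
faces_a = rng.normal(size=(417, 64))
faces_b = rng.normal(size=(1113, 64))

dim, latent = 64, 16
enc = rng.normal(scale=0.1, size=(dim, latent))    # shared encoder
dec_a = rng.normal(scale=0.1, size=(latent, dim))  # decoder for face A
dec_b = rng.normal(scale=0.1, size=(latent, dim))  # decoder for face B

def train_step(x, dec, lr=0.01):
    """One gradient step on mean-squared reconstruction error."""
    global enc
    z = x @ enc                           # encode the faces
    err = z @ dec - x                     # reconstruction error
    g_dec = z.T @ err / len(x)            # gradient w.r.t. this face's decoder
    g_enc = x.T @ (err @ dec.T) / len(x)  # gradient w.r.t. the shared encoder
    dec -= lr * g_dec                     # updates happen in place
    enc -= lr * g_enc
    return float((err ** 2).mean())

# Reconstruction loss for face A before any training, for comparison.
loss0 = float(((faces_a @ enc @ dec_a - faces_a) ** 2).mean())

# Alternate between the two faces, so the shared encoder is forced to learn
# features common to both while each decoder specialises in one identity.
for _ in range(500):
    loss_a = train_step(faces_a, dec_a)
    loss_b = train_step(faces_b, dec_b)

# The swap itself: encode face B, then reconstruct with face A's decoder.
swapped = faces_b @ enc @ dec_a
```

In a real deepfake tool the encoder and decoders are deep convolutional networks trained for hours on a GPU — which is roughly why the eight-hour run described above was needed — but the shared-encoder, per-identity-decoder structure is the core trick.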

The video was blurry and bizarre, and Gosling’s face occasionally flickered into view.

Only the legally blind would mistake the person in the video for me.

After the experiment, I reached out to the anonymous creator of FakeApp through an email address on its website.

I wanted to know how it felt to create a cutting-edge AI tool, only to have it gleefully co-opted by ethically challenged pornographers.

A man wrote back, identifying himself as a software developer in Maryland.

Like Derpfakes, the man would not give me his full name, and instead went by his first initial, N.

He said he had created FakeApp as a creative experiment and was chagrined to see Reddit’s deepfake community use it for ill.

N said he didn’t support the use of FakeApp to create nonconsensual pornography or other abusive content. And, he said he agreed with Reddit’s decision to ban explicit deepfakes.

But, he defended the product. On the day of the school shooting last month in Parkland, Florida, a screenshot of a BuzzFeed News article, “Why We Need to Take Away White People’s Guns Now More Than Ever”, written by a reporter named Richie Horowitz, began making the rounds on social media.

The whole thing was fake. Richie Horowitz did not exist, and no article with that title was ever published on the site.

But, the doctored image pulsed through right-wing outrage channels and was boosted by activists on Twitter. It wasn’t an AI-generated deepfake, or even a particularly sophisticated Photoshop job, but it did the trick.

Online misinformation, no matter how sleekly produced, spreads through a familiar process once it enters our social distribution channels. The hoax gets 50,000 shares, and the debunking an hour later gets 200.

The carnival barker gets an algorithmic boost on services like Facebook and YouTube, while the expert screams into the void.

There’s no reason to believe that deepfake videos will operate any differently. People will share them when they’re ideologically convenient and dismiss them when they’re not.

The dupes who fall for satirical stories from The Onion will be fooled by deepfakes, and the scrupulous people who care about the truth will find ways to detect and debunk them.


NYT PIC: A handout photo of a screenshot from FakeApp, a programme that makes it free and relatively easy to create deepfakes — an ultra-realistic fake video made with artificial intelligence software. Deepfakes are one of the newest forms of digital media...
