San Francisco Chronicle

Governments join in as fake videos grow

Doctored clips seen as posing threat to business, democracy and social order

- By Melia Russell

In a video released by Xinhua, China’s state-run press agency, a young man with a shock of brown hair and rimless glasses made his debut Thursday as the newest member of the team.

“Hello, everyone,” the anchor said, before introducing himself as an English-speaking digital composite modeled on the looks and voice of a real Xinhua host.

He is the world’s first artificial intelligence anchorman, developed by Xinhua and Sogou, a Chinese search engine. The edited video looks authentic; the content, benign.

Halfway around the world, the White House press secretary shared a video showing CNN’s Jim Acosta struggling with a White House intern to hold onto a microphone during a tense exchange with President Trump.

Fact-checkers and other experts say the video, which was first shared by Paul Joseph Watson, a conspiracy theorist associated with the far-right website InfoWars, was sped up to make it look like Acosta chopped the woman’s arm with his hand. Other versions of the video, believed to be authentic, showed him slowly raising his hand, appearing to gesture to the president. The White House pulled Acosta’s press pass Wednesday.

Governments have long manipulated images and released propaganda films — think of Joseph Stalin’s habit of “disappearing” political opponents from Soviet photographs. But the week’s events highlight how Silicon Valley technology is accelerating the blurring of reality and fiction. “Deepfakes,” highly realistic altered images created by artificial intelligence, originated in the world of porn and could soon spread to other realms.

Some observers worry that such videos present a real danger for business, if a clip of a CEO saying outrageous things were released by a short seller; for democracy, if politicians publish fictitious videos about their opponents; and for society at large.

“When you see video, you still think that you are peering into reality,” David Ryan Polgar, a tech ethicist, said. “The struggle now is that we are blurring the lines between reality and fiction. That’s extremely dangerous for our notions of truth, what happened and what didn’t.”

It used to be that creating realistic fake videos required a lot of software knowledge and computer hardware. Then came the democratization of fake video.

In 2017, an anonymous Reddit user, who went by the screen name “deepfakesapp,” created a program that could scan videos and still photos of one person and paint that person’s features onto another person in a separate video. The tool was free, readily available and accompanied by instructions for people without computer science degrees.

Now social media spreads fake videos at warp speed. One video appears to show “Wonder Woman” star Gal Gadot performing in a pornographic scene. Another depicts what a love child of Trump and German Chancellor Angela Merkel might look like. The Xinhua broadcasts use more or less the same techniques.

The ability to produce such realistic videos represents a triumph of computer science. It demonstrates the leaps researchers have made in deep neural networks, a set of algorithms modeled loosely on the human brain and taught to recognize patterns. The videos have become increasingly convincing. Fighting them has required its own sophisticated computer work.

This year, three computer science researchers from the State University of New York at Albany found a flaw in many of these videos. Deepfake algorithms don’t typically train on photos or videos in which people have their eyes closed, so the videos they generate don’t show people blinking. Siwei Lyu, an associate professor of computer science, said he and his team designed an artificial intelligence that could detect the absence of blinking in faked videos with 95 percent accuracy.
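The core idea — a face that never blinks over a long stretch of video is suspect — can be illustrated with a simplified sketch. This is not Lyu’s actual system, which analyzes eye regions with deep networks; here we assume per-frame “eye openness” scores are already available, and the threshold and timing values are illustrative assumptions.

```python
# Simplified sketch of the blink-absence heuristic: flag a clip if the
# eyes stay open for an implausibly long run of frames. All numeric
# values below are assumptions for illustration, not Lyu's parameters.

BLINK_THRESHOLD = 0.2            # openness below this counts as a blink
MAX_SECONDS_WITHOUT_BLINK = 10   # people typically blink every few seconds

def looks_suspicious(openness_per_frame, fps=30):
    """Return True if no blink appears within a plausible time window."""
    max_gap = MAX_SECONDS_WITHOUT_BLINK * fps
    frames_since_blink = 0
    for openness in openness_per_frame:
        if openness < BLINK_THRESHOLD:   # eyes closed: a blink occurred
            frames_since_blink = 0
        else:
            frames_since_blink += 1
            if frames_since_blink > max_gap:
                return True              # unnaturally long open-eyed stretch
    return False

# A real clip dips below the threshold now and then; a naive deepfake may not.
real_clip = ([0.3] * 60 + [0.1] * 3) * 10   # a blink roughly every 2 seconds
fake_clip = [0.3] * 600                     # 20 seconds, eyes never close
```

As the arms race described below shows, this heuristic is brittle: once forgers train on closed-eye images too, the generated faces blink and the signal disappears.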

The team published its findings in June. Less than three weeks later, a group of anonymous software developers wrote to Lyu saying his tactic had backfired. They now understood the need to use photos of people with eyes open and shut.

“Once they notice that you have a technique to detect the fake video, they will improve their methods to circumvent that detection,” Lyu said.

Ultimately, technology must fight the very problem it created, according to Polgar. Big tech companies should offer their vast stores of imagery and the algorithms they use to help detect fakes, and doctored videos should be banned and taken down when identified, he said.

Y Combinator, a San Francisco tech investment group that offers money and mentorship to early-stage startups, announced in March that it is looking to fund startups that could solve the problem of fake video.

“The tech to create doctored videos that are indistinguishable from reality now exists, and soon it will be widely available to anyone with a smartphone,” the startup incubator said. “We are interested in funding tech that will equip the public with the tools they need to identify fake video and audio.”

The month before, Sam Altman, the organizati­on’s president, had tweeted about being fooled by a fake video:

“Today was the first day I fell for an AI-generated fake video with major geopolitical implications. Luckily the people who showed it to (me) held my phone while I was watching it. But whoa. The world is gonna get weird.”

