USA TODAY US Edition

Doctored video has become new reality

Doctored video and audio could pose a threat to security

- Dalvin Brown

Access to tech that can create fake footage poses national security threat

It used to take a lot of time and expertise to realistically falsify videos. Not anymore.

For decades, authentic-looking video renderings were seen only in big-budget sci-fi films such as “Star Wars.” However, thanks to the rise of artificial intelligence, doctoring footage has become more accessible than ever, which researchers say poses a threat to national security.

“Until recently, we have largely been able to trust audio (and) video recordings,” said Hany Farid, professor of computer science at Dartmouth College. He said that advances in machine learning have democratized access to tools for creating sophisticated and compelling fake video and audio.

“It doesn’t take a stretch of the imagination to see how this can be weaponized to interfere with elections, to sow civil unrest or to perpetrate fraud,” Farid said.

With the 2020 presidential election looming and the U.S. defense agency worried about doctored videos misleading voters, lawmakers and educational institutions are racing to develop software that can spot and stop what’s known as deepfakes before they even hit the internet.

Deepfakes

Broad concern around video forgeries began in late 2017, when computer software was used to superimpose celebrities into pornography.

One of the best-known examples was created by director Jordan Peele’s production company in 2018. The video shows former President Barack Obama warning people not to believe everything they see on the internet.

However, it’s not Obama talking. It’s Peele ventriloquizing the former president.

Since then, the Department of Defense, through the Defense Advanced Research Projects Agency (DARPA), has been developing ways to detect when a video is a deepfake.

A spokesperson for the agency said in March that while many video manipulations are performed for fun, others are much more dangerous, as they can be used to spread propaganda and misinformation.

The organization is seeking to develop online flags and filters that stop manipulated content from being uploaded to the internet.

It takes only about 500 images or 10 seconds of video to create a realistic deepfake, according to Siwei Lyu, a researcher who is working with the Defense Department to develop software to detect and prevent the spread of deepfakes.

Lyu said that anyone who posts photos on social networking sites such as Instagram is at risk of being deepfaked.

Software solutions

The first piece of software Lyu and his team of researchers at the University at Albany introduced last year could spot a deepfake video in the blink of an eye, literally, by analyzing how often the simulated faces blink – or don’t.

“We discovered the subjects (in deepfake videos) do not blink very much, and sometimes not at all,” Lyu said. “We then asked why does this happen.”

The researchers found that the software used to make deepfakes often depends on photos available on the internet. There aren’t many photos available of high-profile people with their eyes closed, so the animated subjects in the fake videos don’t blink, Lyu said.
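The blink-counting idea can be sketched in a few lines. The snippet below uses the eye aspect ratio (EAR), a standard landmark-based measure that drops sharply when an eye closes; the landmark coordinates and threshold here are toy values for illustration, not the actual software Lyu’s team built.

```python
# Hypothetical sketch of blink-rate analysis on per-frame eye landmarks.
import math

def ear(eye):
    """Eye aspect ratio from six (x, y) landmarks p1..p6 around one eye."""
    d = math.dist
    p1, p2, p3, p4, p5, p6 = eye
    # Vertical openings over horizontal width; near zero when the eye is shut.
    return (d(p2, p6) + d(p3, p5)) / (2.0 * d(p1, p4))

def count_blinks(ear_series, threshold=0.2):
    """Count closed-eye episodes in a per-frame EAR series."""
    blinks, closed = 0, False
    for value in ear_series:
        if value < threshold and not closed:
            blinks += 1          # eye just closed: one new blink
            closed = True
        elif value >= threshold:
            closed = False       # eye reopened
    return blinks

# Toy per-frame EAR values: two dips below the threshold = two blinks.
series = [0.30, 0.31, 0.12, 0.10, 0.29, 0.30, 0.15, 0.28]
print(count_blinks(series))  # prints 2
```

An unusually low blink count over a long clip — real people blink roughly every few seconds — would flag the footage for closer inspection.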

As the makers of deepfakes began to catch wind of the new software, the researchers developed other methods to spot deepfakes, including algorithms that detect unnatural movements between faces and heads as well as software that analyzes footage for the loss of subtle details.

“The skin on the deepfake-generated face tends to be overly smooth, and some of the hair and teeth details will be lost,” Lyu said. “If you look at the teeth more closely, they look more like a whole white block rather than individual teeth.”
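One simple way to quantify that loss of fine detail is the variance of the Laplacian, a common sharpness measure: smooth, detail-poor regions score low. This is a toy sketch on small grayscale grids under that assumption, not the researchers’ actual detector.

```python
# Hypothetical detail-loss check: variance of a 4-neighbour Laplacian.
# Overly smooth regions (as on many deepfake faces) yield lower scores.

def laplacian_variance(img):
    """Variance of the discrete Laplacian over interior pixels of a 2D grid."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# Toy images: one with texture, one perfectly smooth gradient.
detailed = [[(x * 7 + y * 13) % 50 for x in range(8)] for y in range(8)]
smooth = [[x + y for x in range(8)] for y in range(8)]
print(laplacian_variance(detailed) > laplacian_variance(smooth))  # prints True
```

In practice such a score would be computed on the skin, hair and teeth regions of a face crop and compared against values typical of genuine footage.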

Researchers at the University of Washington also are experimenting with deepfake technology. In 2017, the school figured out how to turn audio clips into a lip-synced video of the person speaking those words.

Criminaliz­ation

Late last year, Sen. Ben Sasse, R-Neb., introduced a bill in Congress that would punish people for the malicious creation and distribution of deepfakes. The bill, which was introduced the day before the government shutdown, flew under the radar and died. But Sasse’s office plans to reintroduce it.

USA TODAY reached out to Sasse for more information.

The senator said in a recent interview with radio host Glenn Beck that the “perfect storm of deepfakes” is coming soon.

The state of New York introduced a bill in 2018 that would punish people who create digital videos of subjects without their consent.

Despite the concerns over these hypothetical dangers, abuse of deepfakes has yet to be seen outside of adult videos. The Verge published a report in March questioning whether face-swapping technology is even a major threat, given that it has been widely available for years.

Lyu said he doubts that deepfakes can start a war, and that they are unlikely to have a long-lasting effect on society as people become increasingly aware of the phenomenon.

Lyu suggested that people may even become desensitized to them.

Perception-altering technology was used in April to break down language barriers in a global malaria awareness campaign featuring David Beckham.

The charity Malaria No More posted a video on YouTube highlighting how it used deepfake tech to effectively lip-sync footage of Beckham with the voices of several other people.

To create the 55-second ad, the nonprofit used visual and voice-altering tech to make Beckham appear multilingual. His speech begins in English, then transitions to eight other languages through dubbing.

Today, we live in a world in which millions of real people follow computer-generated influencers on social media and don’t even know it.

One of the clearest examples is Lil Miquela, a digitally created “it girl” who has 1.5 million followers on Instagram and interacts with them via direct messages.

PHOTO BY WIN MCNAMEE/GETTY IMAGES; ILLUSTRATION BY LINDLEY TAYLOR: How can you tell when a video is a deepfake? One giveaway: The subject doesn’t blink very much.
AFP/GETTY IMAGES: Paul Scharre, author of “Army of None: Autonomous Weapons and the Future of War,” watches the manipulated video of President Barack Obama by filmmaker Jordan Peele, shown on screen at right.
