Texarkana Gazette

Seeing shouldn’t always be believing as faked videos become more realistic

By David Pierson

LOS ANGELES—All it takes is a single selfie.

From that static image, an algorithm can quickly create a moving, lifelike avatar: a video not recorded, but fabricated from whole cloth by software.

With more time, Pinscreen, the Los Angeles startup behind the technology, believes its renderings will become so accurate they will defy reality.

“You won’t be able to tell,” said Hao Li, a leading researcher on computer-generated video at the University of Southern California who founded Pinscreen in 2015. “With further deep-learning advancements, especially on mobile devices, we’ll be able to produce completely photoreal avatars in real time.”

The technology is a triumph of computer science that highlights the gains researchers have made in deep neural networks, complex algorithms that loosely mimic the thinking of the human brain.

Similar breakthroughs in artificial intelligence allowed University of Washington researchers to move President Barack Obama’s mouth to match a made-up script and the chipmaker Nvidia to train computers to imagine what roads would look like in different weather.

What used to take a sophisticated Hollywood production company weeks could soon be accomplished in seconds by anyone with a smartphone.

Not available for a video chat? Use your lifelike avatar as a stand-in. Want to insert yourself into a virtual reality game? Upload your picture and have the game render your character.

Those are the benign applications.

Now imagine a phony video of North Korean dictator Kim Jong Un announcing a missile strike. The White House would have mere minutes to determine whether the clip was genuine and whether it warranted a retaliatory strike.

What about video of a presidential candidate admitting to taking foreign cash? Even if proved fake, the damage could be irreversible.

In some corners of the internet, people are using open-source software to swap celebrities’ faces into pornographic videos, a phenomenon called deepfakes.

It’s not hard to imagine a world in which social media is awash with doctored videos targeting ordinary people to exact revenge, extort or simply troll.

In that scenario, where Twitter and Facebook are algorithmically flooded with hoaxes, no one could fully believe what they see. Truth, already diminished by Russia’s misinformation campaign and President Donald Trump’s proclivity to label uncomplimentary journalism “fake news,” would be more subjective than ever.

The consequences could be devastating for the notion of evidentiary video, long considered the paradigm of proof given the sophistication required to manipulate it.

“This goes far beyond ‘fake news’ because you are dealing with a medium, video, that we traditionally put a tremendous amount of weight on and trust in,” said David Ryan Polgar, a writer and self-described tech ethicist. “If you look back at what can now be considered the first viral video, it was the witnessing of Rodney King being assaulted that dramatically impacted public opinion. A video is visceral. It is also a medium that seems objective.”

To stop the spread of fake videos, Facebook, Google and Twitter would need to show they can make good on recent promises to police their platforms.

Last week’s indictment of more than a dozen Russian operatives and three Russian companies showed how easily bad actors can exploit the tech companies that dominate our access to information. Silicon Valley was blindsided by the spread of trolls, bots and propaganda—a problem that persists today.

Tech companies have a financial incentive to promote sensational content. And as platforms rather than media companies, they’ve fiercely defended their right to shirk editorial judgment.

Critics question whether Facebook, Google and Twitter are prepared to detect an onslaught of new technology like machine-generated video.

“Platforms are starting to take 2016-style misinformation seriously at some levels,” said Aviv Ovadya, chief technologist at the Center for Social Media Responsibility. “But doing things that scale is much harder.”

Fake video “will need to be addressed at a deeper technical infrastructure layer, which is a whole different type of ballgame,” Ovadya said.

(Facebook and Twitter did not respond to interview requests. Google declined to comment.)

The problem is that there isn’t much in the way of safeguards.

Hany Farid, a digital forensics expert at Dartmouth College who often consults for law enforcement, said watching for blood flow in the face can sometimes determine whether video is real. Slight imperfections on a pixel level can also reveal whether it is genuine.

Over time, though, Farid said, artificial intelligence will undermine those clues, perpetuating a cat-and-mouse game between algorithms and investigators.

“I’ve been working in this space for two decades and have known about the issue of manipulated video, but it’s never risen to the level where everyone panics,” Farid said. “But this machine-learning video has come out of nowhere and has taken a lot of us by surprise.”

Gina Ferazzi/Los Angeles Times/TNS ■ Hao Li, CEO of Pinscreen, shows another person’s face on his body through his app Pinscreen on Feb. 1 in Los Angeles. The company’s goal is to make lifelike avatars for gaming or communication, but in the wrong hands, the technology could easily be...
