GOOGLE DEVELOPING APP TO SPOT FAKE VIDEOS
It will automatically scan for manipulated pictures or videos, and allow users to report apparent fakes
In an era replete with fake news, you might expect video evidence to provide a clearer picture of the truth. You’d be wrong, said Google engineer Supasorn Suwajanakorn, who has developed a tool that, fed the right input, can create a realistic fake video mimicking the way a person talks, observing footage of their mouth and teeth to produce a perfect lip-sync.
Like any technology, it has great potential for good and mischief. Suwajanakorn is working with the AI Foundation on a “Reality Defender” app that would run automatically in web browsers to spot and flag fake pictures or videos.
“I let a computer watch 14 hours of Obama video, and synthesised him talking,” Suwajanakorn said while sharing his work at the TED Conference here on Wednesday.
Such technology could be used to create virtual versions of those who have died — grandparents could be asked for advice, actors could return to the screen, great teachers could give lessons, and authors could read their works aloud.
He noted the New Dimensions in Testimony project, which allows people to talk with holograms of Holocaust survivors.
“These results are intriguing, but, at the same time, troubling. It concerns me, the potential for misuse. So, I am also working on counter-measure technology to detect fake images and video.”
Such a concept has long been the stuff of science fiction, portrayed in films like The Matrix and, more recently, Altered Carbon.
He worried, for example, that war could be triggered by a bogus video of a world leader announcing a nuclear strike.
“Reality Defender” will scan for manipulated pictures or videos, and will also allow users to report fakes, harnessing the power of the crowd to bolster the defence.
While writing fake news might be cheap and easy, it was tough to manipulate video without leaving traces. Videos, by design, were streams of thousands of images, each of which would have to be perfected in a fake, he said.
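The intuition above — that tampering with some of a video's thousands of frames tends to leave statistical inconsistencies between neighbouring frames — can be illustrated with a toy sketch. This is purely illustrative: Reality Defender's actual methods are not described in the article, and real detectors use learned models rather than the simple pixel-difference heuristic assumed here.

```python
# Illustrative sketch only: the article does not describe Reality Defender's
# internals. Toy idea: a tampered frame usually changes more, relative to its
# neighbours, than the smooth frame-to-frame motion of genuine footage.

def frame_diff(a, b):
    """Mean absolute pixel difference between two frames (flat pixel lists)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def flag_suspect_frames(frames, threshold=3.0):
    """Flag frame indices whose change from the previous frame is an outlier.

    A frame is suspect if its difference from the previous frame exceeds
    `threshold` times the median frame-to-frame difference.
    """
    diffs = [frame_diff(frames[i - 1], frames[i]) for i in range(1, len(frames))]
    median = sorted(diffs)[len(diffs) // 2]
    return [i + 1 for i, d in enumerate(diffs) if median and d > threshold * median]

# Toy "video" of 7 four-pixel frames: smooth motion, with frame 3 abruptly altered.
frames = [[10] * 4, [11] * 4, [12] * 4,
          [90, 12, 90, 12],               # the tampered frame
          [14] * 4, [15] * 4, [16] * 4]
print(flag_suspect_frames(frames))        # prints [3, 4]
```

Note that the frame after the tampered one is flagged too, since it also differs sharply from its (altered) predecessor — a reminder that even a crude consistency check sees a manipulated frame from both sides.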
“There is a long way to go before we can effectively model people,” said Suwajanakorn, whose work in the field stems from his time as a student at the University of Washington.
“We don’t want it to be in the wrong hands.”