HOW DANGEROUS ARE DEEPFAKES?
Manipulated videos hold the potential to start wars and swing elections, write Ellie Zolfagharifard and Laurence Dodds in San Francisco
President Donald Trump straightens his tie, glares into the camera and takes a deep breath. “We will strike back against Russia with our full military force,” he says slowly, puffing out his chest. “As of today, we are at war.” Almost instantly, the video is shared on thousands of Twitter feeds, WhatsApp groups and Facebook pages, causing mass panic and confusion.
Within minutes, it is outed as a deepfake: an AI-generated clip created by a group of hackers who have also infiltrated America’s power networks to cause chaos in schools, hospitals and on roads. But it’s too late. By now, millions have heard the news that Trump is waging war following attacks on US critical infrastructure.
This may seem like an outlandish scenario, but it’s what experts fear could happen if the technology behind deepfakes is used for nefarious purposes.
“A video like that, even if it was fake, could go viral within seconds,” says Nina Schick, author of Deep Fakes and the Infocalypse. “Such a video can do an immense amount of damage. There’s no question about it. If Russia wanted to create a convincing deepfake video of Trump saying he’s at war, they could do it right now.”
Until recently, the manipulation of digital media to show deepfakes was mostly confined to academic research labs and to the ever-innovative world of online pornography. There were also eye-catching stunts designed to demonstrate the potential for harm, such as Get Out director Jordan Peele’s memorable 2018 imitation of Barack Obama. Back then, the risk was only theoretical. Now, however, deepfakes are loose – and already creating chaos.
While they have yet to start a global conflict, AI-generated videos, faces and voices have caused political scandal in Malaysia, swindled large sums of money from corporate executives and helped trigger an attempted coup in Gabon. “Technology has allowed for information operations to become far more potent,” says Schick. “Until now, the barrier to entry when it came to manipulation in film has been relatively high. AI has changed that.”
In the first six months of this year, deepfake detection firm Deeptrace said the number of manipulated videos it was spotting in the wild had doubled. Only last month, Facebook announced that it had shut down a new attempt by Russia’s infamous Internet Research Agency to meddle in US and UK politics via a radical news website called PeaceData. Its “editors” appeared to be static deepfakes that used AI-generated photos.
“AI-generated faces are getting more common in disinformation operations, and I suspect they’ll keep on coming,” says Ben Nimmo, head of investigations at Graphika, who helped uncover the PeaceData operation.
Deepfake pictures are even easier to create than videos; Daily Telegraph readers can make their own at ThisPersonDoesNotExist.com. Yet they are still effective (and creepy) because, unlike stock photos, they have no prior existence, making them just as unique as any human face.
Similar photos have been used by a fake LinkedIn profile that befriended Washington DC insiders, potentially as part of a foreign spying campaign, and by a network of fake Facebook accounts allegedly run by the Epoch Times, an online news company with links to the Chinese Falun Gong sect. Meanwhile, deepfakes are prospering as commercial tools, with several firms hawking binders full of AI-generated faces that can add instant racial or gender diversity to corporate brochures and adverts.
Strangest of all, they have become a common joke format for Generation Z. Frivolous deepfakes have exploded on TikTok, letting video creators augment their impressions of, say, Jim Carrey or Al Pacino’s performance in Scarface. “For as little as $20 [£15], you can use an online marketplace to get somebody to make any deepfake video for you, and we’re starting to see more YouTubers who are using software that’s freely available and open source to make their own manipulated videos,” says Schick. Last month, Philip Tully, a data scientist at security company FireEye, generated a hoax Tom Hanks image that looked almost exactly like the real thing. All it took was a few hundred images of Hanks and less than £75 spent on online face-generation software.
Experts describe such efforts as “cheap fakes”: media that has been altered without advanced AI. “They can still be harmful,” says Victor Riparbelli, chief executive of London-based Synthesia, one of the world’s most advanced deepfake companies. His team is working with businesses such as WPP to create corporate training videos for their global branches. The videos use deepfake technology to allow the presenter to speak in any language and address the viewer by name.
Anyone can try the technology for themselves by typing a script for a virtual presenter to read. The results can be unnerving. Riparbelli says his main competitors are major tech companies. TikTok’s parent company, ByteDance, for instance, has developed its own unreleased deepfake generator called Face Swap, traces of which still existed in TikTok’s code at the start of 2020. The likes of Snapchat have created similar features, albeit more limited. Start-ups, such as Ukraine’s RefaceAI, are catching up. Its Reface app uses generative adversarial networks, which pit two neural networks (a generator and a discriminator) against each other in a process that endlessly corrects and refines itself.
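RefaceAI’s models are proprietary, but the adversarial push-and-pull described above can be shrunk to a toy example. The sketch below (purely illustrative, not anyone’s production code) pits a one-line “generator” against a logistic “discriminator” on one-dimensional numbers; real deepfake systems apply the same contest, scaled up to deep convolutional networks over pixels.

```python
import numpy as np

# Illustrative only: a "generator" (an affine map on noise) tries to
# mimic "real" data centred on 4.0, while a logistic "discriminator"
# tries to tell genuine samples from generated ones. Each network's
# corrections drive the other's, which is the GAN training loop.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60, 60)))

g_w, g_b = 1.0, 0.0    # generator: fake = g_w * noise + g_b
d_w, d_b = 0.1, 0.0    # discriminator: D(x) = sigmoid(d_w * x + d_b)
lr = 0.03
history = []

for _ in range(4000):
    real = rng.normal(4.0, 0.5, size=64)   # stand-in for genuine media
    z = rng.normal(size=64)
    fake = g_w * z + g_b

    # Discriminator step: raise D on real samples, lower it on fakes
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    d_w -= lr * (np.mean((p_real - 1) * real) + np.mean(p_fake * fake))
    d_b -= lr * (np.mean(p_real - 1) + np.mean(p_fake))

    # Generator step: move fakes toward whatever fools the updated
    # discriminator (non-saturating loss: maximise log D(fake))
    p_fake = sigmoid(d_w * fake + d_b)
    g_w -= lr * np.mean((p_fake - 1) * d_w * z)
    g_b -= lr * np.mean((p_fake - 1) * d_w)

    history.append(g_b)

# Over training, the generator's output drifts from 0 toward the
# real data's mean of roughly 4.0
avg_recent_mean = float(np.mean(history[-2000:]))
```

The two parameters here play the roles of millions of network weights in a real system; the “endless correction” Schick’s interviewees describe is exactly this loop, where neither side can stop improving without being exploited by the other.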
“It’s naive to think that such technologies by private companies won’t be used for malign purposes,” says Schick. “It can be used for good, such as in commercial applications, but it absolutely will be weaponised.”
Riparbelli says deepfakes will inevitably fall into the hands of criminals, but fully realistic ones are still a long way off – and that may be one way to fight against their rise.
“There’s quite a lot of technical barriers to change what someone says in the video. One is the voice; cloning it is still really, really difficult to do. If I change the speech in a video that’s already been recorded, the body language is going to be out of touch, the head movements are going to be out of touch.”
Several tools have been developed to pick up these quirks ahead of the 2020 presidential election. Microsoft, for instance, recently announced a system that analyses videos and photos and provides a score indicating the chance that they have been manipulated. Adobe has also developed a tool that allows creators to attach attribution data to content to prove it isn’t fake. The bigger problem, however, may not be the realism of deepfakes but people’s propensity to believe what they want to believe.
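Adobe has not published its tool’s internals, but the core idea behind content attribution can be sketched with standard cryptographic primitives: bind the creator’s details to a fingerprint of the exact media bytes and sign the result, so that any later manipulation breaks the seal. Every name, key and byte string below is hypothetical, chosen only to make the sketch runnable.

```python
import hashlib
import hmac
import json

# Hypothetical key for the sketch; real provenance systems use
# public-key certificates, not a shared secret.
SECRET_KEY = b"creator-signing-key"

def attach_attribution(media: bytes, creator: str) -> dict:
    """Build an attribution record tied to the media's exact bytes."""
    record = {"creator": creator,
              "media_sha256": hashlib.sha256(media).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload,
                                   hashlib.sha256).hexdigest()
    return record

def verify_attribution(media: bytes, record: dict) -> bool:
    """True only if the record is untampered AND matches these bytes."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    good_sig = hmac.compare_digest(
        record["signature"],
        hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest())
    untouched = claimed["media_sha256"] == hashlib.sha256(media).hexdigest()
    return good_sig and untouched

original = b"\x89PNG...raw image bytes..."
record = attach_attribution(original, "Newsroom Photo Desk")
tampered = original.replace(b"PNG", b"GIF")  # a single edit breaks the seal
```

Production schemes layer public-key signatures and certificate chains on top of this, so anyone can verify without holding a secret, but the verification logic is analogous: a deepfake derived from the file no longer matches the signed fingerprint.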
“Ultimately, this isn’t actually a problem about technology … We know that misinformation has been around since time immemorial,” says Schick. “It’s really a human problem.”
It may be flawed, but the age of deepfakes has well and truly arrived. The technology already has the potential to swing elections, trigger wars and aid criminals. It is adding to an overload of disinformation that is sowing chaos both online and offline.
As Schick puts it: “We are facing a danger of world-changing proportions… and we’re not ready.”