Scam alert: Deepfake tech
Deepfake criminals are ready for business, writes Deborah Tarrant.
According to insurer Euler Hermes, the CEO of a United Kingdom-based energy firm transferred more than €200,000 (about $325,000) last year to a “Hungarian supplier” after a phone call from the chief executive of its German parent company. Except it wasn’t his boss. It’s believed the scammers used synthetic voice audio, generated with deep-learning algorithms and voice cloning, to mimic the boss’s voice.
Deepfake technology allows scammers to steal or extort large sums of money, potentially manipulate markets and trash the reputations of leaders and brands. In 2019, global cybersecurity company Symantec reported three instances where criminals used deepfake voice technology to mimic CEOs and direct senior finance executives to transfer millions, pronto. In one case, interactive audio was used to trick a company out of US$10 million.
Deepfake-style technology was once Hollywood’s domain – it was used in 2018’s Solo: A Star Wars Story. Now, with the help of hobbyists, it’s easy and cheap to produce, and is offered as a commodity for as little as US$1 a go or US$20 (about $30) a month. Deepfake videos proliferate across the internet, from satirical take-offs to fake news, with Donald Trump, Mark Zuckerberg and Tesla CEO Elon Musk among the targets.
“It’s a new world of malicious activity,” says Henry Ajder, head of communications and research at Deeptrace, a Dutch startup founded in 2018 that receives pleas daily from large organisations and governments wanting to fend off fakery.
Pornography dominates the space – about 96 per cent of deepfakes, according to Deeptrace’s State of Deepfakes report from September. Beyond smearing a CEO’s reputation, deepfake porn can be used to blackmail less senior company staff into divulging confidential information.
Awareness training can only go so far in identifying the fraud, because experts predict that the anomalies the human ear can detect will be eliminated as the tech gets more sophisticated. However, Saurabh Shintre, senior researcher at NortonLifeLock Research Group, formerly Symantec Research Labs, notes scammers use the familiar phishing tactic of time or social pressure to convince people to transfer funds urgently. So “taking a deep breath while you consider if the caller’s request is plausible” can be a good start, he says. Ajder recommends listening for a wobbly voice where the intonation and cadence are off, and asking contextual questions that only the genuine caller could answer.
Signs of a deepfake video include glitching or “noise patterns”, where pixelation appears, particularly around the hair, ears and under the chin (where normally there’d be shadow). Early tell-tale issues with blinking have since been resolved.
Facebook, Google and cyber-defence companies are working on solutions to what some call the internet’s latest step in eroding societal trust. Deeptrace is developing media authentication for videos that can be used manually as an app or integrated via an API.
“Deepfake voice is relatively new and we don’t have a big database of true voices to compare,” says Wanlei Zhou, head of the school of computer science at the University of Technology Sydney, but he predicts deepfake video filters will become available at network gateways as the tech matures. “The fakes will be filtered out automatically, just as we filter emails today.”