Global scramble to tackle deepfakes
AI redefines proverb ‘seeing is believing’, write Anuj Chopra and Saladin Salem
Chatbots spouting falsehoods, face-swapping apps crafting porn videos and cloned voices defrauding companies of millions — the scramble is on to rein in Artificial Intelligence (AI) deepfakes that have become a misinformation super spreader.
AI is redefining the proverb “seeing is believing”, with a deluge of images created out of thin air and people shown mouthing things they never said in real-looking deepfakes that have eroded online trust.
“Yikes. (Definitely) not me,” tweeted billionaire Elon Musk last year in one vivid example of a deepfake video that showed him promoting a cryptocurrency scam.
China recently adopted expansive rules to regulate deepfakes but most countries appear to be struggling to keep up with the fast-evolving technology amid concerns that regulation could stymie innovation or be misused to curtail free speech.
Experts warn that deepfake detectors are vastly outpaced by creators, who are hard to catch as they operate anonymously using AI-based software that once demanded specialised skills but is now widely available at low cost.
Facebook owner Meta last year said it took down a deepfake video of Ukrainian President Volodymyr Zelensky urging citizens to lay down their weapons and surrender to Russia.
And British campaigner Kate Isaacs, 30, said her “heart sank” when her face appeared in a deepfake porn video that unleashed a barrage of online abuse after an unknown user posted it on Twitter.
“I remember just feeling like this video was going to go everywhere — it was horrendous,” Ms Isaacs, who campaigns against non-consensual porn, was quoted as saying by the BBC in October.
The following month, the British government voiced concern about deepfakes and warned of a popular website that “virtually strips women naked”.
With no barriers to creating AI-synthesised text, audio and video, the potential for misuse in identity theft, financial fraud and reputational damage has sparked global alarm.
The Eurasia Group called the AI tools “weapons of mass disruption”.
“Technological advances in artificial intelligence will erode social trust, empower demagogues and authoritarians, and disrupt businesses and markets,” the group warned in a report.
“Advances in deepfakes, facial recognition, and voice synthesis software will render control over one’s likeness a relic of the past.”
This week AI startup ElevenLabs admitted that its voice cloning tool could be misused for “malicious purposes” after users posted a deepfake audio purporting to be actor Emma Watson reading Adolf Hitler’s manifesto Mein Kampf.
The growing volume of deepfakes may lead to what the European law enforcement agency Europol described as an “information apocalypse”, a scenario where many people are unable to distinguish fact from fiction.
“Experts fear this may lead to a situation where citizens no longer have a shared reality or could create societal confusion about which information sources are reliable,” Europol said in a report.
That was demonstrated last weekend when NFL player Damar Hamlin spoke to his fans in a video for the first time since he suffered a cardiac arrest during a game.
Hamlin thanked medical professionals responsible for his recovery, but many who believed conspiracy theories that the Covid-19 vaccine was behind his on-field collapse baselessly labelled his video a deepfake.