New Straits Times

GLOBAL SCRAMBLE TO TACKLE DEEPFAKES

Growing volume of deepfakes may lead to ‘information apocalypse’

- WASHINGTON

CHATBOTS spouting falsehoods, face-swapping apps crafting porn videos and cloned voices defrauding companies of millions — the scramble is on to rein in artificial intelligence (AI) deepfakes that have become a misinformation super spreader.

AI is redefining the proverb “seeing is believing”, with a deluge of images created out of thin air and people shown mouthing things they never said in real-looking deepfakes that have eroded online trust.

“Yikes. (Definitely) not me,” tweeted billionaire Elon Musk last year in one vivid example of a deepfake video that showed him promoting a cryptocurrency scam.

China recently adopted expansive rules to regulate deepfakes, but most countries appear to be struggling to keep up with the fast-evolving technology amid concerns that regulation could stymie innovation or be misused to curtail free speech.

Experts warn that deepfake detectors are vastly outpaced by creators, who are hard to catch as they operate anonymously using AI-based software that was once touted as a specialised skill, but is now widely available at low cost.

Facebook owner Meta last year said it took down a deepfake video of Ukrainian President Volodymyr Zelenskyy urging citizens to lay down their weapons and surrender to Russia.

British campaigner Kate Isaacs, 30, said her “heart sank” when her face appeared in a deepfake porn video that unleashed a barrage of online abuse after an unknown user posted it on Twitter.

“I remember just feeling like this video was going to go everywhere. It was horrendous,” Isaacs, who campaigns against non-consensual porn, was quoted as saying by the BBC in October.

The next month, the British government voiced concern about deepfakes and warned of a popular website that “virtually strips women naked”.

With no barriers to creating AI-synthesised text, audio and video, the potential for misuse in identity theft, financial fraud and reputational damage has sparked global alarm.

The Eurasia Group called the AI tools “weapons of mass disruption”.

“Technological advances in AI will erode social trust, empower demagogues and authoritarians, and disrupt businesses and markets,” the group warned in a report.

“Advances in deepfakes, facial recognition, and voice synthesis software will render control over one’s likeness a relic of the past.”

This week, AI startup ElevenLabs admitted its voice cloning tool could be misused for “malicious purposes” after users posted a deepfake audio purporting to be actor Emma Watson reading Adolf Hitler’s Mein Kampf.

The growing volume of deepfakes may lead to what the European law enforcement agency Europol described as an “information apocalypse”, a scenario where many people are unable to distinguish fact from fiction.

“Experts fear this may lead to a situation where citizens no longer have a shared reality, or could create societal confusion about which information sources are reliable,” Europol said in a report.

That was demonstrated when NFL player Damar Hamlin spoke to his fans in a video for the first time since he suffered a cardiac arrest during a match.

Hamlin thanked medical professionals responsible for his recovery, but many who believed conspiracy theories that the Covid-19 vaccine was behind his on-field collapse baselessly labelled his video a deepfake.

China enforced new rules last month that require businesses offering deepfake services to obtain the real identities of their users. The rules also require deepfake content to be appropriately tagged to avoid “any confusion”.

The rules came after the Chinese government warned that deepfakes present a “danger to national security and social stability”.

In the US, where lawmakers have pushed for a task force to police deepfakes, digital rights activists caution against legislative overreach that could kill innovation or target legitimate content.

The European Union, meanwhile, is locked in heated discussions over its proposed “AI Act”.

The law, which the EU is racing to pass this year, will require users to disclose deepfakes, but many fear the legislation could prove toothless if it does not cover creative or satirical content.

“How do you reinstate digital trust with transparency? That is the real question right now,” said Jason Davis, a research professor at Syracuse University.

“The (detection) tools are coming and they’re coming relatively quickly. But the technology is moving perhaps even quicker. So like cybersecurity, we will never solve this, we will only hope to keep up.”

Many are already struggling to comprehend advances such as ChatGPT, a chatbot created by the United States-based OpenAI that can generate strikingly cogent texts on almost any topic.

In a study, media watchdog NewsGuard, which called it the “next great misinformation super spreader”, said most of the chatbot’s responses to prompts related to topics such as Covid-19 and school shootings were “eloquent, false and misleading”.

“The results confirm fears... about how the tool can be weaponised in the wrong hands,” NewsGuard said.

Photos: Emma Watson; Elon Musk
