The Daily Telegraph

Iranian hackers make use of ChatGPT to target feminists

- By James Titcomb

IRANIAN hackers used ChatGPT to target high-profile feminists with cyber attacks, the chatbot’s maker has revealed, in one of the first known cases of state-backed organisations exploiting new artificial intelligence tools.

OpenAI and Microsoft said they had closed accounts belonging to a hacking group affiliated with Iran’s Revolutionary Guard, nicknamed Crimson Sandstorm, alongside other groups linked to Russia, China and North Korea.

The companies said the Iranian group used AI to write emails seeking to lure “prominent feminists” to a fake website, which could have been used to steal details or install computer viruses.

Last year, an imprisoned Iranian feminist, Narges Mohammadi, was awarded the Nobel Peace Prize for her campaign against the regime. Protests erupted around the country in 2022 after 22-year-old Mahsa Amini died in police custody, having been arrested for allegedly flouting Iran’s strict dress codes.

OpenAI, which has received billions in funding from Microsoft, said the hacking groups’ activities showed that ChatGPT offers only “limited” and “incremental” opportunities for malicious actors, but that it had a blanket ban on state-backed hackers using its services. The companies did not say which feminist activists had been targeted.

Among the hacking groups’ other uses for ChatGPT were writing phishing emails, searching for information and writing code to help build websites.

The Russian hacking group, dubbed Forest Blizzard, used ChatGPT to look up information on satellite communication systems and radar technologies. Starlink, the satellite internet service operated by Elon Musk’s SpaceX, has been widely used by Ukraine as it defends itself against Russia’s invasion.

North Korean hackers used the chatbot to craft hacking emails as well as conduct research on think tanks, while Chinese groups used it to translate documents and to manipulate systems they had accessed.

The National Cyber Security Centre, the cyber arm of GCHQ, has warned that AI tools are making scam emails more convincing than ever.

OpenAI said it would learn from how the hackers had used the tools in order to make its systems safer.

It said: “Understanding how the most sophisticated malicious actors seek to use our systems for harm gives us a signal into practices that may become more widespread in the future, and allows us to continuously evolve our safeguards.”
