Microsoft: US rivals are beginning to use generative AI in offensive cyber operations

BY FRANK BAJAK

BOSTON – Microsoft said Wednesday that U.S. adversaries – chiefly Iran and North Korea and to a lesser extent Russia and China – are beginning to use its generative artificial intelligence to mount or organize offensive cyber operations.

The technology giant and its business partner OpenAI said they had jointly detected and disrupted the malicious cyber actors’ use of their AI technologies – shutting down their accounts.

In a blog post, Microsoft said the techniques employed were “early-stage” and neither “particularly novel or unique,” but that it was important to expose them publicly as U.S. adversaries leverage large-language models to expand their ability to breach networks and conduct influence operations.

Cybersecurity firms have long used machine learning on defense, principally to detect anomalous behavior in networks. But criminals and offensive hackers use it as well, and the introduction of large-language models, led by OpenAI’s ChatGPT, upped that game of cat-and-mouse.

Microsoft has invested billions of dollars in OpenAI, and Wednesday’s announcement coincided with its release of a report noting that generative AI is expected to enhance malicious social engineering, leading to more sophisticated deepfakes and voice cloning – a threat to democracy in a year in which over 50 countries will conduct elections, magnifying disinformation that is already occurring.

Here are some examples Microsoft provided. In each case it said all generative AI accounts and assets of the named groups were disabled:

• The North Korean cyberespionage group known as Kimsuky has used the models to research foreign think tanks that study the country, and to generate content likely to be used in spear-phishing hacking campaigns.

• Iran’s Revolutionary Guard has used large-language models to assist in social engineering, in troubleshooting software errors, and even in studying how intruders might evade detection in a compromised network. That includes generating phishing emails “including one pretending to come from an international development agency and another attempting to lure prominent feminists to an attacker-built website on feminism.” The AI helps accelerate and boost email production.

• The Russian GRU military intelligence unit known as Fancy Bear has used the models to research satellite and radar technologies that may relate to the war in Ukraine.

• The Chinese cyberespionage group known as Aquatic Panda – which targets a broad range of industries, higher education and governments from France to Malaysia – has interacted with the models “in ways that suggest a limited exploration of how LLMs can augment their technical operations.”

• The Chinese group Maverick Panda, which has targeted U.S. defense contractors among other sectors for more than a decade, had interactions with large-language models suggesting it was evaluating their effectiveness as a source of information “on potentially sensitive topics, high profile individuals, regional geopolitics, US influence, and internal affairs.”

In a separate blog published Wednesday, OpenAI said its current GPT-4 model chatbot offers “only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools.”

Cybersecurity researchers expect that to change.

Last April, the director of the U.S. Cybersecurity and Infrastructure Security Agency, Jen Easterly, told Congress that “there are two epoch-defining threats and challenges. One is China, and the other is artificial intelligence.”

Easterly said at the time that the U.S. needs to ensure AI is built with security in mind.

Critics of the public release of ChatGPT in November 2022 – and subsequent releases by competitors including Google and Meta – contend it was irresponsibly hasty, considering that security was largely an afterthought in their development.

“Of course bad actors are using large-language models – that decision was made when Pandora’s Box was opened,” said Amit Yoran, CEO of the cybersecurity firm Tenable.

Some cybersecurity professionals complain about Microsoft’s creation and hawking of tools to address vulnerabilities in large-language models when it might more responsibly focus on making them more secure.

“Why not create more secure black-box LLM foundation models instead of selling defensive tools for a problem they are helping to create?” asked Gary McGraw, a computer security veteran and co-founder of the Berryville Institute of Machine Learning.

NYU professor and former AT&T Chief Security Officer Edward Amoroso said that while the use of AI and large-language models may not pose an immediately obvious threat, they “will eventually become one of the most powerful weapons in every nation-state military’s offense.”
