Marysville Appeal-Democrat

Cyber-defense systems seek to outduel criminals in AI race

- Tribune News Service / McClatchy Washington Bureau

Not long after generative artificial intelligence models like ChatGPT were introduced with a promise to boost economic productivity, scammers launched the likes of FraudGPT, which lurks on the dark web promising to help criminals craft finely tailored cyberattacks.

The cybersecurity firm Netenrich in July identified FraudGPT as a “villain avatar of ChatGPT” that helps craft spear-phishing emails, provides tools to break passwords, and writes undetectable malware or other malicious code.

And so the AI arms race was on.

Companies are embracing cyber-defenses based on generative AI hoping to outpace attackers’ use of similar tools. But more effort is needed, experts warn, including to safeguard the data and algorithms behind the generative AI models, lest the models themselves fall victim to cyberattacks.

This month, IBM released survey results of corporate executives, in which 84 percent of respondents said they would “prioritize generative AI security solutions over conventional ones” for cybersecurity purposes. By 2025, AI-based security spending is expected to be 116 percent greater than in 2021, according to the survey, which drew responses from 200 CEOs, chief security officers and other executives at U.S.-based companies.

Top lawmakers already are concerned about the dangers that AI can pose to cybersecurity.

At a hearing of the Senate Intelligence Committee in September, Chairman Mark Warner, D-Va., said “generative models can improve cybersecurity, helping programmers identify coding errors and contributing toward safer coding practices … but with that potential upside, there’s also a downside since these same models can just as readily assist malicious actors.”

Separately, the Pentagon’s Defense Advanced Research Projects Agency in August announced a competition to design AI-based tools that can fix bugs in commonly used software. The two-year contest is intended to create systems that can automatically defend any kind of software from attack.

IBM said it is developing cybersecurity solutions based on generative AI models to “improve the speed, accuracy and efficacy of threat detection and response capabilities and drastically increase productivity of security teams.”

Darktrace, a cybersecurity firm with offices in the United States and around the world, is deploying custom-built generative AI models for cybersecurity purposes, said Marcus Fowler, the company’s senior vice president for strategic engagements and threats.

The company has graduated from using AI to predict potential attacks to designing generative AI models that observe and understand “the behavior of the environment that they’re deployed within,” meaning a computer network’s normal patterns of use in a corporate or government setting. It maps activities of individuals, peer groups, and outliers, said Fowler, who previously served at the CIA developing the agency’s global cyber-operations.

The system can then detect “deviations from normal and provide a context for such deviations,” allowing security experts to take action, he said.
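The deviation-from-baseline idea Fowler describes can be illustrated with a minimal sketch. This is a toy statistical example under assumed metrics and thresholds, not Darktrace’s actual method:

```python
import statistics

def flag_deviation(baseline, observed, threshold=3.0):
    """Flag an observation that deviates from a learned baseline.

    baseline:  historical measurements (e.g., a user's daily outbound MB)
    observed:  the new measurement to check
    threshold: hypothetical z-score cutoff, chosen for illustration only
    """
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        # No variation in history: anything different is a deviation.
        return observed != mean, None
    z = (observed - mean) / stdev
    return abs(z) > threshold, z

# A user who normally sends about 50 MB a day suddenly sends 500 MB.
normal_days = [48, 52, 50, 47, 53, 49, 51]
anomalous, z = flag_deviation(normal_days, 500)
```

Production systems model many such signals at once (logins, peer-group behavior, timing), but the core step, comparing new activity against a learned normal, is the same.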

The company also developed AI systems to study how security experts investigate a breach and create “an autonomous triaging capability” that automates the first 30 minutes or so of an investigation, allowing security officials to take swift action when an attack or a breach is detected, Fowler said.

In addition to detecting anomalies and aiding in investigations of a cyberattack, AI tools ought to be useful in analyzing malware to determine the origins of attackers, said José-Marie Griffiths, president of Dakota State University, who previously served on the congressional National Security Commission on Artificial Intelligence.

“Reverse engineering a malware to identify who sent it, what was the intent, is one area where we haven’t seen a lot” of use of AI tools, “but we could potentially see quite a bit of work, and that’s an area we are interested in,” Griffiths said, referring to the university’s ongoing work.

While malware is mostly software code, hackers often include notes in their own language, either to themselves or others, about a particular line of code’s function. Using AI to glean such messages, especially those written in languages other than English, could help sharpen attribution, Griffiths said.
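The first step in that kind of analysis, pulling the human-written notes out of the code, can be sketched in a few lines. This is a toy illustration that only handles plain-text scripts; real malware analysis works on disassembled binaries and embedded strings:

```python
import re

# Toy illustration: extract comment text from source-like input so an
# analyst (or a language model) can inspect it for language cues.
COMMENT_PATTERNS = [
    re.compile(r"//(.*)"),              # C-style line comments
    re.compile(r"#(.*)"),               # shell/Python-style comments
    re.compile(r"/\*(.*?)\*/", re.S),   # C-style block comments
]

def extract_comments(source):
    found = []
    for pattern in COMMENT_PATTERNS:
        found.extend(m.strip() for m in pattern.findall(source))
    return [c for c in found if c]

# Hypothetical snippet with Russian and Spanish comments.
sample = "int k = 0; // счётчик попыток\nx = 1  # prueba de conexión"
comments = extract_comments(sample)
```

Once extracted, such strings could be fed to a translation or language-identification model, the kind of attribution work Griffiths describes as largely unexplored.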

Use of generative AI models to improve cybersecurity is gaining momentum, but security experts also must pay attention to safeguarding the generative AI models themselves because attackers could attempt to break into the models and their underlying data, Griffiths said.

Broader use of generative AI in cybersecurity could help ease chronic problems facing security experts, said John Dwyer, head of research at IBM’s X-Force, the company’s cybersecurity unit.

“Alert fatigue, talent shortage and mental health issues have sort of been associated with cybersecurity for a long time,” Dwyer said. “And it turns out that we can apply [AI] technologies to really move the needle to help address some of these core problems that everyone’s been dealing with.”

Cybersecurity experts are burned out by being constantly on alert, doing repetitive tasks, “sifting through a bunch of hay looking for a needle,” and either leaving the industry or confronting mental health challenges, Dwyer said.

Using AI models to offload some of those repetitive tasks could ease the workload and allow security analysts to focus on high-value tasks, Dwyer said.
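The kind of repetitive work Dwyer describes can be as simple as collapsing a flood of duplicate alerts so an analyst reviews each distinct signal once. A hypothetical sketch, with made-up rule names and data:

```python
from collections import Counter

def dedupe_alerts(alerts):
    """Collapse duplicate alerts into one summary row each.

    alerts: list of (rule_name, source_ip) tuples from a detector;
    the field names are illustrative, not any vendor's schema.
    """
    counts = Counter(alerts)
    # One row per distinct alert, most frequent first.
    return [
        {"rule": rule, "source": src, "count": n}
        for (rule, src), n in counts.most_common()
    ]

# 120 repeats of one port-scan alert plus a single unusual login:
raw = [("port_scan", "10.0.0.5")] * 120 + [("new_admin_login", "10.0.0.9")]
summary = dedupe_alerts(raw)
# 121 raw alerts collapse to 2 rows for human review.
```

Generative AI systems go further, summarizing and prioritizing what remains, but even this basic aggregation shows how automation cuts the haystack down before a person ever looks at it.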

As with all advances in technology online, progress in legitimate uses on the publicly accessible parts of the web often is accompanied by a “much faster rate of growth” in the underwater or dark web, where criminals and hackers operate, Griffiths said. In the case of generative AI, as defenders rush to incorporate the tools in defense, the attackers are racing to use the same tools.

“That’s unfortunately the battle we are in,” she said. “It’s going to be constant.”
