The Manila Times

Is artificial intelligence the solution to cyber security threats?


ARTIFICIAL intelligence technology has been a buzzword in cyber security for a decade now — cited as a way to flag vulnerabilities and recognise threats by carrying out pattern recognition on large amounts of data. Antivirus products, for example, have long used AI to scan for malicious code, or malware, and send alerts in real time.
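
To make the pattern-recognition idea concrete, the sketch below shows the kind of classifier such products lean on: a model trained on simple file features that flags likely malware. The feature names, training data and library choice (Python with scikit-learn) are invented for illustration and do not reflect any vendor's actual detection engine.

    # Illustrative only: a toy classifier standing in for the pattern
    # recognition behind AI-assisted antivirus. All features and samples
    # below are hypothetical.
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical per-file features: [size_kb, entropy, num_imports, is_packed]
    training_features = [
        [320, 7.8, 2, 1],   # labelled malicious samples
        [150, 7.5, 1, 1],
        [410, 4.2, 35, 0],  # labelled benign samples
        [220, 5.1, 28, 0],
    ]
    training_labels = [1, 1, 0, 0]  # 1 = malicious, 0 = benign

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(training_features, training_labels)

    # Score a newly observed file and raise a real-time alert if it
    # resembles the known-malware pattern.
    new_file = [[300, 7.9, 3, 1]]
    if model.predict(new_file)[0] == 1:
        print("ALERT: file matches known-malware patterns")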

But the advent of generative AI, which enables computers to generate complex content — such as text, audio and video — from simple human inputs, offers further opportunities to cyber defenders. Its advocates promise it will boost efficiency in cyber security, help defenders launch a real-time response to threats, and even help them outpace their adversaries altogether.

“Security teams have been using AI to detect vulnerabilities and generate threat alerts for years, but generative AI takes this to another level,” says Sam King, chief executive of security group Veracode.

“Now, we can use the technology not only to detect problems, but also to solve and, ultimately, prevent them in the first place.”

Generative AI technology was first thrust into the spotlight by the launch of OpenAI’s ChatGPT, a consumer chatbot that responds to users’ questions and prompts. Unlike the technology that came before it, generative AI “has adaptive learning speed, contextual understanding and multimodal data processing, and sheds the more rigid, rule-based coat of traditional AI, supercharging its security capabilities,” explains Andy Thompson, offensive research evangelist at CyberArk Labs.

So, after a year of hype around generative AI, are these promises being delivered upon?

Already, generative AI is being used to create specific models, chatbots, or AI assistants that can help human analysts detect and respond to hacks — similar to ChatGPT, but for cyber security. Microsoft has launched one such effort, which it calls Security Copilot, while Google has a model called SEC Pub.

“By training the model on all of our threat data, all of our security best practices, all our knowledge of how to build secure software and secure configurations, we already have customers using it to increase their ability to analyse attacks and malware to create automated defences,” says Phil Venables, chief information security officer of Google Cloud.

And there are many more specific use cases, experts say. For example, the technology can be used for attack simulation, or to ensure that a company’s code is kept secure. Veracode’s King says: “You can now take a GenAI model and train it to automatically recommend fixes for insecure code, generate training materials for your security teams, and identify mitigation measures in the event of an identified threat, moving beyond just finding vulnerabilities.”
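
As a rough illustration of the remediation step King describes, the sketch below asks a general-purpose language model to flag and rewrite an insecure snippet. It uses OpenAI's public Python client purely as a stand-in and assumes an API key in the environment; the model name and prompt are illustrative, and this is not Veracode's or any other vendor's actual tooling.

    # Illustrative only: prompting a generative model to suggest a fix for
    # insecure code. Not a vendor's remediation pipeline; the model name is
    # an assumption for the example.
    from openai import OpenAI

    insecure_snippet = (
        "query = \"SELECT * FROM users WHERE name = '\" + user_input + \"'\"\n"
        "cursor.execute(query)\n"
    )

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": "Identify the vulnerability in this Python snippet and "
                       "suggest a safer version:\n" + insecure_snippet,
        }],
    )

    # A sensible model response would flag SQL injection and recommend a
    # parameterised query.
    print(response.choices[0].message.content)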

Generative AI can also be used for “generating [and] synthesising data” with which to train machine learning models, says Gang Wang, associate professor of computer science at the University of Illinois Grainger College of Engineering. “This is particularly helpful for security tasks where data is sparse or lacks diversity,” he notes.
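
A minimal sketch of the data-synthesis idea Wang describes, again using OpenAI's public client as an illustrative stand-in rather than any specific research setup: a generative model is asked for fictional examples of a sparse class (phishing subject lines) so that a conventional classifier has more labelled data to learn from.

    # Illustrative only: synthesising labelled training examples for a
    # sparse class. The prompt, model and labels are assumptions made for
    # this sketch.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": "Write 5 realistic but entirely fictional phishing email "
                       "subject lines, one per line, for training a spam filter.",
        }],
    )

    # Each generated line is added to the under-represented 'phishing' class
    # before training an ordinary classifier on the augmented dataset.
    synthetic_subjects = response.choices[0].message.content.splitlines()
    labelled_examples = [(s, "phishing") for s in synthetic_subjects if s.strip()]
    print(labelled_examples)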

The potential for developing AI cyber security systems is now driving dealmaking in the cyber sector — such as the $28bn acquisition of US security software maker Splunk by Cisco in September. “This acquisition reflects a wider trend and illustrates the industry’s growing adoption of AI for enhanced cyber defences,” says King.

She points out that these tie-ups allow acquirers to expand their AI capabilities swiftly, while also giving them access to more data with which to train their AI models effectively.

Nevertheless, Wang cautions that AI-driven cyber security cannot “fully replace existing traditional methods”. To be successful, “different approaches complement each other to provide a more complete view of cyber threats and offer protections from different perspectives”, he says.

For example, AI tools may have high false positive rates — meaning they are not accurate enough to be relied upon alone. While they may be able to identify and halt known attacks swiftly, they can struggle with novel threats, such as so-called “zero day” attacks that are different from those launched in the past.
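
A back-of-the-envelope calculation, using invented numbers, shows why even a modest false positive rate makes such tools hard to rely on alone: benign events vastly outnumber real attacks, so false alerts can swamp genuine ones.

    # Illustrative arithmetic only; every figure here is an assumption.
    daily_events = 1_000_000        # assumed events scanned per day
    true_attacks = 50               # assumed genuine attacks among them
    false_positive_rate = 0.01      # assumed 1% of benign events mis-flagged
    detection_rate = 0.95           # assumed share of real attacks caught

    false_alerts = (daily_events - true_attacks) * false_positive_rate
    true_alerts = true_attacks * detection_rate
    precision = true_alerts / (true_alerts + false_alerts)

    print(f"{false_alerts:,.0f} false alerts vs {true_alerts:.0f} genuine ones")
    print(f"Only {precision:.1%} of alerts point to a real attack")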

As AI hype continues to sweep the tech sector, cyber professionals must deploy it with care, experts warn, maintaining standards around privacy and data protection, for example. According to Netskope Threat Labs data, sensitive data is shared in a generative AI query every hour of the working day in large organisations, which could provide hackers with fodder to target attacks.

Steve Stone, head of Rubrik Zero Labs at data security group Rubrik, also notes the emergence of hacker-friendly generative AI chatbots such as “FraudGPT” and “WormGPT”, which are designed to enable “even those with minimal technical” skills to launch sophisticated cyber attacks.

Some hackers are wielding AI tools to write and deploy social engineering scams at scale, and in a more targeted manner — for example, by replicating a person’s writing style. According to Max Heinemeyer, chief product officer at Darktrace, a cyber security AI company, there was a 135 per cent rise in “novel social engineering attacks” from January to February 2023, in the wake of the introduction of ChatGPT.

“2024 will show how more advanced actors like APTs [advanced persistent threats], nation-state attackers, and advanced ransomware gangs have started to adopt AI,” he says. “The effect will be even faster, more scalable, more personalised and contextualised attacks, with a reduced dwell time.”

Despite this, many cyber experts remain optimistic that the technology will be a boon for cyber professionals overall. “Ultimately, it is the defenders who have the upper hand, given that we own the technology and thus can direct its development with specific use cases in mind,” says Venables. “In essence, we have the home-field advantage and intend to fully utilise it.”

Photo by SEBASTIEN BOZON / AFP: This illustration photograph, taken on October 30, 2023, in Mulhouse, eastern France, shows figurines next to a screen displaying the logo of OpenAI, a US artificial intelligence organisation.
