Bangkok Post

Cybercriminals using AI for attacks

- SUCHIT LEESA-NGUANSUK

Cyber-attackers will increasingly use artificial intelligence (AI) to speed up their post-exploitation activities at targeted organisations, according to Palo Alto Networks.

AI has transformed the threat landscape, expanding the speed, scale and sophistication of cyber-attacks, Steven Scheurmann, the US-based cybersecurity firm’s regional vice-president for Asean, told the Bangkok Post.

The low-hanging fruit for attackers is to use AI chatbots to craft more realistic phishing emails with fewer obvious errors, he said.

With AI, it is easier to create deepfakes, opening the door to misinformation or propaganda campaigns, said Mr Scheurmann. For example, a multinational firm’s Hong Kong office lost HK$200 million after scammers staged a deepfake video meeting.

“We see signs that bad actors are using AI to attack organisati­ons on a larger scale,” he said.

Using AI makes it less expensive and faster to execute numerous simultaneous attacks aimed at exploiting multiple vulnerabilities, said Mr Scheurmann.

AI can also speed up post-exploitation activities such as lateral movement and reconnaissance. Lateral movement is a technique used after compromising an endpoint to extend access to other hosts or applications in an organisation.
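
For readers who want a concrete picture of how defenders spot such movement, the Python sketch below flags a host that authenticates to an unusually large number of other hosts within a short window, one common lateral-movement heuristic. The field names and thresholds are illustrative assumptions, not taken from any vendor’s product.

    from collections import defaultdict
    from datetime import timedelta

    WINDOW = timedelta(minutes=10)   # look-back window per source host
    FANOUT_THRESHOLD = 5             # distinct destinations before alerting

    def flag_lateral_movement(auth_events):
        """auth_events: (timestamp, src_host, dst_host) tuples sorted by time."""
        recent = defaultdict(list)   # src_host -> [(timestamp, dst_host), ...]
        alerts = []
        for ts, src, dst in auth_events:
            # Keep only this source's logins that fall inside the window.
            recent[src] = [(t, d) for t, d in recent[src] if ts - t <= WINDOW]
            recent[src].append((ts, dst))
            targets = {d for _, d in recent[src]}
            if len(targets) >= FANOUT_THRESHOLD:
                alerts.append((ts, src, sorted(targets)))
        return alerts

In a real environment the same idea would run over streaming logs with learned, per-host baselines rather than a fixed threshold.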

Much has been made of the potential for AI-generated malware. The company’s research suggests AI is more useful to attackers as a co-author than as the sole creator of new malware.

Attackers can use AI to assist with the development of specific pieces of functionality in malware. However, this usage still often requires a knowledgeable human operator, he said. The technology may also let attackers develop new malware variants more quickly and cheaply, said Mr Scheurmann.

Organisations need to leverage AI to catch up with cybercriminals, he said. Palo Alto Networks uses AI to bolster its security, detecting 1.5 million new attacks daily, said Mr Scheurmann.

Organisations can apply AI in their own security operations centres. According to a 2024 report from the firm’s threat intelligence arm Unit 42, more than 90% of such centres still depend on manual processes.

He said AI is particularly effective at pattern recognition, so cybersecurity threats that follow repetitive attack chains could be stopped earlier.
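
As a toy illustration of the underlying idea (not of Palo Alto Networks’ actual detection logic), the sketch below checks whether a host’s ordered event stream contains a known attack chain. An AI-based system would learn such patterns statistically rather than matching hand-written ones; the chain and event names here are hypothetical.

    # Hypothetical attack-chain signature: an ordered sequence of event types.
    KNOWN_CHAINS = {
        "phish-to-exfil": ["phishing_click", "credential_use",
                           "lateral_movement", "data_staging", "exfiltration"],
    }

    def matches_chain(events, chain):
        """True if every step of `chain` appears in `events`, in order."""
        it = iter(events)
        return all(step in it for step in chain)

    host_events = ["phishing_click", "login", "credential_use",
                   "lateral_movement", "data_staging", "exfiltration"]
    for name, chain in KNOWN_CHAINS.items():
        if matches_chain(host_events, chain):
            print(f"attack chain matched: {name}")  # fires for phish-to-exfil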

Groups developing AI models can take steps to prevent threat actors from misusing their AI creations, said Mr Scheurmann. By controlling access to their models, developers can prevent threat actors from freely co-opting them for nefarious purposes.

AI designers should be aware of the potential to jailbreak large language models by convincing them to answer questions that could contribute to bad behaviour, he said.

AI designers should consider that attackers will ask AI things like, “How do I increase the impact of an attack on a vulnerable Apache web server?” AI models should be hardened against such lines of questioning, said Mr Scheurmann.
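
A crude version of such hardening is an input screen that refuses prompts matching known exploitation-assistance phrasing before they reach the model. Real guardrails typically use a trained safety classifier, since fixed patterns are easy to evade; everything below, including the model call, is a hypothetical sketch.

    import re

    # Purely illustrative deny-list; production guardrails are far more robust.
    BLOCKED_PATTERNS = [
        r"increase the impact of an attack",
        r"exploit .* (server|vulnerability)",
        r"write (malware|ransomware)",
    ]

    def answer_with_model(prompt: str) -> str:
        return "(model response)"  # placeholder for the real LLM call

    def screen_prompt(prompt: str) -> str:
        for pattern in BLOCKED_PATTERNS:
            if re.search(pattern, prompt, re.IGNORECASE):
                return "Request refused: the prompt appears to seek exploitation assistance."
        return answer_with_model(prompt)

    print(screen_prompt("How do I increase the impact of an attack "
                        "on a vulnerable Apache web server?"))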

Organisations should make an effort to secure users accessing AI tools, ensuring visibility and control over how these services are being used within an enterprise, he said. Clear policies are needed for the type of data users can feed into AI services, protecting proprietary or sensitive information from exposure to third parties, said Mr Scheurmann.
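
One way such a policy can be enforced technically is a redaction pass that strips obviously sensitive values from a prompt before it leaves the enterprise. The patterns below are a minimal, hypothetical sketch; real data-loss-prevention tooling covers far more cases.

    import re

    # Minimal redaction pass applied before a prompt goes to an external AI service.
    REDACTIONS = [
        (re.compile(r"\b\d{13,19}\b"), "[REDACTED CARD/ACCOUNT]"),       # card-like numbers
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED EMAIL]"),    # email addresses
        (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[REDACTED KEY]"), # API keys
    ]

    def sanitize(prompt: str) -> str:
        for pattern, replacement in REDACTIONS:
            prompt = pattern.sub(replacement, prompt)
        return prompt

    print(sanitize("Summarise: contact jane@corp.example, api_key=sk-12345"))
    # -> Summarise: contact [REDACTED EMAIL], [REDACTED KEY]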

He said consolidating security solutions into a unified platform is crucial for organisations to improve operational efficiency, enhance their security posture and effectively address evolving threats.
