AI: A potential new tool for financial scammers in M’sia
KUALA LUMPUR: The application of artificial intelligence (AI) now spans various fields and is easily accessible to people from all walks of life as long as they are relatively tech-savvy.
However, AI – a branch of computer science developed to simulate human intelligence in machines that are programmed to think and act like humans – is a double-edged sword in that it can be used for beneficial or nefarious purposes.
In terms of the latter, AI can be applied to orchestrate sophisticated scams, one of which involves the use of AI-generated ‘deep fake’ technology that enables scammers to create realistic audio and video impersonations of trusted individuals.
While Malaysia has not recorded any cases of crime involving AI, numerous incidents of this nature have been reported in other countries.
Impersonation
In China, a businessman, identified as Guo, was nearly cheated of 4.3 million yuan (about RM2.8 million) in May last year after he was tricked by a scammer who used AI to impersonate his (Guo’s) close friend.
It was reported that the ‘friend’ wanted to borrow some money and persuaded Guo to transfer the sum. Fortunately, the businessman realised he was being scammed after discovering that his friend’s identity had been stolen and that the friend had no knowledge of the transaction.
Guo alerted the police and the bank involved and recovered 3.4 million yuan, while efforts to reclaim the remaining funds were ongoing.
In another case, reported in the US early last year, a woman was contacted by a person who claimed her daughter had been kidnapped and demanded a ransom for her release.
To convince the mother that her daughter had been abducted, the ‘kidnapper’ used AI to spoof the voice of the girl.
This case shocked the US authorities as the cloned voice, generated using AI, was highly convincing.
Challenging
Commenting on the use of AI by fraudsters, Syahrul Nizam Junaini, a research fellow at the Data Science Centre of Universiti Malaysia Sarawak (Unimas), warned that Malaysia would not be exempt from such crimes, in line with the increasing use of AI technology in the country.
He said the sophistication of this technology would make it possible for cybercriminals to orchestrate financial scams, especially when personal data is stored in the cloud.
“These perpetrators often target individuals based on information gleaned from social media,” he told Bernama.
According to Syahrul Nizam, the AI used in scams typically involves sophisticated software capable of analysing and replicating an individual’s visual and audio characteristics.
“This technology can mimic one’s speech patterns, speaking style, intonation and even facial expressions to the extent that it becomes challenging to distinguish between genuine and fake,” he said, adding that deep fake is AI software with the ability to generate fake videos of individuals.
He said the sophistication of this technology would pose significant challenges to law enforcement agencies in handling crimes, particularly those involving scams and other financial offences.
“The use of AI technology to perpetrate criminal activities demands law enforcement (personnel) to be equipped with specialised expertise (to handle such cases), especially in the field of digital forensics as evidence for such fraud cases is in digital form.
“Therefore, understanding and knowledge of AI are crucial to gather and analyse evidence,” he said.
‘Hire more IT experts’
Syahrul Nizam added that to ensure the country is prepared to face cybercrime threats, the authorities, particularly the police force, would need to increase the number of information technology (IT) experts within their ranks.
Stressing the importance of enhancing the skills of existing officers to keep pace with technological advancements, he said they should be sent overseas to participate in related programmes and collaborate with international police forces to gain insights into handling AI-linked cases, including those involving deep fake technology.
Bukit Aman Commercial Crime Investigation Department (CCID) director Datuk Seri Ramli Mohamed Yoosuf told Bernama last month that the department anticipated a surge in police reports linked to AI due to the widespread adoption of the technology in Malaysia and globally.
He said AI could be misused and solving such cases could pose a great challenge to CCID.
“Our investigative technology must be enhanced to keep up with the development of AI,” he added.
Bank Negara Malaysia (BNM) was quoted in a media report as saying that it too viewed AI technology as one of the ‘new tools’ that online fraudsters would likely employ in the future.
Enhance cybersecurity
Meanwhile, Universiti Tun Hussein Onn Malaysia Department of Information Security and Web Technology senior lecturer Dr Noor Zuraidin Mohd Safar suggested comprehensive collaboration among stakeholders as an early preventive measure against AI-related crimes.
“AI technology will constantly evolve and Malaysia must be prepared for this.
“Stakeholders, including the police, cybersecurity (authorities) and BNM, must have expertise in AI technology, with their focus (being) on e-commerce and e-banking,” he told Bernama.
He also suggested that stakeholders leverage AI technology to prevent crimes.
“AI also has the capability to serve as a preventive tool as it can identify suspicious data,” he said, also highlighting the need to develop a system to empower law enforcement agencies to detect and address activities perceived as fraudulent or at risk of becoming so.
To do this, the government must be prepared to invest in creating a secure cybersecurity system, he said.
Noor Zuraidin also proposed that existing laws related to technology and crime be amended to align with current developments.
Observing that current legislation may lack bite in addressing crimes involving AI, he said improvements would be necessary, particularly in terms of personal data protection and the misuse of AI technology.
“For me, this is crucial to ensure society is protected from those who misuse this technology as well as to ensure justice for victims who have been deceived with AI technology,” he said.
Noor Zuraidin also suggested that stakeholders collaborate with industry experts to curb the leakage of personal data.
Sharing tips to avoid falling victim to AI-related scams, he advised the public to ensure that the person contacting them was legitimate.
“Verify the caller’s identity, inquire about information such as staff number, landline number, address and so on especially if the caller claims to be from a bank.
“If in doubt, terminate the call.
“Most importantly, cultivate a sceptical attitude when verifying information provided by the caller,” he said.
Noor Zuraidin also reminded the public to create unique usernames and passwords to make it difficult for criminals to hack their bank accounts.
“In the meantime, authorities must consistently provide awareness about financial crimes involving AI to the public so that they become more vigilant,” he added.