Combating the next wave of AI fraud
Artificial intelligence has been around for decades, yet innovation around it has skyrocketed over the past couple of years.
AI can now help create efficiencies in the workplace that will increase productivity, improve internal operations and enhance creativity.
Yet, the evolution of large language models and the use of generative AI can open doors for fraudsters in unprecedented ways.
Generative AI gives fraudsters new avenues to deceive businesses and consumers alike.
From creating personalised, convincing messages tailored to their victims, to analysing public social media profiles and other personal information to create fake accounts, it is increasingly difficult to distinguish what is real from what is fake.

Different types of AI-enabled fraud

Generative AI enables fraudsters to automate the previously time-consuming and complex process of stitching together fake, synthetic identities that interact like a human across thousands of digital touchpoints, fooling businesses or consumers into thinking they are legitimate.
Text messages
There are two lines of attack coming from texts. First, generative AI enables fraudsters to mimic personal exchanges with someone a victim knows, using well-written scripts that appear authentic and are very difficult to discern as fake.
Further complicating matters, bad actors can conduct multi-pronged attacks via text-based conversations with multiple victims at once, manipulating them into carrying out actions such as transfers of money, goods or other fraudulent gains.
Fake video or images
Bad actors can train AI models with deep-learning techniques on very large collections of digital assets such as photos, images and videos to produce high-quality, authentic-looking videos or images that are virtually indiscernible from real ones. Once trained, AI models can blend and superimpose images onto other images and into video content at alarming speed. More concerning, AI-based text-to-image generators enable fraudsters with little to no design or video-production skill to perform these actions. These AI tools work so quickly that they dramatically increase the effectiveness of fraud attacks.

“Human” voice
Perhaps the scariest of the new methods at a fraudster’s disposal is the growth of AI-generated voices that mimic real people. This scheme creates a wide range of new risks, both for consumers, who can easily be convinced they are speaking to someone they know, and for businesses that use voice verification for applications such as identity recognition and customer support.

Fighting AI with AI
To combat these threats now and in the future, companies should leverage advanced technologies such as machine learning and AI to protect their businesses and stay one step ahead of fraudsters.
Generative AI can be used to fight and prevent fraud by analysing patterns in data and identifying potential risk factors, so companies can spot early indicators of fraudulent behaviour. Synthetic data created by generative AI can speed the development and testing of new fraud-detection models. It can also help investigators probe suspicious activity by generating scenarios and identifying potential fraud risks. – securitymagazine.com
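The pattern-analysis idea above can be illustrated with a deliberately simple sketch: flag transactions whose amount deviates sharply from an account's historical spending. Real fraud-detection systems use far richer models and features; the function name, data and threshold here are illustrative assumptions, not any vendor's actual method.

```python
# Minimal sketch of pattern-based fraud screening (illustrative only):
# flag new transaction amounts that sit far outside an account's
# historical distribution, measured in standard deviations (z-score).
from statistics import mean, stdev

def flag_suspicious(history, new_amounts, z_threshold=3.0):
    """Return the subset of new_amounts that look anomalous
    relative to the account's transaction history."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # No variation in history: anything different is anomalous.
        return [a for a in new_amounts if a != mu]
    return [a for a in new_amounts if abs(a - mu) / sigma > z_threshold]

history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0]   # typical card spend
incoming = [49.0, 5000.0, 44.0]                   # one obvious outlier
print(flag_suspicious(history, incoming))         # → [5000.0]
```

A production system would score many signals at once (location, device, velocity of transactions), but the principle is the same: learn what normal looks like, then surface the early indicators that deviate from it.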