The Herald (Zimbabwe)

Combating the next wave of AI fraud


ARTIFICIAL Intelligence has been around for decades, yet innovation around it has skyrocketed over the past couple of years.

AI can now help create efficiencies in the workplace that will increase productivity, improve internal operations and enhance creativity.

Yet, the evolution of large language models and the use of generative AI can open doors for fraudsters in unprecedented ways.

Generative AI gives fraudsters new avenues to deceive businesses and consumers alike.

From creating personalised and convincing messages tailored to their victims, to analysing public social media profiles and other personal information to create fake accounts, it is becoming harder to distinguish what is real from what is fake.

Different types of AI-enabled fraud

Generative AI is enabling fraudsters to automate the previously time-consuming and complex process of stitching together fake, synthetic identities that interact like a human across thousands of digital touchpoints, fooling businesses or consumers into thinking they are legitimate.

Text messages

Text-based attacks come in two forms. First, generative AI enables fraudsters to mimic personal exchanges with someone a victim knows, using well-written scripts that appear authentic and are very difficult to discern as fake.

Second, and further complicating matters, bad actors can conduct multi-pronged attacks via text-based conversations with multiple victims at once, manipulating them into carrying out actions such as transfers of money, goods or other fraudulent gains.

Fake video or images

Bad actors can train AI models with deep-learning techniques on very large collections of digital assets such as photos, images and videos to produce high-quality, authentic-looking videos or images that are virtually indiscernible from the real ones. Once trained, AI models can blend and superimpose images onto other images and within video content at alarming speed. More concerning, AI-based text-to-image generators enable fraudsters with little to no design or video-production skills to perform these actions. These AI tools work so quickly that they dramatically increase the effectiveness of fraud attacks.

“Human” voice

Perhaps the scariest of the new methods at a fraudster’s disposal is the growth of AI-generated voices that mimic real people. This fraud scheme has created a wide range of new risks for consumers, who can be easily convinced they are speaking to someone they know, as well as for businesses that use voice verification systems for applications such as identity recognition and customer support.

Fighting AI with AI

To combat these threats now and in the future, companies should leverage advanced technologies such as machine learning and AI to protect their businesses and stay one step ahead of fraudsters.

Generative AI can be used to fight and prevent fraud by analysing patterns in data and identifying potential risk factors, so companies can spot early indicators of potentially fraudulent behaviour. Synthetic data created by generative AI can be used to speed the development and testing of new fraud detection models. It can also help investigate suspicious activity by generating scenarios and identifying potential fraud risk. – securitymagazine.com
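As a minimal sketch of the two ideas above, assuming Python with NumPy and scikit-learn and using purely hypothetical transaction features, a company might train an anomaly detector on normal activity and generate synthetic fraud-like records to test it before real fraud labels are available:

```python
# Minimal sketch: anomaly detection on transactions plus synthetic test data.
# The feature columns and their distributions are hypothetical examples.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical legitimate transactions: [amount, transactions_per_day, new_payee_flag]
legit = np.column_stack([
    rng.normal(60, 20, 5000),      # typical purchase amounts
    rng.poisson(3, 5000),          # typical daily activity
    rng.binomial(1, 0.05, 5000),   # rarely paying someone new
])

# Synthetic fraud-like records, used here only to test the detector
synthetic_fraud = np.column_stack([
    rng.normal(900, 300, 200),     # unusually large amounts
    rng.poisson(15, 200),          # burst of activity
    rng.binomial(1, 0.9, 200),     # almost always a new payee
])

# Train on normal behaviour, then check how many synthetic frauds are flagged
detector = IsolationForest(contamination=0.01, random_state=0).fit(legit)
flagged = (detector.predict(synthetic_fraud) == -1).mean()
print(f"Synthetic fraud records flagged as anomalous: {flagged:.0%}")
```

In practice, real transaction data and dedicated fraud platforms would replace these toy features; the point is that the same generative techniques fraudsters exploit can also produce the test data defenders need.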

