Times Chronicle & Public Spirit

Fraud practitioners have AI, too

By Lawrence K. Zelvin, head of the financial crimes unit at BMO, a Chicago-based bank.

Soon, personal artificial intelligence agents will streamline and automate processes that range from buying groceries to selling homes. You'll tell your agent what you want, and it will do the research and legwork, log into your accounts and execute transactions in milliseconds.

It is a technology with extraordinary potential and significant new dangers, including financial fraud. As Gail Ennis, the Social Security Administration's inspector general, recently wrote: "Criminals will use AI to make fraudulent schemes easier and faster to execute, the deceptions more credible and realistic, and the fraud more profitable."

The story of cyberfraud is a technological arms race between criminals and those they're trying to rob. In banking, AI's advent supercharges that competition and raises its stakes.

When scammers used an AI-powered audio deepfake in 2019 to convince the CEO of a British utility to transfer $243,000 to a Hungarian bank account, the scheme was considered unusual. It no longer is.

Criminals made headlines this year when they used deepfake technology to pose as a multinational company's chief financial officer and tricked one of the company's employees into paying the scammers $25 million.

Globally, 37% of businesses have experienced deepfake-audio fraud attempts, according to a 2022 survey by identity verification firm Regula, while 29% have encountered video deepfakes. And that doesn't include individuals who receive realistic-sounding calls purportedly from endangered family members seeking money.

As these threats proliferate, financial institutions must continually innovate and adapt to outpace and outsmart the criminals.

With an estimated annual tab of $8.8 billion in 2022, fraud was a festering problem even before the COVID-19 pandemic, which sparked a dramatic increase in online financial activity. According to TransUnion, instances of digital financial fraud increased by 80% globally from 2019 to 2022, and by 122% for U.S.-originating transactions. LexisNexis Risk Solutions calculated in 2022 that every dollar lost to fraud costs $4.36 in total as a result of associated expenses such as legal fees and the cost of recovering the stolen money.

Generative AI doesn't require advanced technical skills to use, a fact criminals are leveraging to find and exploit software and hardware vulnerabilities. They also use AI to tailor their phishing attacks more convincingly.

Then there's synthetic fraud, in which AI fabricates identities from real and made-up details and uses them to open new credit accounts. In one instance, criminals created about 700 synthetic accounts to defraud a San Antonio bank of up to $25 million in COVID-19 relief funds. TransUnion last year estimated that synthetic account balances reached $4.6 billion in 2022, while a previous Socure report projected the cost of this fraud would reach $5 billion this year.

We've been down this road before. When businesses rushed headlong to embrace cloud computing, they paid attention to security only after suffering massive data breaches.

The good news is that financial institutions are moving to combat AI fraud with the best tool available: AI. Nearly three-quarters of respondents to a 2022 Bank of England survey said they were developing machine-learning models to fight financial fraud. Other next-generation defenses are also in the works: Passkeys are replacing passwords, and quantum key distribution is becoming more widespread.

It's a good start, but just that: a start.

Along with more and better technological and AI advances to protect information and funds, we need to lean back into the human element.

Companies, financial institutions, regulators and consumers must collaborate to produce and adopt secure, resilient and robust controls for handling this threat.

This means education, between institutions and consumers, and among families and friends. It means following protective online practices to keep information secure. It means pulling together all of the tools available, online and off, at the government, organizational and individual levels, to shore up our defenses like a shield.

The alternative, a patchwork series of solutions, will have exploitable seams. And the problem is going to roll downhill, hitting medium- and small-sized businesses and individuals the hardest, as they won't have multinational corporations' ability to afford sophisticated defenses.

Artificial intelligence is speeding everything up. We cannot afford to let this accelerated clock tick too long without developing a global, industrywide security standard to harden us against the coming fraud storm.

If we don’t act, the money we already have lost to fraud will seem like small change.
