Times Chronicle & Public Spirit
Fraud practitioners have AI, too
Soon, personal artificial intelligence agents will streamline and automate processes that range from buying groceries to selling homes. You’ll tell your agent what you want, and it will do the research and legwork, log into your accounts and execute transactions in milliseconds.
It is a technology with extraordinary potential and significant new dangers, including financial fraud. As Gail Ennis of the Social Security Administration recently wrote: “Criminals will use AI to make fraudulent schemes easier and faster to execute, the deceptions more credible and realistic, and the fraud more profitable.”
The story of cyberfraud is a technological arms race between criminals and those they’re trying to rob. In banking, AI’s advent supercharges that competition and raises its stakes.
When scammers used an AI-powered audio deepfake to convince the CEO of a British utility to transfer $243,000 to a Hungarian bank account in 2019, the case was considered unusual. Not anymore.
Criminals made headlines this year when they used deepfake technology to pose as a multinational company’s chief financial officer and tricked one of the company’s employees into paying the scammers $25 million.
Globally, 37% of businesses have experienced deepfake-audio fraud attempts, according to a 2022 survey by identity verification firm Regula, while 29% have encountered video deepfakes. And that doesn’t include individuals who receive realistic-sounding calls purportedly from endangered family members seeking money.
As these threats proliferate, financial institutions are continually innovating and adapting to outpace and outsmart the criminals.
With an estimated annual tab of $8.8 billion in 2022, fraud was a festering problem even before the COVID-19 pandemic, which sparked a dramatic increase in online financial activity. According to TransUnion, instances of digital financial fraud increased by 80% globally from 2019 to 2022, and by 122% for U.S.-originating transactions. LexisNexis Risk Solutions calculated in 2022 that every dollar lost to fraud costs $4.36 in total as a result of associated expenses such as legal fees and the cost of recovering the stolen money.
Generative AI doesn’t require advanced technical skills to use — a fact criminals are leveraging to find and exploit software and hardware vulnerabilities. They also use AI to craft more convincing, better-targeted phishing attacks.
Then there’s synthetic fraud, in which AI fabricates identities from real and made-up details and uses them to open new credit accounts. In one instance, criminals created about 700 synthetic accounts to defraud a San Antonio bank of up to $25 million in COVID-19 relief funds. TransUnion last year estimated that synthetic account balances reached $4.6 billion in 2022, while an earlier Socure report projected the cost of this fraud would reach $5 billion this year.
We’ve been down this road before. When businesses rushed headlong to embrace cloud computing, many paid attention to security only after suffering massive data breaches.
The good news is that financial institutions are moving to combat AI fraud with the best tool available: AI. Nearly three-quarters of respondents to a 2022 Bank of England survey said they were developing machine-learning models to fight financial fraud. Other next-generation defenses are also in the works: Passkeys are replacing passwords, and quantum key distribution is becoming more widespread.
It’s a good start, but it’s just that: a start.
Along with more and better technological and AI advances to protect information and funds, we need to lean back into the human element.
Companies, financial institutions, regulators and consumers must collaborate to produce and adopt secure, resilient and robust controls for handling this threat.
This means education — between institutions and consumers, and among families and friends. It means following protective online practices to keep information secure. It means marshaling every available tool, online and off, at the government, organizational and individual levels, to shore up our defenses.
The alternative — a patchwork series of solutions — will have exploitable seams. And the problem will roll downhill, hitting medium- and small-sized businesses and individuals the hardest, since they lack multinational corporations’ resources to afford sophisticated defenses.
Artificial intelligence is speeding everything up. We cannot afford to let this accelerated clock tick too long before developing a global, industrywide security standard to harden us against the coming fraud storm.
If we don’t act, the money we already have lost to fraud will seem like small change.