The Guardian (USA)

Imagine your child calling for money. Except it’s not them – it’s an AI scam

- James Wise

This year, I was sent a link to a video of myself, passionately explaining why I had invested in a new technology company. In the video I spoke enthusiastically about the great faith I had in the company’s leadership and encouraged others to try the service out. The problem was, I had never had any contact with the company, nor used its product.

It looked and sounded like me, right down to the fading Mancunian accent.

But it wasn’t. It was an AI-generated fake used in a business pitch and designed to wow me into investing in a company. Far from impressing me, it left me concerned about the myriad ways these new tools could be used for fraudulent purposes.

From data breaches to phishing attacks, where fraudsters trick people into sharing passwords or sending money to an unknown account, cybercrime is already one of the most commonly experienced and pernicious forms of crime in the UK. In 2022, the UK had the highest number of cybercrime victims per million internet users in the world. In part we are victims of our own digital success. Britons have been fast to adopt new technologies such as online shopping and mobile banking, activities that cybercriminals are keen to exploit. As AI becomes more sophisticated, these criminals are being given even more ways to trick us into believing they are someone they are not.

Many of the impressive advancements in human imitation are being developed on our doorstep. The company ElevenLabs has built and released a tool that can almost perfectly replicate any accent, in any language. You can go on its website and have its pre-trained models read out statements using the fast-talking New Yorker “Sam” or the more mellow, midwestern tones of “Bella”.

The London-based company Synthesia goes further. Its technology allows customers to create new salespeople. You can generate a photorealistic video of a synthetically generated person speaking in any language, pitching your product or providing customer support. These videos are incredibly lifelike, but the person doesn’t exist.

ElevenLabs makes the rules about use, and misuse, of its technology very clear. It explicitly states that “you cannot clone a voice for abusive purposes such as fraud, discrimination, hate speech or for any form of online abuse”. But less ethical companies are launching similar products at pace.

It is rather ironic that imitating humans, for good or ill, is one of the first major uses of AI. Alan Turing, the godfather of modern computing, created the Turing test, which he originally called the “imitation game”, to assess an AI’s ability to fool a human into thinking it was real. Passing this test quickly became a benchmark for an AI developer’s success. Now that anyone can create synthetic people with a click of a button, we need an anti-Turing test to establish who is real and what is generated.

How will you now know, when you get a video call from your teenage child asking for emergency gap-year funds, that it is really them? How should you respond to an agitated voicemail that sounds like it’s from your boss demanding you wire the company funds, when you can no longer be sure it is really them? These questions are no longer hypotheticals.

Fortunately, some services exist already to tackle this challenge. Just as quickly as ChatGPT was adopted by canny students to complete their homework, AI-detection tools such as Originality.ai were released to tell teachers the likelihood that an essay was in fact written by AI. Similar solutions are in development to assess whether a video is real, relying on pixel-level mistakes that still give away even the most sophisticated AI tools.

And new initiatives are being launched. Synthesia is among many members of the Content Authenticity Initiative, which was started in 2019 to provide users with more insight into where the content they receive comes from, and how it was created. More controversially, but perhaps inevitably, a national form of digital identity – a way of verifying whether you are talking to a real person or a bot – will almost certainly be required if you want to separate your mate from a fake.

In the interim, much greater efforts need to be made to raise public awareness of the growing sophistication of cybercriminals, and just what is now possible. While we wait for governments to act and regulation to be drawn up, there is the much more immediate risk of a thousand AI tricksters exacerbating Britain’s existing cyber-fraud problem.

James Wise is a partner at the venture capital firm Balderton, and a trustee of the thinktank Demos

Illustration: Deena So'Oteh/The Guardian
