Deepfake scam targets CEO of world’s biggest ad firm
The head of the world’s biggest advertising group was the target of an elaborate deepfake scam involving an artificial intelligence voice clone. The chief executive of WPP, Mark Read, detailed the attempted fraud in a recent email, warning others at the company to look out for calls claiming to be from top executives.
Fraudsters created a WhatsApp account with a publicly available image of Read and used it to set up a Microsoft Teams meeting that appeared to be with him and another senior WPP executive, according to the email obtained by the Guardian. During the meeting, the impostors deployed a voice clone of the executive as well as YouTube footage of them. The scammers impersonated Read off-camera using the meeting’s chat window. The scam, which was unsuccessful, targeted an “agency leader”, asking them to set up a new business in an attempt to solicit money and personal details.
“Fortunately the attackers were not successful,” Read wrote in the email. “We all need to be vigilant to the techniques that go beyond emails to take advantage of virtual meetings, AI and deepfakes.”
A WPP spokesperson confirmed the failure of the phishing attempt in a statement: “Thanks to the vigilance of our people, including the executive concerned, the incident was prevented.” WPP did not respond to questions about when the attack took place or which executives besides Read were involved.
Deepfakes were once primarily a concern related to online harassment, pornography and political disinformation, but the number of deepfake attacks in the corporate world has surged over the past year.
AI voice clones have fooled banks, duped financial firms out of millions and put cybersecurity departments on alert. In one high-profile case, an executive of the defunct digital media startup Ozy pleaded guilty to fraud and identity theft after reportedly using voice-faking software to impersonate a YouTube executive in an attempt to fool Goldman Sachs into investing $40m in 2021.
The attempted fraud on WPP also appeared to use generative AI for voice cloning, but included simpler techniques such as taking a publicly available image and using it as a contact display picture. The attack is representative of the many tools scammers now have at their disposal.
“We have seen increasing sophistication in the cyber-attacks on our colleagues, and those targeted at senior leaders in particular,” Read said in the email.
Read’s email listed a number of points to look out for as red flags, including requests for passports, money transfers and any mention of a “secret acquisition, transaction or payment that no one else knows about”. “Just because the account has my photo doesn’t mean it’s me,” Read said in the email.
WPP, a publicly traded company with a market capitalisation of about $11.3bn, stated on its website that it had been dealing with fake sites using its brand name and was working with relevant authorities to stop the fraud.
Many companies are grappling with the boom in generative AI, moving resources toward the technology while simultaneously facing its risks and potential harms. WPP announced last year that it was partnering with the chipmaker Nvidia to create advertisements with generative AI, touting it as a sea change in the industry.
In recent years, low-cost audio deepfake technology has become widely available and far more convincing. A school principal in Baltimore, US, was put on leave this year over an audio recording that appeared to capture him making racist and antisemitic comments; it turned out to be a deepfake created by a colleague. Bots have also impersonated Joe Biden and the former presidential candidate Dean Phillips.