Financial Mirror (Cyprus)

How dangerous are deepfakes and other AI-powered fraud?


Former U.S. president Donald Trump posing with Black voters, President Joe Biden discouraging people from voting via telephone or the Pope in a puffy white jacket: Deepfakes of videos, photos and audio recordings have become widespread on various internet platforms, aided by the technological advances of generative AI models like Midjourney, Google’s Gemini or OpenAI’s ChatGPT.

With the right prompts, anyone can create seemingly real images or make the voices of prominent political or economic figures and entertainers say anything they want. While creating a deepfake is not a criminal offense on its own, many governments are nevertheless moving towards stronger regulation of artificial intelligence to prevent harm to the parties involved.

Apart from their most common use, the creation of non-consensual pornographic content involving mostly female celebrities, deepfakes can also be used to commit identity fraud by manufacturing fake IDs or impersonating others over the phone. As our chart based on the most recent annual report of identity verification provider Sumsub shows, deepfake-related identity fraud cases skyrocketed between 2022 and 2023 in many countries around the world.

For example, the number of fraud attempts in the Philippines rose by 4,500 percent year over year, followed by nations like Vietnam, the United States and Belgium. With the capabilities of so-called artificial intelligence potentially increasing even further, as evidenced by products like AI video generator Sora, deepfake fraud attempts could also spill over into other areas. “We’ve seen deepfakes become more and more convincing in recent years and this will only continue and branch out into new types of fraud, as seen with voice deepfakes,” says Pavel Goldman-Kalaydin, Sumsub’s Head of Artificial Intelligence and Machine Learning, in the aforementioned report. “Both consumers and companies need to remain hyper-vigilant to synthetic fraud and look to multi-layered anti-fraud solutions, not only deepfake detection.”

These assessments are shared by many cybersecurity experts. For example, a survey among 199 cybersecurity leaders attending the World Economic Forum Annual Meeting on Cybersecurity in 2023 showed that 46 percent of respondents were most concerned about the “advance of adversarial capabilities – phishing, malware development, deepfakes” when asked about the risks artificial intelligence poses for cybersecurity in the future. (Statista)

