The Pak Banker

AI providing new tools to threat actors for attacks, says cybersecurity firm

- LONDON

Widespread adoption of artificial intelligence (AI) and machine learning technologies in recent years has provided "threat actors with sophisticated new tools to perpetrate attacks", cybersecurity company Kaspersky Research said in a press release on Saturday.

The security firm explained that one such tool was the deepfake, which includes generated human-like speech as well as photo and video replicas of people. Kaspersky warned that companies and consumers must be aware that deepfakes will likely become more of a concern in the future.

A deepfake — a portmanteau of "deep learning" and "fake" — synthesises "fake images, video and sound using artificial intelligence", Kaspersky explains on its website.

The security firm warned that it had found deepfake creation tools and services available on "darknet marketplaces" to be used for fraud, identity theft and stealing confidential data.

According to estimates by Kaspersky experts, one minute of deepfake video can be purchased for as little as $300, the press release said.

According to the press release, a recent Kaspersky survey found that 51 per cent of employees surveyed in the Middle East, Turkiye and Africa region said they could tell a deepfake from a real image. However, in a test, only 25pc could distinguish a real image from an AI-generated one.

"This puts organisations at risk given how employees are often the primary targets of phishing and other social engineering attacks," the firm warned.

"Despite the technology for creating high-quality deepfakes not being widely available yet, one of the most likely use cases that will come from this is to generate voices in real-time to impersonate someone," the press release quoted Hafeez Rehman, technical group manager at Kaspersky, as saying.

Rehman added that deepfakes were a threat not only to businesses but to individual users as well. "They spread misinformation, are used for scams, or to impersonate someone without consent," he said, stressing that they were a growing cyber threat that users needed protection from.

The Global Risks Report 2024, released by the World Economic Forum in January, had warned that AI-fuelled misinformation was a common risk for India and Pakistan.

Deepfakes have been used in Pakistan to further political aims, particularly in anticipation of general elections.

Former prime minister Imran Khan — who is currently incarcerated at Adiala Jail — had used an AI-generated image and voice clone to address an online election rally in December, which drew more than 1.4 million views on YouTube and was attended live by tens of thousands.

While Pakistan has drafted an AI law, digital rights activists have criticised its lack of guardrails against disinformation and of protections for vulnerable communities.

Fact-checkers, largely under-resourced and increasingly under attack, have their work cut out this year as dozens of countries hold elections, a period when falsehoods typically explode.

Debunking fake political claims and hoaxes that threaten election integrity — likened by some researchers to a seemingly endless game of whack-a-mole — comes with a litany of challenges that are piling pressure on fact-checkers in a crucial year.

The most significant is raising funds to sustain operations, according to a new survey by the International Fact-Checking Network (IFCN) of 137 organisations across 69 countries.

