IN NEW ONLINE ABUSE, AI BOT TURNS WOMEN’S PHOTOS INTO NUDES
Unidentified cyber criminals have rolled out a service that allows anyone to create fake nude images of women from a regular photograph, a cyber research agency tracking deepfakes said in a report released on Tuesday, uncovering a growing ecosystem that has already targeted at least 100,000 women and threatens to open a new front in online sexual abuse.
A deepfake is any manipulated media created by an artificial intelligence technique called deep learning.
The technology first gained attention in 2017 and evolved rapidly, with experts warning that it can affect democracy and law since, among other things, deepfakes can be used to create convincing fakes about rival politicians and generate false evidence to implicate someone in crime.
It has also been used to create deepfake pornography of celebrities, but Netherlands-based Sensity’s report now uncovers its first widespread use in targeting virtually any individual whose images are available.
The tool works only on images of women.
“Our investigation of this bot and its affiliated channels revealed several key findings. Approximately 104,852 women have been targeted and had their personal “stripped” images shared publicly as of the end of July, 2020. The number of these images grew by 198% in the last 3 months,” said the report.
At present, most of the roughly 104,000 users and most of the victims appear to be from Russia, the report added, citing a poll in one of seven Telegram groups linked to the service – the name of which has been withheld in order to avoid publicity.
At the core is a bot that lets a person upload a photograph of a woman. The bot returns a version with the clothing removed and replaced by synthetic skin and private parts that are at times convincing, but often evidently fake.
The tool is available for free, but the photos are watermarked. Users can pay $1.50 (about ₹110) to remove the watermark, the report said.
“The activity on the bot’s affiliated Telegram channels makes for bleak viewing. On the image sharing galleries, thousands of synthetically stripped images of young women taken from social media and private correspondence are constantly being uploaded,” said Henry Ajder, an expert on deepfakes and the lead author of the report who has since left Sensity.
“The bot’s significance, as opposed to other tools for creating deepfakes, is its accessibility, which has enabled tens of thousands of users to non-consensually strip these images,” Ajder said, adding that the most concerning finding was the discovery of images of underage girls.
According to Sensity’s investigation, the tool appears to be a version of DeepNude, a software first released anonymously in 2019 before criticism forced its developer to pull it down.
But, “on July 19th 2019, the creators sold the DeepNude licence on an online marketplace to an anonymous buyer for $30,000,” the report added.
“In terms of photorealism, the level of the technology is still quite primitive and in many cases, it will be possible to distinguish them (the photographs) as a fake. Still, it does not mean this material isn’t a reputation threat,” said Giorgio Patrini, CEO and chief scientist at Sensity.
“We can state confidently that the individuals involved in the bot creation are very likely to be Russian native speakers, given the presence of this language among the users and the fact that a large part of the victims are Russian nationals,” Patrini said.
Deepfake nudes do not fall under specific legal definitions in most countries.
Experts said the offence is closest to crimes known as revenge pornography and non-consensual sharing of intimate images. In India, such crimes are usually tried under Section 499 (criminal defamation) and Section 354C (voyeurism) of the Indian Penal Code, and Section 66E (depicting the private parts of a person) and Section 67A (publishing sexually explicit material) of the Information Technology Act, according to a 2018 analysis by Yesha Paul, then at the Centre for Communication Governance (CCG), National Law University (NLU) Delhi.
“The courts might be significantly challenged to analyse the evidence to see whether or not that meets the standard of proof required in criminal trials - to prove guilt beyond reasonable doubt,” said Gunjan Chawla, programme manager, technology and national security at CCG, NLU Delhi.
There are also concerns over jurisdiction and evidence when the illegal material is generated by an algorithm.
The use of AI to create fake nudes demonstrates the risk of putting out personal photographs, Patrini added. “People must think twice when sharing visual content online, visible to anyone. Even if the content itself is innocuous, unfortunately one day it might be repurposed and utilised maliciously against you,” he said.