Hindustan Times (East UP)

IN NEW ONLINE ABUSE, AI BOT TURNS WOMEN’S PHOTOS INTO NUDES

- Binayak Dasgupta, binayak.dasgupta@htlive.com

Unidentified cyber criminals have rolled out a service that allows anyone to create fake nude images of women from a regular photograph, a cyber research agency tracking deepfakes said in a report released on Tuesday, uncovering a growing ecosystem that has already targeted at least 100,000 women and threatens to open a new front in online sexual abuse.

A deepfake is any manipulated media created by an artificial intelligence technique called deep learning.

The technology first gained attention in 2017 and has evolved rapidly, with experts warning that it poses risks to democracy and the law since, among other things, deepfakes can be used to create convincing fakes of rival politicians and to generate false evidence implicating someone in a crime.

It has also been used to create deepfake pornography of celebrities, but Netherlands-based Sensity’s report now uncovers its first widespread use in targeting virtually any individual whose images are available.

The tool works only on images of women.

“Our investigation of this bot and its affiliated channels revealed several key findings. Approximately 104,852 women have been targeted and had their personal ‘stripped’ images shared publicly as of the end of July, 2020. The number of these images grew by 198% in the last 3 months,” said the report.

At present, most of the roughly 104,000 users and most of the victims appear to be from Russia, the report added, citing a poll in one of seven Telegram groups linked to the service, the name of which has been withheld to avoid giving it publicity.

At the core is a bot that lets a person upload a photograph of a woman. The bot feeds back a version with any clothing deleted and replaced by synthetic skin and private parts that at times look authentic but are often evidently fake.

The tool is available for free, but the photos it produces are watermarked. Users can pay $1.50 (about ₹110) to remove the watermark, the report said.

“The activity on the bot’s affiliated Telegram channels makes for bleak viewing. On the image sharing galleries, thousands of synthetically stripped images of young women taken from social media and private correspondence are constantly being uploaded,” said Henry Ajder, an expert on deepfakes and the lead author of the report, who has since left Sensity.

“The bot’s significance, as opposed to other tools for creating deepfakes, is its accessibility, which has enabled tens of thousands of users to non-consensually strip these images,” Ajder said, adding that the most concerning finding was the discovery of images of underage girls.

According to Sensity’s investigation, the tool appears to be a version of DeepNude, software first released anonymously in 2019 before criticism forced its developer to pull it down.

But “on July 19th 2019, the creators sold the DeepNude licence on an online marketplace to an anonymous buyer for $30,000,” the report added.

“In terms of photorealism, the level of the technology is still quite primitive and in many cases, it will be possible to distinguish them (the photographs) as a fake. Still, it does not mean this material isn’t a reputation threat,” said Giorgio Patrini, CEO and chief scientist at Sensity.

“We can state confidently that the individuals involved in the bot creation are very likely to be Russian native speakers, given the presence of this language among the users and the fact that a large part of the victims are Russian nationals,” Patrini said.

Deepfake nudes do not fall under specific legal definitions in most countries.

Experts said the offence is closest to crimes known as revenge pornography and the non-consensual sharing of intimate images. In India, such crimes are usually tried under Section 499 (criminal defamation) and Section 354C (voyeurism) of the Indian Penal Code, and Section 66E (depicting the private parts of a person) and Section 67A (sexually explicit material) of the Information Technology Act, according to a 2018 analysis by Yesha Paul, then at the Centre for Communication Governance (CCG), National Law University (NLU) Delhi.

“The courts might be significantly challenged to analyse the evidence to see whether or not that meets the standard of proof required in criminal trials - to prove guilt beyond reasonable doubt,” said Gunjan Chawla, programme manager, technology and national security at CCG, NLU Delhi.

There are also concerns about jurisdiction and evidence when the illegal material is generated by an algorithm.

The use of AI to create fake nudes demonstrates the risk of putting out personal photographs, Patrini added. “People must think twice when sharing visual content online, visible to anyone. Even if the content itself is innocuous, unfortunately one day it might be repurposed and utilised maliciously against you,” he said.

