Daily Sabah (Turkey)

Emerging threat: Deepfake tech enters perilous phase


DO YOU want to see yourself acting in a movie or a TV serial? Do you want to see your best friend, colleague, or boss dancing? Have you ever wondered how you would look if your face were swapped with your friend’s or a celebrity’s?

All of these are in the description of one app on online stores, which offers users the chance to create artificial intelligence (AI)-generated synthetic media, also known as deepfakes, such as fake virtual characters and fake videos.

The same app was also advertised differently on dozens of adult sites: “Make deepfake porn in a sec,” the ads said. “Deepfake anyone.”

AI will be a huge part of our lives in the future; however, the technology has a dark side: it can be used to create non-consensual deepfake pornography.

“Once the entry point is so low that it requires no effort at all, and an unsophisticated person can create a very sophisticated non-consensual deepfake pornographic video – that’s the inflection point,” said Adam Dodge, an attorney and the founder of online safety company EndTab. “That’s where we start to get into trouble,” he added.

How increasingly sophisticated technology is applied is one of the complexities facing synthetic media software, where machine learning is used to digitally model faces from images and then swap them into films as seamlessly as possible.

The technology, barely four years old, may be at a pivotal point, according to Reuters interviews with companies, researchers, policymakers and campaigners.

It’s now advanced enough that general viewers would struggle to distinguish many fake videos from reality, the experts said, and has proliferated to the extent that it’s available to almost anyone with a smartphone, no specialist skills needed.

With the tech genie out of the bottle, many online safety campaigners, researchers and software developers say the key is ensuring consent from those being simulated, though this is easier said than done. Some advocate taking a tougher approach when it comes to synthetic pornography, given the risk of abuse.

Non-consensual deepfake pornography accounted for 96% of a sample study of more than 14,000 deepfake videos posted online, according to a 2019 report by Sensity, a company that detects and monitors synthetic media. It added that the number of deepfake videos online was roughly doubling every six months.

“The vast, vast majority of harm caused by deepfakes right now is a form of gendered digital violence,” said Henry Ajder, one of the study authors and the head of policy and partnerships at AI company Metaphysic, adding that his research indicated that millions of women had been targeted worldwide.

Consequently, there is a “big difference” between whether an app is explicitly marketed as a pornographic tool or not, he said.

AD NETWORK AXES APP

ExoClick, the online advertising network that was used by the “Make deepfake porn in a sec” app, told Reuters it was not familiar with this kind of AI face-swapping software. It said it had suspended the app from taking out adverts and would not promote face-swap technology in an irresponsible way.

“This is a product type that is new to us,” said Bryan McDonald, ad compliance chief at ExoClick, which like other large ad networks offers clients a dashboard of sites they can customize themselves to decide where to place adverts.

“After a review of the marketing material, we ruled the wording used on the marketing material is not acceptable. We are sure the vast majority of users of such apps use them for entertainment with no bad intentions, but we further acknowledge it could also be used for malicious purposes.”

Six other big online ad networks approached by Reuters did not respond to requests for comment about whether they had encountered deepfake software or had a policy regarding it.

There is no mention of the app’s possible pornographic usage in its description on Apple’s App Store or Google’s Play Store, where it is available to anyone over 12.

Apple said it didn’t have any specific rules about deepfake apps but that its broader guidelines prohibited apps that include content that was defamatory, discriminatory or likely to humiliate, intimidate or harm anyone.

It added that developers were prohibited from marketing their products in a misleading way, within or outside the App Store, and that it was working with the app’s development company to ensure they were compliant with its guidelines.

Google did not respond to requests for comment. After being contacted by Reuters about the “Deepfake porn” ads on adult sites, Google temporarily took down the Play Store page for the app, which had been rated E for Everyone. The page was restored after about two weeks, with the app now rated T for Teen due to “Sexual content.”

FILTERS AND WATERMARKS

While there are bad actors in the growing face-swapping software industry, a wide variety of apps is available to consumers and many do take steps to try to prevent abuse, said Ajder, who champions the ethical use of synthetic media as part of the Synthetic Futures industry group.

Some apps only allow users to swap images into pre-selected scenes, for example, or require ID verification from the person being swapped in, or use AI to detect pornographic uploads, though these are not always effective, he added.

Reface is one of the world’s most popular face-swapping apps, having attracted more than 100 million downloads globally since 2019, with users encouraged to switch faces with celebrities, superheroes and meme characters to create fun video clips.

The U.S.-based company told Reuters it was using automatic and human moderation of content, including a pornography filter, and had other controls to prevent misuse, including labeling and visual watermarks to flag videos as synthetic.

“From the beginning of the technology and establishment of Reface as a company, there has been a recognition that synthetic media technology could be abused or misused,” it said.

‘ONLY PERPETRATOR LIABLE’

The widening consumer access to powerful computing via smartphones is being accompanied by advances in deepfake technology and the quality of synthetic media.

For example, EndTab founder Dodge and other experts interviewed by Reuters said that when these tools first emerged in 2017, they required thousands of input images to achieve the kind of quality that can now be produced from just a single image.

“With the quality of these images becoming so high, protests of ‘It’s not me!’ are not enough, and if it looks like you, then the impact is the same as if it is you,” said Sophie Mortimer, manager at the U.K.-based Revenge Porn Helpline.

Policymakers looking to regulate deepfake technology are making patchy progress as they confront new technical and ethical snarls. Laws specifically aimed at online abuse using deepfake technology have been passed in some jurisdictions, including China, South Korea and California, where maliciously depicting someone in pornography without their consent, or distributing such material, can carry statutory damages of $150,000.

“Specific legislative intervention or criminalisation of deepfake pornography is still lacking,” researchers at the European Parliament said in a study presented to a panel of lawmakers in October that suggested legislation should cast a wider net of responsibility to include actors such as developers or distributors, as well as abusers.

“As it stands today, only the perpetrator is liable. However, many perpetrators go to great lengths to initiate such attacks at such an anonymous level that neither law enforcement nor platforms can identify them.”

Marietje Schaake, international policy director at Stanford University’s Cyber Policy Center and a former member of the European Parliament, said broad new digital laws, including the European Union’s proposed AI Act and the GDPR, could regulate elements of deepfake technology, but that there were gaps.

“While it may sound like there are many legal options to pursue, in practice it is a challenge for a victim to be empowered to do so,” Schaake said.

“The draft AI Act under consideration foresees that manipulated content should be disclosed,” she added. “But the question is whether being aware does enough to stop the harmful impact. If the virality of conspiracy theories is an indicator, information that is too absurd to be true can still have wide and harmful societal impact.”
