China Daily Global Weekly

Fighting the ‘deepfakes’ threat

Authorities must step up action to curb growing risks of misuse of AI face-swapping technology

- By CALVIN TANG The author is an executive master of public administration (EMPA) candidate at Tsinghua University and a member of the China Retold coalition. The views do not necessarily reflect those of China Daily.

AI face-swapping technology allows users to capture the facial features, expressions, body movements and voice characteristics of target subjects through recognition technology, and then use that information to create fake videos that can deceive viewers. In 2019, a “deepfake” user on a forum in the United States used the technology to superimpose the faces of Hollywood stars onto pornographic video actors, and then publicly released the code, leading to the spread of the technology.
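At a technical level, the swap described above is commonly built on an autoencoder with one shared encoder and a separate decoder per identity. The following pure-Python sketch is purely illustrative: the "weights" are random toy matrices, not a trained model, and the function names are this article's assumptions rather than any real library's API.

```python
import random

random.seed(0)

DIM = 8  # toy size of a face-feature vector

def rand_matrix(n, m):
    """Random toy weights standing in for a trained network layer."""
    return [[random.uniform(-0.1, 0.1) for _ in range(m)] for _ in range(n)]

def matvec(mat, vec):
    """Multiply a matrix by a vector (one toy 'layer')."""
    return [sum(w * x for w, x in zip(row, vec)) for row in mat]

# One shared encoder learns identity-independent features;
# each identity gets its own decoder that reconstructs that face.
shared_encoder = rand_matrix(DIM, DIM)
decoder_a = rand_matrix(DIM, DIM)
decoder_b = rand_matrix(DIM, DIM)

def swap_face(face_of_a):
    """Encode a frame of person A, then decode with B's decoder,
    producing B's face with A's expression and pose."""
    latent = matvec(shared_encoder, face_of_a)  # expression/pose code
    return matvec(decoder_b, latent)            # rendered as person B

frame = [1.0] * DIM
fake_frame = swap_face(frame)
print(len(fake_frame))  # same size as the input feature vector
```

The key design point is the shared encoder: because both identities pass through it during training, the latent code captures pose and expression but not identity, which is exactly what makes the swap convincing.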

The advancement of artificial intelligence technology is a double-edged sword. On the one hand, technological progress supports the growth of the entertainment industry and makes it possible to complete productions even after the death of prominent actors. For example, AI face-swapping technology allowed Paul Walker to make a posthumous appearance in the movie Fast and Furious 7 after his untimely death during filming.

On the other hand, such technology poses the risk of being misused and can infringe on rights related to personal dignity, such as reputation and image rights. This includes illicitly trading face-swapped videos and stealing user information for fraudulent purposes.

AI face-swapping technology is still in its infancy, and social institutions and legal frameworks need to improve their governance of the following risks.

People with malicious intent can exploit AI face-swapping technology to produce convincing fake videos for fraudulent purposes. These fraudulent activities comprise a broad range of illicit practices, including but not limited to identity theft, social engineering attacks, phishing scams, political manipulation, financial fraud and consumer fraud.

Perpetrators can steal identities and impersonate real individuals online, commit crimes, orchestrate social engineering attacks, fabricate videos featuring relatives or friends of victims, and solicit money and sensitive personal information.

They can also weaponize the technology for phishing scams, disseminating realistic videos and images online to trick victims into sharing sensitive information or downloading malicious software. The technology can be used to perpetrate financial fraud too, for example by fabricating videos of trusted figures making endorsements or promises in order to persuade investors or customers.

Additionally, e-commerce livestreamers have been known to deceive consumers into making purchases by using celebrity faces through AI face-swapping technology.

The misuse of such technology manifests in three primary forms: pornography-related crimes, defamation and rumors, and telecommunications and financial fraud.

The first and most pervasive use of AI face-swapping technology was in the pornography industry, where the use of well-known figures generated significant traffic and had a more pronounced negative impact, making it challenging to prevent crime.

Defamation and rumormongering spread fake news and videos, prompting people to propagate misinformation. In 2019, a false video of former US president Donald Trump criticizing Belgium’s internal affairs caused considerable public discontent in that country. The spread of such rumors can easily lead to social unrest and undermine social trust.

Additionally, fraudsters engaged in telecommunications and financial fraud can use AI face-swapping, voice-swapping and fake videos to imitate the relatives and friends of targeted persons, prompting the victims to lower their guard. Because AI face-swapping technology can create such realistic fakes, the threat to people is profound.

Further, the challenges posed by AI technology have not yet been integrated into the criminal legal system, making it challenging for authorities to investigate related crimes. For instance, the unreasonable collection and use of user information by the ZAO app, introduced by social media app developer Momo, generated severe mainstream media backlash, including criticism from Chinese media outlets like People’s Daily and Guangming Daily, and was questioned by the public.

However, ZAO was able to evade legal responsibility by relying on its one-sided user agreements and market advantage, and the Ministry of Industry and Information Technology could only approach the case based on the standard clauses of the Contract Law. Facial data has yet to be classified as personal information under the Criminal Law, necessitating further legal clarification and judicial interpretation.

Also, platforms exploit contractual freedom to weaken the legal basis for criminal liability, making it challenging for authorities to demand platform cooperation.

At their core, the risks posed by AI face-swapping technology are rooted in three factors: personal information is easily abused without consent; authentic-looking videos and images prompt people to lower their guard; and legal loopholes constrain the authorities, making it difficult to track down and punish wrongdoers through platforms. Based on these factors, the authorities can take targeted measures to address the risks.

There is a need to incorporate the data required by AI face-swapping technology, above all facial data, into the legal definition of personal information.

The authorities could, for example, clarify the legal interpretation of personal information or issue judicial interpretations determining that facial data is protected personal information. This is because such data is easily infringed upon in the context of AI face-swapping, where misuse can have an outsized negative impact.

Moreover, some platforms exploit contractual freedom to exclude criminal liability while continuing to illegally collect and use personal information. The authorities can classify such cases as “illegally collecting citizens’ personal information by other means” and prosecute them under the “crime of infringing on citizens’ personal information”.

This would ease the authorities’ difficulty in gathering evidence and punishing wrongdoers, prompt platforms to cooperate with investigations, and deter users from misusing the technology. In this way, the government can strengthen the fight against crimes such as obscenity, defamation, rumormongering, fraud and personal information infringement, and address the misuse of AI face-swapping technology in social governance.

The authorities should also strengthen the regulations on the management of internet information services. The Chinese government has issued a regulation which explicitly requires service providers to add identifiers that do not affect user usage, store log information, and assist the authorities in searching for evidence and investigating relevant crimes. The regulation also requires service providers to notify users and obtain consent before editing users’ personal information, in order to reduce the possibility of personal information being abused without the users’ knowledge.

However, the authorities should further strengthen regulations and take measures to hold platform managers accountable for any misuse of personal information. Specific measures could include requiring platforms to submit a list of high-level compliance managers and their contact information when registering a business.

Once a violation is confirmed, the authorities can punish the platform according to the severity of the case, including but not limited to private warnings to responsible persons or companies, imposing fines on violators, prohibiting licensed persons or companies from operating for a certain period, revoking enterprise practice licenses, and listing them as enterprises with abnormal business operations or as enterprises that seriously violate laws and regulations.

In addition, the authorities should cooperate with research institutions and enterprises to develop countermeasures for AI face-swapping technology, enhance public awareness about the misuse of personal information so that people can guard against it, and provide protection against such misuse. The fundamental reason why AI face-swapping technology poses a social risk is that the information it presents seems authentic. As long as this remains unchanged, wrongdoers can use the technology to commit crimes.

Given this, netizens need to learn to use technology to counter technology, identifying fraudulent AI face-swapped content to prevent crime. Since AI can be trained to recognize human voices, facial features and body postures in order to create face-swapped videos, the same principle can be used to train AI to identify fake ones.
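The same learning principle can be illustrated with a toy detector. The sketch below trains a simple logistic-regression classifier on two made-up "artifact" features; the feature names (blink rate, texture noise) and all the numbers are illustrative assumptions for this article, not a real detection model.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy training data: each sample is [blink_rate, texture_noise],
# labeled 1 for "fake". Real deepfake detectors learn from frame
# pixels; these hand-made features are purely illustrative.
real = [[0.9 + random.uniform(-0.1, 0.1),
         0.2 + random.uniform(-0.1, 0.1)] for _ in range(50)]
fakes = [[0.3 + random.uniform(-0.1, 0.1),
          0.8 + random.uniform(-0.1, 0.1)] for _ in range(50)]
data = [(x, 0) for x in real] + [(x, 1) for x in fakes]

# Train a logistic-regression detector with plain gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(200):
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y  # prediction error drives the weight updates
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

def is_fake(x):
    """Flag a clip whose features look more like the 'fake' class."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5

print(is_fake([0.3, 0.8]), is_fake([0.9, 0.2]))  # True False
```

In practice, detection models are trained on large corpora of real and synthesized video rather than two hand-picked features, but the train-on-examples principle is the same one the paragraph above describes.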

And the government, research institutions and enterprises should work closely to strengthen the research and development of countermeasures and upgrade them, publicize relevant information on social risks, and enhance the public’s awareness, digital literacy and media literacy to prevent the misuse of personal information.

In conclusion, the government should incorporate facial information into the legal definition of personal information; further improve the regulations on the management of internet information services to hold platform managers accountable; and cooperate with research institutions and enterprises to develop countermeasures for AI face-swapping technology and enhance public awareness to prevent the misuse of personal information.

Illustration: MA XUEJING / CHINA DAILY
