China Cracks Down on Surge in AI-Driven Fraud
Authorities warn of hyperrealistic content generated by artificial intelligence
Artificial intelligence is making its way into daily Chinese life, including a surge of AI-driven fraud.
Beijing, which has adopted some of the world's most restrictive regulations to govern its internet, has become something of a pioneer in curbing the use of hyperrealistic AI-generated content.
In recent weeks, police around the country have cautioned on their official social-media accounts that criminals have employed AI software to generate deceptive text and impersonate friends of the people they target.
In April, a man surnamed Guo received a video call from an impostor on the Chinese messaging app WeChat. Impersonating a friend of Guo, the scammer persuaded him to transfer the equivalent of $600,000 to a bank account in Inner Mongolia within 10 minutes. He complied, and it wasn’t until he contacted his friend to confirm the transfer that he realized he had been swindled.
“We had a video chat, and I recognized the face and voice in the video; that’s why I let my guard down,” Guo told authorities, according to a social-media post by Inner Mongolia police.
Guo reported the scam to the local police in the eastern Chinese city of Fuzhou, who contacted their counterparts in Inner Mongolia, where the impostor opened the bank account. Authorities were able to halt the transfer of most of the funds and were working to recover the rest, according to the post on the police’s WeChat account.
The same month, police in Anhui province in eastern China detained scammers who tricked a man into transferring a large amount of money to a supposed friend using AI face-swapping and voice-synthesis technology, according to the local state-owned newspaper Xin'an Evening News.
Regulators around the world have struggled to govern so-called deepfakes, in which a person in an image, audio clip or video is swapped with someone else. China has been quick to roll out regulations and to use existing tools to track AI scammers. In January, the country's internet watchdog, the Cyberspace Administration of China, began enforcing new rules that prohibit the use of AI-generated images, audio and text to spread misinformation or content that violates the law or is deemed disruptive to the economy or national security. The broadly defined categories give authorities wide latitude.

Guo's plight struck a nerve in China. It trended on the social-media platform Weibo with the hashtag #AIFraudIsEruptingAcrossChina after the police made his case public in May. The hashtag has since become unavailable, suggesting that censors are trying to limit discussion. A search for the topic yields a message: "According to relevant laws, regulations and policies, the search results are not displayed."
ByteDance-owned Douyin, TikTok's Chinese counterpart, in May proposed platform norms and industry initiatives, including calling on peers to prohibit the use of AI software to generate deceptive content and rumors. Chinese authorities have in recent weeks clamped down on disinformation generated by AI chatbots, which is sometimes seemingly modeled on previous incidents that sparked social-media storms.
Last month, in the province of Gansu, authorities detained a man who allegedly used ChatGPT to create an article about a nonexistent train crash from news clips he had gathered online. According to police, he published the article on a blog aggregator that shares ad revenue with users based on traffic. The article generated 15,000 views, police said.
The government’s handling of the 2011 collision of two bullet trains in eastern China triggered broad outrage.
Also in May, authorities in Henan province alleged that a man generated disinformation about a violent scuffle in a restaurant in an attempt to attract clicks. The ChatGPT-generated headlines falsely suggested that a woman died after a man smashed her head with a brick during the fight. The headlines were reminiscent of viral video footage from last year showing a group of men beating up several women at a restaurant, which stirred heated online debate about gender-based violence against women. China isn't among the countries where OpenAI, the developer of ChatGPT, makes the AI chatbot available, though many users have found ways to circumvent the barrier through virtual private networks.
The popularity of ChatGPT has stirred a frenzy in China, where tech companies, battered by a two-year regulatory clampdown and a wobbly economy, see the technology as a new driver of growth.
The search-engine owner Baidu, the e-commerce company Alibaba Group and the AI company SenseTime Group have rolled out their own large language models, the foundation technology behind ChatGPT. Other tech companies are exploring how to incorporate the technology into their products.
Governments around the world share concerns that ChatGPT-like services could spread discriminatory or harmful information. As regulators in other countries have started exploring possible checks on the new wave of generative AI technology, China has prepared a regulatory framework to scrutinize chatbots.
The Cyberspace Administration of China in April proposed new rules that would require companies to undergo a security review before launching such AI chatbots. The draft regulation also puts the onus on companies to ensure that the content their AI services generate is factual and aligns with the Chinese Communist Party's political values.