NewsChina

Facial Recognition:

Concern is mounting over the wide application of facial recognition technology and the damage that irresponsible storage of personal data can cause

- By Yang Zhijie

Facing the Risks

Despite winning his landmark court case over the use of facial recognition technology, Guo Bing, a resident of Hangzhou in East China's Zhejiang Province, intends to keep fighting. Guo, a member of a zoological park, sued over the obligatory use of facial recognition at the members' entrance. Fuyang District Court in Hangzhou ordered the defendant to pay Guo compensation of 1,038 yuan (US$159) and delete his facial information. Unhappy with that outcome, Guo has filed an appeal asking that the park delete all his personal information from its digital records, including his phone number and fingerprints. The first case of its kind in China, it attracted much attention as the use of facial recognition technology spreads in a country where not everyone is convinced it is being used appropriately and safely.

Facial recognition, an important means of ID authentication, is in widespread use for apps, public transportation and security in public places. It is common in neighborhoods, supermarkets, entertainment venues and scenic sites. The trend toward smart neighborhoods means that gate systems based on facial recognition have become commonplace. The technology is also increasingly used by schools and educational training institutions, in both online and offline classes, to monitor student and teacher behavior.

The compound annual growth rate (CAGR) of the facial recognition industry averaged 30.7 percent between 2010 and 2018. The market value reached 2.51 billion yuan (US$381.8 million) in 2018, and it is expected to climb to 10 billion yuan (US$1.5 billion) by 2024, according to a February analysis by Forward Intelligence, an industry data provider.

But the pervasive technology is becoming more controversial as people grow more conscious of data protection, even if it does make life more convenient. Lao Dongyan, a law professor at Tsinghua University in Beijing, filed a complaint with the management company and neighborhood committee of her residential community in March 2020 after discovering they planned to install facial recognition access control at every gate and require every resident to upload a photo of their face as well as provide their ID information. Having seen many cases involving data leaks, her instincts told her that the blanket use of facial recognition is risky due to legal loopholes and safety hazards. The community's plan was postponed indefinitely.

On some online trading platforms, a thousand photos of people's faces sell for just 2 yuan (US$0.30). They can be used for fraud, criminal activities like money laundering, and identity theft, according to a report by China Central Television (CCTV) in late October.

“You can't underestimate the risks of facial recognition. You don't know who is collecting the information and what data they have saved, let alone how they will use it,” Lao said. “If the face data are leaked and linked to other personal data, the consequences are disastrous.”

Preying on Data

To help prevent the spread of the coronavirus, in early 2020 shopping malls, subway stations, offices and other public places started installing terminals that recognize faces, take thermal images and collect data at the same time. In all but a minority of apps and scenarios, the data is collected without the permission, or even the knowledge, of the users.

An October survey by the Southern Metropolis Daily listed the scenarios in which people found data collection most unacceptable. These include shopping malls that use facial recognition to collect data on customer behavior and shopping habits, universities that record students' micro-expressions and teachers' gestures during class, and photo editing apps that demand photos for face swapping or virtual makeup.

“The collection of facial information is rather invasive because data is collected from a distance without people knowing. The data keeps accumulating over a long time and at a large scale without anyone noticing,” Lao said. She is most concerned about who stores the collected data and how safe it is.

The CCTV report pointed out that without unified standards in place, vast amounts of facial data are stored in the databases of app operators or technology suppliers. But the outside world has no idea whether sensitive data is redacted, which data is used for algorithm training and which will be shared with their partners.

In September, Kai-fu Lee, CEO of venture capital firm Sinovation Ventures, caused an uproar after saying at the HICOOL Global Entrepreneur Summit in Beijing that he had helped AI company Megvii build partnerships with photo editing app Meitu and Alibaba's fintech affiliate Ant Group, through which Megvii gained a massive amount of facial data. Ant Group later denied this, and Lee said he had misspoken.

Megvii started as a facial recognition company in 2011. For startups in this field, acquiring as much facial data as possible is crucial to the accuracy of their products, so these companies have a strong appetite for data. According to technicians in the field, companies in their early stages relied on public datasets provided by research institutes and universities, and many paid volunteers to collect samples. Later, it became normal practice to harvest data from photos uploaded online, even though the legitimacy of this has been questioned.

There is enormous concern about how AI companies and their customers handle data between them. Megvii states in its service agreement that it has the right to store customer data and use it for internal research to “improve the accuracy of facial recognition, update algorithms and improve our products and services.”

An employee of Cloudwalk, a Chinese AI company founded in 2015, told NewsChina that their customers usually store the data they collect and may not be willing to share it with facial recognition companies. “It is particularly so when we cooperate with banks and public security systems. Our servers are built inside their intranet, on their private servers. There is no way to get the data out from outside.”

Respondents to the Southern Metropolis Daily survey said they are most concerned about how the firms that collect data will protect it and ensure its safety.

In the early years, tech firms paid lip service to data protection. Huang Hao (pseudonym), who worked at Microsoft Research Asia (MSRA), Microsoft's research arm in the Asia-Pacific region, said the risk is highest when a firm outsources work involving data to other companies, which may not be secure. He said he knew of cases where outsourced data had been exposed online, though he did not name the firms involved. Huang added that data protection may simply cost too much for some startups.

Even today, the storage and protection of data is a vulnerability for many companies, according to Zeng Yi, an AI specialist at the Institute of Automation of the Chinese Academy of Sciences.

In February 2019, Victor Gevers, a security researcher at Netherlands-based NGO the GDI Foundation, revealed that SenseNets, a Shenzhen-based technology provider contracted to a local public security system, had failed to protect its database, leaving the personal information of millions of people exposed to any visitor for months. Anyone with malicious intent could have copied and sold the data.

Safety Hazards

Some leaked facial data finds its way onto the black market. In September 2019, the Beijing Youth Daily reported that a merchant on an online shopping platform was selling facial data. His wares included tens of thousands of photos of over 2,000 people, each matched with a file detailing the individual's facial features and gender. The seller said some samples were scraped using search engines and others came from the database of an overseas software company.

“Personal biological data involving the face, voice and iris can't be modified after it's disclosed. If it's leaked, it will cause irretrievable and irreversible risks and harm,” Lao said.

The photos alone might not always pose great risks, but if they are matched with other personal ID information, that person is exposed to enormous risk, internet safety experts said.

The data SenseNets exposed included detailed and sensitive personal information, such as the ID numbers, gender, home addresses, photos and workplaces of more than 2.5 million people. A leak of this scale is disastrous for the industry.

And it is becoming much easier to match facial photos with ID information. “Mobile payment software requires facial and personal information. People swipe their ID when they enter a park or scenic site, and that leaves traces too. Some finance companies store customers' personal information,” said one industry insider who spoke on condition of anonymity.

In some cases, apps require consumers to take a selfie while holding their ID card or passport, which internet security experts warn is the riskiest scenario of all.

Using AI to swap faces, pass authentication and swindle money is already an old trick. As video authentication becomes popular, tools that can “activate photos” have emerged on the underground market, media reported. They animate a static photo with motions like blinking, nodding or opening and closing the mouth. These “activated” face videos, combined with ID information, can be used to register for apps and websites and obtain money fraudulently through identity theft.

In January 2019, police in Sichuan Province busted a criminal gang that used software to make live photos that could trick the Alipay mobile payment app's facial recognition system and steal money from victims' accounts. In another case in Shenzhen, Guangdong Province, the suspects bought personal data, including names, ID numbers and facial photos, on the black market and used software to “activate” the photos.

Wang Bin (pseudonym), who used to test liveness detection for facial recognition systems at Tencent's AI research arm, said he first saw these tricks in 2017. “The human eye can easily tell when it's a fake person. But it was difficult for the detection technology to distinguish it then,” Wang said.

While warning people to be more vigilant, the interviewed experts noted that a clearer line should be drawn on the use of facial recognition technology.

Lao believes the widespread uptake of facial recognition is a “conspiracy” between the government and tech companies. “For the government, facial recognition is a convenient tool for its security needs, and capital-driven companies are happy to expand the business as fast as possible,” Lao said.

“But there is no law yet to regulate how to collect, store, transmit and use the data, or whether the data can be sold or supplied to a third party, which makes the potential risk of rapidly expanding application scenarios grow at an exponential rate.”

Visitors try a facial recognition system at the second Zhejiang International Intelligent Transportation Industry Expo (ITIE), December 6, 2019
A teacher at an elementary school in Hefei, Anhui Province uses a facial recognition system at the campus gate, March 31, 2020
