Los Angeles Times

U.N. official urges caution on AI

Commissioner calls for a moratorium on tech that poses serious risks to human rights.

- By Jamey Keaten and Matt O’Brien. Keaten and O’Brien write for the Associated Press.

The U.N. human rights chief is calling for a moratorium on the use of artificial intelligence technology that poses a serious risk to human rights, including face-scanning systems that track people in public spaces.

Michelle Bachelet, the U.N. high commissioner for human rights, also said Wednesday that countries should expressly ban AI applications that don’t comply with international human rights law.

Applications that should be prohibited include government “social scoring” systems that judge people based on their behavior and certain AI-based tools that categorize people into clusters, such as by ethnicity or gender.

AI-based technologies can be a force for good, but they can also “have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said in a statement.

Her comments came along with a new U.N. report that examines how countries and businesses have rushed into applying AI systems that affect people’s lives and livelihoods without setting up proper safeguards to prevent discrimination and other harms.

“This is not about not having AI,” Peggy Hicks, the rights office’s director of thematic engagement, told journalists as she presented the report in Geneva. “It’s about recognizing that if AI is going to be used in these human rights — very critical — function areas, that it’s got to be done the right way. And we simply haven’t yet put in place a framework that ensures that happens.”

Bachelet didn’t call for an outright ban of facial recognition technology but said governments should halt the scanning of people’s features in real time until they can show the technology is accurate, won’t discriminate and meets certain privacy and data protection standards.

Although countries weren’t mentioned by name in the report, China has been among the countries that have rolled out facial recognition technology — particularly for surveillance in the western region of Xinjiang, where many of its minority Uyghurs live. The key authors of the report said naming specific countries wasn’t part of their mandate and doing so could even be counterproductive.

“In the Chinese context, as in other contexts, we are concerned about transparency and discriminatory applications that addresses particular communities,” Hicks said.

She cited several court cases in the United States and Australia in which artificial intelligence had been wrongly applied.

The report also voices wariness about tools that try to deduce people’s emotional and mental states by analyzing their facial expressions or body movements, saying such technology is susceptible to bias and misinterpretation and lacks a scientific basis.

“The use of emotion recognition systems by public authorities, for instance for singling out individuals for police stops or arrests or to assess the veracity of statements during interrogations, risks undermining human rights, such as the rights to privacy, to liberty and to a fair trial,” the report says.

The report’s recommendations echo the thinking of many political leaders in Western democracies, who hope to tap into AI’s economic and societal potential while addressing growing concerns about the reliability of tools that can track and profile individuals and make recommendations about who gets access to jobs, loans and educational opportunities.

European regulators have already taken steps to rein in the riskiest AI applications. Proposed regulations outlined by European Union officials this year would ban some uses of AI, such as real-time scanning of facial features, and tightly control others that could threaten people’s safety or rights.

President Biden’s administration has voiced similar concerns, though it hasn’t yet outlined a detailed approach to curtailing such uses.

A newly formed group called the Trade and Technology Council, jointly led by American and European officials, has sought to collaborate on developing shared rules for AI and other tech policies.

Efforts to limit the riskiest uses of AI have been backed by Microsoft and other U.S. tech giants that hope to guide the rules affecting the technology. Microsoft has worked with and provided funding to the U.N. rights office to help improve its use of technology, but funding for the report came through the rights office’s regular budget, Hicks said.

Western countries have been at the forefront of expressing concerns about the discriminatory use of AI.

“If you think about the ways that AI could be used in a discriminatory fashion, or to further strengthen discriminatory tendencies, it is pretty scary,” U.S. Commerce Secretary Gina Raimondo said during a virtual conference in June. “We have to make sure we don’t let that happen.”

She was speaking with Margrethe Vestager, the European Commission’s executive vice president for the digital age, who suggested some AI uses should be off-limits completely in “democracies like ours.” She cited social scoring, which can close off someone’s privileges in society, and the “broad, blanket use of remote biometric identification in public space.”

Photo: Martial Trezzini / Associated Press. MICHELLE BACHELET, the U.N. high commissioner for human rights, said that AI applications such as “social scoring” systems should be prohibited.
