The Pak Banker

UN for ban on certain AI tech until safeguards set up


The United Nations Human Rights chief on Wednesday called for a moratorium on the sale and use of artificial intelligence technology that poses human rights risks, including state use of facial recognition software, until adequate safeguards are put in place.

The plea comes as artificial intelligence develops at a rapid clip, despite myriad concerns ranging from privacy to racial bias plaguing the emerging technology.

"Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people's human rights," U.N. High Commissioner for Human Rights Michelle Bachelet said in a statement Wednesday.

Bachelet's warnings accompany a report released by the U.N. Human Rights Office analyzing how artificial intelligence systems affect people's right to privacy as well as rights to health, education, freedom of movement and more. "Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states," Bachelet added.

"AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online."

The report warns of the dangers of implementing the technology without due diligence, citing cases of people being wrongly arrested because of flawed facial recognition tech or being denied social security benefits because of mistakes made by these tools.

While the report did not cite specific software, it called for countries to ban any AI applications that "cannot be operated in compliance with international human rights law."

More specifically, the report called for a moratorium on the use of remote biometric recognition technologies in public spaces, at least until authorities can demonstrate compliance with privacy and data protection standards and the absence of significant accuracy or discrimination problems.

The report also slammed the lack of transparency around the implementation of many AI systems, and how their reliance on large data sets can result in people's data being collected and analyzed in opaque ways, as well as in faulty or discriminatory decisions.

The long-term storage of data and how it could be used in the future is also unknown and a cause for concern, according to the report. "Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face," Bachelet said.

"We cannot afford to continue playing catch-up regarding AI -- allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact," Bachelet said, calling for immediate action to put "human rights guardrails on the use of AI."

Digital rights advocacy groups welcomed the recommendations from the international body, especially as many nations lag in implementing national laws governing artificial intelligence.

Evan Greer, the director of the nonprofit advocacy group Fight for the Future, told ABC News that the report further proves the "existential threat" posed by this emerging technology. "This report echoes the growing consensus among technology and human rights experts around the world: artificial intelligence powered surveillance systems like facial recognition pose an existential threat to the future [of] human liberty," Greer told ABC News. "Like nuclear or biological weapons, technology like this has such an enormous potential for harm that it cannot be effectively regulated, it must be banned."

"Facial recognition and other discriminatory uses of artificial intelligence can do immense harm whether they're deployed by governments or private entities like corporations," Greer added. "We agree with the UN report's conclusion: there should be an immediate, worldwide moratorium on the sale of facial recognition surveillance technology and other harmful AI systems."

Multiple studies have indicated that facial recognition technologies powered by artificial intelligence are prone to racial bias and misidentification. Just last summer, a Black man in Michigan was wrongfully arrested and detained after facial recognition technology incorrectly identified him as a shoplifting suspect.

A sweeping 2019 study from the U.S. Department of Commerce's National Institute of Standards and Technology found that a majority of facial recognition software on the market had higher rates of false positive matches for Asian and Black faces compared to white faces. A separate 2019 study from the U.K. found that 81% of suspects flagged by the facial recognition technology used by London's Metropolitan Police force were innocent.
