Los Angeles Times

Rules for a new surveillance reality

By Amos Toh

Amos Toh is the senior researcher on artificial intelligence and human rights at Human Rights Watch.

If you’re worried about how facial recognition technology is being used, you should be. And things are about to get a lot scarier unless new regulation is put in place.

Already, this technology is being used in many U.S. cities and around the world. Rights groups have raised alarm about its use to monitor public spaces and protests, to track and profile minorities, and to flag suspects in criminal investigations. The screening of travelers, concertgoers and sports fans with the technology has also sparked privacy and civil liberties concerns.

Facial recognition increasingly relies on machine learning, a form of artificial intelligence, to sift through still images or video of people’s faces and obtain identity matches. Even more dubious forms of AI-enabled monitoring are in the works.

Tech companies have begun hawking a range of products to government customers that attempt to infer and predict emotions, intentions and “anomalous” behavior from facial expressions, body language, voice tone and even the direction of a gaze. These technologies are being touted as powerful tools for governments to anticipate criminal activity, head off terrorist threats and police an increasingly amorphous range of suspicious behaviors. But can they really do that?

Applications of AI for emotion and behavior recognition are at odds with scientific studies warning that facial expressions and other external behaviors are not reliable indicators of mental or emotional states. And that is worrying.

One concern is that these technologies could single out racial and ethnic minorities and other marginalized populations for unjustified scrutiny if how they talk, dress or walk deviates from behavior that the software is programmed to interpret as normal — a standard likely to default to the cultural expressions, behaviors and understandings of the majority.

Perhaps cognizant of these challenges, the Organization for Economic Cooperation and Development and the European Union are formulating ethics-based guidelines for AI. The OECD Principles and the Ethics Guidelines developed by the European Commission’s High-Level Expert Group contain important recommendations. But several key recommendations dealing with human rights obligations should not just be voluntary standards: They should be adopted by governments as legally binding rules.

For example, both sets of guidelines recognize that transparency is key. They say that governments should disclose when someone might interact with an AI system — such as when CCTV cameras in a neighborhood are equipped with facial recognition software. They also call for disclosure of a system’s internal logic and real-life impact: Which faces or behaviors, say, is the software programmed to flag to police? And what might happen when an individual’s face or behavior is flagged?

Such disclosures should not be optional. Transparency is a prerequisite both for protecting individual rights and for assessing whether government practices are lawful, necessary and proportionate.

Both sets of guidelines also emphasize the importance of developing rules for responsible AI deployment with input from those affected. Discussions should take place before the systems are acquired or deployed. Oakland’s surveillance oversight law provides a promising model.

Under Oakland’s law, government agencies must provide public documentation of what the technologies are, how and where they plan to deploy them, why they are needed and whether there are less intrusive means for accomplishing the agency’s objectives. The law also requires safeguards, such as rules for collecting data, and regular audits to monitor and correct misuse. Such information must be submitted for consideration at a public hearing, and approval by the City Council is required to acquire the technology.

This kind of collaborative process ensures a broad discussion of whether a technology threatens privacy or disproportionately affects the rights of marginalized communities. These open discussions may raise enough concerns about the human rights risks of governments using facial recognition that a decision should be made to ban it, as has happened in Oakland, San Francisco and Somerville, Mass.

Companies providing facial recognition for commercial use should also be held legally accountable to high standards. At a minimum, they should be required to maintain comprehensive records about how their software is programmed to sort and identify faces, including logs of the data used to train the software to classify facial features and of changes made to the underlying code that affect how faces are identified or matched.

These record-keeping practices are key to fulfilling the transparency and accountability standards proposed by the OECD and the EU. They can be critical to analyzing whether facial recognition software is accurate for some faces but not others, or why someone was misidentified.

To provide time to develop these vital regulatory frameworks, governments should impose a moratorium on the use of facial recognition. Without binding regulations in place, we can’t be sure that governments are meeting their human rights obligations.
