Houston Chronicle Sunday

A rebuttal to biometrics as concerns grow

Federal study finds race, gender affect face-scanning tech

By Natasha Singer and Cade Metz

The majority of commercial facial-recognition systems exhibit bias, according to a landmark federal study, underscoring questions about a technology increasingly used by police departments and federal agencies to identify suspected criminals.

The systems falsely identified African American and Asian faces 10 to 100 times more often than Caucasian faces, the National Institute of Standards and Technology reported. Among a database of photos used by law enforcement agencies in the United States, the highest error rates came in identifying Native Americans, the study found.

The technology also had more difficulty identifying women than men. And it falsely identified older adults up to 10 times more than middle-aged adults.

The new report comes at a time of mounting concern from lawmakers and civil rights groups over the proliferation of facial recognition. Proponents view it as an important tool for catching criminals and tracking terrorists. Tech companies market it as a convenience that can be used to help identify people in photos or in lieu of a password to unlock smartphones.

Civil liberties experts, however, warn that the technology, which can be used to track people at a distance without their knowledge, has the potential to lead to ubiquitous surveillance, chilling freedom of movement and speech. This year, San Francisco, Oakland and Berkeley in California and the Massachusetts communities of Somerville and Brookline banned government use of the technology.

“One false match can lead to missed flights, lengthy interrogations, watch list placements, tense police encounters, false arrests or worse,” Jay Stanley, a policy analyst at the American Civil Liberties Union, said in a statement. “Government agencies including the FBI, Customs and Border Protection and local law enforcement must immediately halt the deployment of this dystopian technology.”

The federal report is one of the largest studies of its kind. The researchers had access to more than 18 million photos of about 8.5 million people from American mug shots, visa applications and border-crossing databases.

The National Institute of Standards and Technology tested 189 facial-recognition algorithms from 99 developers, representing the majority of commercial developers. They included systems from Microsoft, biometric technology companies like Cognitec, and Megvii, an artificial intelligence company in China.

The agency did not test systems from Amazon, Apple, Facebook and Google because they did not submit their algorithms for the federal study.

The federal report confirms earlier studies from MIT that reported that facial-recognition systems from some large tech companies had much lower accuracy rates in identifying female and darker-skinned faces than white male faces.

“While some biometric researchers and vendors have attempted to claim algorithmic bias is not an issue or has been overcome, this study provides a comprehensive rebuttal,” Joy Buolamwini, a researcher at the MIT Media Lab who led one of the facial studies, said in an email. “We must safeguard the public interest and halt the proliferation of face surveillance.”

Although the use of facial recognition by law enforcement is not new, new uses are proliferating with little independent oversight or public scrutiny. China has used the technology to surveil and control ethnic minority groups like the Uighurs. This year, U.S. Immigration and Customs Enforcement officials came under fire for using the technology to analyze the driver’s licenses of millions of people without their knowledge.

Biased facial-recognition technology is particularly problematic in law enforcement because errors could lead to false accusations and arrests. The new federal study found that the kind of facial-matching algorithms used in law enforcement had the highest error rates for African American women.

“The consequences could be significant,” said Patrick Grother, a computer scientist at NIST who was the primary author of the new report. He said he hoped it would spur people who develop facial-recognition algorithms to “look at the problems they may have and how they might fix it.”

But ensuring that these systems are fair is only part of the task, said Maria De-Arteaga, a researcher at Carnegie Mellon University who specializes in algorithmic systems. As facial recognition becomes more powerful, she said, companies and governments must be careful about when, where and how it is deployed.

“We have to think about whether we really want these technologies in our society,” she said.

Photo: Qilai Shen / Bloomberg. Civil liberties groups have flagged facial-recognition technology’s many flaws.
