The Mercury News

Facial recognition loses support as bias claims rise

- Larry Magid

Following the lead of San Francisco, Boston and several other cities, Detroit is poised to end a contract with a company that provides facial recognition technology to its police department. And it’s not just cities that are backing away from the technology. In the wake of protests for racial justice, IBM, Microsoft and Amazon are now denying police departments access to their facial recognition technology.

The technology, which is theoretically capable of using computer vision to recognize individuals based on facial characteristics, has been found to be less accurate when dealing with people of color.

In June, Robert Williams, who is Black, “was wrongfully arrested because of a false face recognition match,” according to a complaint filed by the American Civil Liberties Union of Michigan. The ACLU said that Williams was handcuffed on his front lawn “in front of his wife and two terrified girls, ages two and five,” and detained overnight. In an op-ed for The Washington Post, Williams said that police, investigating a crime, “showed me a blurry surveillance camera photo of a black man and asked if it was me. I chuckled a bit. ‘No, that is not me.’ He showed me another photo and said, ‘So I guess this isn’t you either?’ I picked up the piece of paper, put it next to my face and said, ‘I hope you guys don’t think that all black men look alike.’” He added that the Michigan State Police facial recognition system “incorrectly spit out a photograph of me pulled from an old driver’s license picture.”

A 2019 study conducted by the federal government’s National Institute of Standards and Technology found higher rates of false positives for Asian and African American faces relative to images of Caucasians, where the “differentials often ranged from a factor of 10 to 100 times.” It also found high rates of false positives in one-to-one matching for Asians, African Americans and Native American groups with facial recognition systems developed in the U.S., but “there was no such dramatic difference in false positives in one-to-one matching between Asian and Caucasian faces for algorithms developed in Asia.” NIST also found higher rates of false positives for African American females.

The NIST study follows similar research published in 2018 in Proceedings of Machine Learning Research by MIT’s Joy Buolamwini and Timnit Gebru from Microsoft Research, which found that gender classification systems based on facial recognition “performed best for lighter individuals and males overall” and “worst for darker females.”

Aside from the inaccuracies, there are broader concerns about the use of facial recognition by law enforcement. In a 2019 blog post about bias errors in Amazon’s Rekognition software, Buolamwini said, “among the most concerning uses of facial analysis technology involve the bolstering of mass surveillance, the weaponization of AI, and harmful discrimination in law enforcement contexts.” She called for greater regulation and oversight.

Activists and researchers aren’t the only ones concerned. Major companies that have developed facial recognition software are pulling their technologies back from police departments because of their own concerns. Last month, Microsoft President Brad Smith said, “We will not sell facial-recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology.”

Amazon has also pulled back, announcing “a one-year moratorium on police use of Amazon’s facial recognition technology,” though the company “will continue to allow organizations like Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics to use Amazon Rekognition to help rescue human trafficking victims and reunite missing children with their families.”

IBM is getting out of the facial recognition business. In a letter to several members of Congress, IBM CEO Arvind Krishna wrote that the company “firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.” He said, “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

Can be beneficial

I’m not opposed to all use of facial recognition, as long as it’s used voluntarily, with no pressure or coercion and with respect for privacy. Apple, Google and Facebook use facial recognition to help sort photos. I used it recently to create a Mother’s Day slideshow for my wife by having Google show me photos of my daughter and son from among the tens of thousands of pictures I have stored in Google Photos. Facebook uses it to help people locate their own images, including as a way to determine whether their image is being used for impersonation or bullying. Apple uses it to group photos of a person together in its Photos app.

And, of course, Apple, Google and Microsoft use it to give people access to their phones and computers, making it unnecessary to type in a password or PIN or touch a fingerprint reader.

But there’s a big difference between voluntarily using technology to make your life easier and having it used to identify you without your permission. While I understand law enforcement’s desire to catch criminals more efficiently, I also understand why citizens, especially people of color who have a history of being abused by police, fear a mass surveillance system with a significant failure rate.

This conversation also raises the issue of biased algorithms. While one might expect a computer to be far less biased than some humans, the fact is that computers are programmed by humans and subject to the biases and blind spots of those who write the code. While I’m not suggesting that programmers who build facial recognition or AI technology are knowingly racist, they are limited by their cultural perspective, whether they realize it or not.
