The Morning Call

Race a bias in facial recognition results

Lighter skin ID’d better than faces of people of color

By Rachel Siegel

When Rep. Rashida Tlaib, D-Mich., was invited to tour the Detroit Police Department’s Real Time Crime Center, the purpose was to explain how officers use facial recognition when policing the streets of a city that is more than 80% black.

But the meeting quickly deteriorated when Tlaib told Chief James Craig that “analysts need to be African Americans, not people that are not,” because “non-African Americans think African Americans all look the same.”

Craig, who is African American, said the suggestion that white analysts would be less adept at their jobs than people of color was “insulting.”

Tlaib’s comments, however, were consistent with an enduring debate that rages around facial recognition software: The systems more accurately identify lighter-skinned faces than they do people of color. Researchers and numerous studies argue that’s because the software is trained on vast sets of images that skew heavily toward white men, leaving women and minorities vulnerable to holes in mammoth databases.

That can be especially risky, critics argue, as facial recognition is embraced by government and law enforcement.

Critics also worry that people aren’t being trained adequately in how to use the technology and interpret its results. Researchers say that law enforcement agencies don’t always disclose how their analysts are taught to use the systems, or who is conducting the training. And they worry that even if a department claims a strong training protocol, people will inevitably let biases about gender and race creep into how they assess a match.

“There’s a huge amount of reliance that this is going to be accurate if it spits out a match, or a candidate list of five people,” said Jake Laperruque, senior counsel at The Constitution Project at the Project on Government Oversight. “And that’s just not the case.”

Camera quality, lighting and the size of a system’s database can all affect facial recognition’s accuracy. But researchers argue that improving those factors doesn’t erase a system’s hardwired biases. One 2018 study conducted by Joy Buolamwini of the MIT Media Lab found that the technology is correct 99% of the time with photos of white men. But the software misidentified the gender as often as 35% of the time when viewing an image of a darker-skinned woman.

In January, researchers with MIT Media Lab reported that facial-recognition software developed by Amazon and marketed to local and federal law enforcement also fell short on basic accuracy tests, including correctly identifying a person’s gender. Specifically, Amazon’s Rekognition system was perfect in predicting the gender of lighter-skinned men, the researchers said, but misidentified the gender of darker-skinned women in roughly 30% of their tests.

Amazon disputed those findings, saying the research used algorithms that work differently from the facial-recognition systems used by police departments. (Amazon founder and chief executive Jeff Bezos owns The Washington Post.)

But the results, researchers argue, offer a cautionary tale for millions of Americans. A 2016 report by Georgetown Law researchers found that the facial images of half of all American adults, or more than 117 million people, were accessible in a law-enforcement facial-recognition database.

Greater scrutiny on these databases has spurred some progress. ImageNet, an online image database, recently said it would remove 600,000 pictures of people from its system after an art project showed the severity of the bias wired into its artificial intelligence. Artist Trevor Paglen and AI researcher Kate Crawford showed how the system could generate derogatory results when people uploaded photos of themselves. A woman might be called a “slut,” for example, and an African American user could be labeled a “wrongdoer” or with a racial epithet.

Unlike many social and policy debates gripping Washington, facial recognition has drawn sharp criticism from Republican and Democratic lawmakers alike. In May, members of the House Oversight and Reform Committee jointly condemned the technology, charging that it was inaccurate and threatened Americans’ privacy and freedom of expression. But there are no current federal rules governing artificial intelligence or facial recognition software.

“We have a technology that was created and designed by one demographic, that is only mostly effective on that one demographic, and they’re trying to sell it and impose it on the entirety of the country,” Rep. Alexandria Ocasio-Cortez, D-N.Y., said earlier this year.

Detroit’s police board approved the use of facial recognition software last month. But the technology has not been embraced by all locales. San Francisco and Oakland, California, along with Somerville, Massachusetts, have banned local government agencies, including police departments, from using the software. In September, California lawmakers temporarily banned state and local law enforcement from using facial-recognition software in body cameras.

Beyond the software itself, critics worry that users will put too much faith in facial recognition, even as they acknowledge the software’s pitfalls. Laperruque pointed to the “CSI Effect” — when people come to believe in the technology’s infallibility because of how they see it used in crime shows on TV.

Jennifer Lynch, surveillance litigation director of the Electronic Frontier Foundation, pointed to studies showing how poorly people identify images of people they don’t know — especially when it comes to people of different races or ethnicities.

Researchers argue that among police departments that use the software, there aren’t always clear or transparent standards for how officials are trained on the systems, or how much weight is given to the results.

“The police departments say, ‘We are not considering this an exact match because we have humans that look at this after the fact and verify the technology,’ ” Lynch said, “which is problematic because humans are not good at identifying people.”

The back and forth between Tlaib and Craig was tense, The Detroit News reported. Tlaib described seeing people on the House floor misidentify longtime Democratic congressmen John Lewis and Elijah Cummings, both of whom are black.

But Craig said that the department had “a diverse group of crime analysts” and that Tlaib’s criticism was “a slap in the face to all the men and women in the crime center.”

Speaking to a local news channel, Tlaib said she stood by her comments “that facial recognition technology is broken.” Tlaib said that as an elected official, her job was to make sure residents “are not going to be misidentified and detained or falsely arrested because [Craig] is using broken technology.”

ANDREW HARNIK/AP — Photo caption: Rep. Rashida Tlaib, D-Mich., said “facial recognition technology is broken” after touring the Detroit Police Department.
