Study: Facial recognition shows bias

People of color, women more likely to be misidentified

By Drew Harwell

WASHINGTON — Facial-recognition systems misidentified people of color more often than white people, a recently released landmark federal study shows, casting new doubts on a rapidly expanding investigative technique widely used by law enforcement across the United States.

Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. The study, which found a wide range of accuracy and performance across developers' systems, also showed that Native Americans had the highest false-positive rate of all ethnicities.

The faces of African American women were falsely identified more often in the kinds of searches used by police investigators, in which an image is compared with thousands or millions of others in hopes of identifying a suspect.

Algorithms developed in the U.S. also showed high error rates for "one-to-one" searches of Asians, African Americans, Native Americans and Pacific Islanders. Such searches are critical to such functions as cellphone sign-ons and airport boarding schemes, and errors could make it easier for impostors to gain access to those systems.

Women were more likely to be falsely identified than men, and the elderly and children were more likely to be misidentified than those in other age groups, the study found. Middle-aged white men generally benefited from the highest accuracy rates.

The National Institute of Standards and Technology, the federal laboratory known as NIST that develops standards for new technology, found "empirical evidence" that most facial-recognition algorithms exhibit "demographic differentials" that can worsen their accuracy based on a person's age, gender or race.

The study could fundamentally shake one of American law enforcement's fastest-growing tools for identifying criminal suspects and witnesses, which privacy advocates have argued is ushering in a dangerous new wave of government surveillance tools.

The FBI has logged more than 390,000 facial-recognition searches of state driver-license records and other federal and local databases since 2011, federal records show. But members of Congress this year have voiced anger over the technology's lack of regulation and its potential for discrimination and abuse.

The federal report confirms previous findings from studies showing similarly staggering error rates. Companies such as Amazon had criticized those studies, saying they reviewed outdated algorithms or used the systems improperly.

One of those researchers, Joy Buolamwini, said the study was a "comprehensive rebuttal" to skeptics of what researchers call "algorithmic bias."

The study, she said, is "a sobering reminder that facial recognition technology has consequential technical limitations alongside posing threats to civil rights and liberties."

Investigators said they did not know what caused the gap but hoped the findings would, as NIST computer scientist Patrick Grother said in a statement, prove "valuable to policymakers, developers and end users in thinking about the limitations and appropriate use of these algorithms."

Jay Stanley, a senior policy analyst at the American Civil Liberties Union, which sued federal agencies this year for records related to how they use the technology, said the research showed why government leaders should immediately halt its use.

"One false match can lead to missed flights, lengthy interrogations, tense police encounters, false arrests or worse," he said. "But the technology's flaws are only one concern. Face recognition technology — accurate or not — can enable undetectable, persistent and suspicionless surveillance on an unprecedented scale."

The NIST test examined most of the industry's leading systems, including 189 algorithms voluntarily submitted by 99 companies, academic institutions and other developers. The algorithms form the central building blocks for most of the facial-recognition systems around the world.

The algorithms came from a range of major tech companies and surveillance contractors, including Idemia, Intel, Microsoft, Panasonic, SenseTime and Vigilant Solutions.

Notably absent from the list was Amazon, which develops its own software, Rekognition, for sale to local police and federal investigators to help track down suspects.

NIST said Amazon did not submit its algorithm for testing. The company did not immediately offer comment but has said previously that its cloud-based service cannot be easily examined by the NIST test.

The NIST team tested the systems with about 18 million photos of more than 8 million people, all of which came from databases run by the State Department, the Department of Homeland Security and the FBI. No photos were taken from social media, video surveillance or the open internet, they said.

The test studied how algorithms perform on both "one-to-one" matching, used for unlocking a phone or verifying a passport, and "one-to-many" matching, used by police to scan for a suspect's face across a vast set of driver-license photos.

Photo (Mark Lennihan/AP): A study by a U.S. agency found that facial-recognition systems often perform unevenly based on race, gender or age.
