The Mercury News

States reconsider use of facial recognition software by police.

Local governments weighing benefits against privacy

By Julie Carr Smyth

COLUMBUS, OHIO >> Law enforcement agencies across the U.S. have used facial recognition technology to solve homicides and bust human traffickers, but concern about its accuracy and the growing pervasiveness of video surveillance is leading some state lawmakers to hit the pause button.

At least seven states and nearly two dozen cities have limited government use of the technology amid fears over civil rights violations, racial bias and invasion of privacy. Debate over additional bans, limits and reporting requirements has been underway in about 20 state capitals this legislative session, according to data compiled by the Electronic Privacy Information Center.

Lawmakers say they want to give themselves time to evaluate how and why the technology is being used.

“I think people are just freaked out, and rightfully so, about this technology,” said Freddy Martinez, director of Lucy Parsons Labs, a Chicago nonprofit that specializes in citizens’ digital rights. “It’s one of those rare issues that’s seen bipartisan support, in that nobody wants to be tracked everywhere they go, especially when you don’t have a choice.”

The issue caught fire in statehouses after law enforcement applied facial recognition technology to images taken from street cameras during last year’s racial justice demonstrations — and in some cases used those images to make arrests. Complaints about false identifications prompted Amazon, Microsoft and IBM to pause sales of their software to police, though most departments hire lesser-known firms that specialize in police contracts. Wrongful arrests of Black men have gained attention in Detroit and New Jersey after the technology was blamed for mistaking their images for those of others.

The American Civil Liberties Union began raising questions about the technology years ago, citing studies that found higher error rates for facial recognition software used to identify people of color. Concerns also have grown because of increasing awareness of the Chinese government’s extensive video surveillance system, especially as it’s been employed in a region home to one of China’s largely Muslim ethnic minority populations.

In March, the ACLU sued Clearview AI, a company that provides facial recognition services to law enforcement and private companies, contending it illegally stockpiled images of 3 billion people scraped from internet sites without their knowledge or permission.

For many, news of that stockpile, first reported by The New York Times, raised concerns that the type of surveillance seen in China could happen in the U.S. and other countries. Cities that passed bans — including Boston; Minneapolis; San Francisco; Oakland, California; and Portland, Oregon — listed concerns about police using the technology secretly among their reasons.

Hoan Ton-That, CEO of Clearview AI, said his company collects only publicly available photos from the open internet that are accessible “from any computer anywhere in the world.” He said its database cannot be used for surveillance.

Ton-That said that, as a person of mixed race, it is important to him that the technology is not biased.

“Unlike other facial recognition technologies that have misidentified people of color, an independent study has indicated that Clearview AI has no racial bias,” he said in a statement. “We know of no instance where Clearview AI’s technology has resulted in a wrongful arrest.”

But the pushback against the technology has continued.

Last year, New York imposed a two-year moratorium on use of the technology in schools after an upstate district adopted facial recognition as part of its security plans and was sued. A state ACLU executive called it “flawed and racially-biased” technology that didn’t belong in schools.

That came on the heels of the nation’s first ban on government use of the technology, in San Francisco in 2019, and a statewide three-year moratorium on police departments using facial recognition from videos shot with body cameras that California imposed later that year.

No such restrictions exist at the federal level. Variants of facial recognition technology were used, including by ordinary people, to help identify those who took part in the deadly insurrection at the U.S. Capitol on Jan. 6. Police also used it at some protests last year staged against coronavirus-related mask mandates, and some activists have used it to identify police officers engaged in misconduct.

This February, Virginia lawmakers passed one of the most restrictive bans of them all. It prohibits local law enforcement agencies and campus police departments — though not state police — from purchasing or using facial recognition technology unless expressly authorized by the state legislature.

Police groups are pushing for the prohibitions to be revisited.

“It’s fear-mongering politics at its worst,” said Jonathan Thompson, CEO and executive director of the National Sheriffs’ Association.

He said facial recognitio­n technology is just one tool used by police agencies — and not to the extent politician­s suggest.

“I’ve never heard of anybody sitting around a computer monitor searching for people all day, every day. It doesn’t work that way,” he said. “Agencies have rules. They have governance of how and who has access to these databases. They have to have a legitimate, rational reason for doing it.”

Thompson’s association produced a report detailing example after example of the technology being used for good to snag drug dealers, to solve murders and missing persons cases, and to identify and rescue human trafficking victims. Most often, a face is compared against a database of known subjects. The vast majority of images are criminal mugshots, he said, not driver’s license photos or random pictures of individuals.

A new Massachusetts law tries to strike a balance between civilian and police concerns. It allows police to benefit from the technology while adding protections that could prevent false arrests.

In Ohio, Republican Attorney General Dave Yost headed off a restrictive law on facial recognition data — at least so far — by conducting his own investigation into the state’s images database in response to a Georgetown University Law Center report that found immigration officials were applying the technology to driver’s license photos in some states.

Yost’s review found local, state and federal authorities didn’t use driver’s license or other photos “to conduct mass surveillance, broad dragnets, political targeting or other illegitimate uses.”

Martinez, of Lucy Parsons Labs, said he’s not reassured.

“I really do think this is one of these tools, let’s say, science shouldn’t be using. It’s uniquely bad in ways other technologies are not,” he said. “People nationally want police to do their jobs, but there are certain lines we don’t let them cross. This crosses that line.”

MARK LENNIHAN — THE ASSOCIATED PRESS ARCHIVES: State lawmakers across the U.S. are reconsidering the trade-offs of facial recognition technology amid civil rights and racial bias concerns.
JOHN MINCHILLO — THE ASSOCIATED PRESS ARCHIVES: Protesters march in New York following the death of George Floyd. Companies paused sales of software to police after Floyd’s death.
