New Haven Register (New Haven, CT)

Council votes to ban use of facial recognition tech

By Meghan Friedmann

“It is great to see a municipal legislative body be proactive on an issue like this.” David McGuire, ACLU of Conn.

HAMDEN — Joining cities including San Francisco, Baltimore and Cambridge, Mass., the comparatively small town of Hamden has placed itself at the forefront of a national issue by banning governmental use of facial recognition technology.

The Legislative Council on Monday approved an ordinance to that end.

It was a bipartisan effort spearheaded by two at-large council members — Brad Macdowall, a Democrat, and Austin Cesare, a Republican. They cited privacy concerns when they introduced the ordinance, and Macdowall pointed to reports that such software disproportionately misidentifies people of color.

While the director of the American Civil Liberties Union of Connecticut worried about how such technology could be abused and praised Hamden’s efforts to get ahead of it, one artificial intelligence company pushed back against the concerns about racial bias.

Clearview AI’s representatives said their facial recognition app is highly accurate and has the necessary safeguards in place to protect privacy.

The Hamden Police Department previously engaged in a free trial of Clearview AI, which came to light after BuzzFeed News published a leaked list of its clients. Police Department command staff has said the department did not use the software in any active cases, decided not to purchase the software and currently does not use any facial recognition technology.

Clearview AI has a database of more than three billion images that dwarfs the databases of similar products, according to Clearview and a New York Times Magazine report.

The company highlights the size of its database when marketing its product, but pending lawsuits against Clearview AI, including one filed by the national and Illinois branches of the ACLU, allege the firm violated privacy rights when it created the database by scraping images from social media platforms and other websites.

Clearview denies those allegations. Roger Rodriguez, the firm’s vice president of public sector, said all images in the database were publicly available and Clearview simply put them in one place.

Rodriguez, who previously worked for the New York Police Department, now “evangelizes” the use of Clearview’s software as a crime-fighting tool, he said. By rapidly identifying victims and suspects, he said, the software can accomplish in minutes what would take a team of police detectives hours.

Rodriguez gave Hearst Connecticut Media a live demonstration of the technology, dragging and dropping into the app a photo of a suspect in a child sexual abuse case.

Within seconds, a possible match popped onto the screen. The app had singled out a face in the background of somebody’s Instagram photo.

According to a New York Times Magazine report on the same case, authorities were able to identify the man, who ultimately was sentenced to 35 years in prison.

It was one of several examples Rodriguez gave, which he said represented successful uses of the technology by law enforcement. He also said agencies used Clearview AI to identify suspected participants in the Jan. 6 Capitol riot.

But privacy advocates and academic researchers have sounded the alarm about the potential misuse of such technology.

“It’s really important for people to understand that once the software is brought online at a police department, there’s no external oversight on this,” said David McGuire, executive director of the ACLU of Connecticut.

McGuire said police do not need to obtain warrants to search facial recognitio­n databases.

“There’s a real concern about having a dystopian future where the government can surveil whoever they want, wherever they want,” McGuire said. “It is great to see a municipal legislative body be proactive on an issue like this.”

Rodriguez said Clearview AI has protections in place to prevent the misuse of its technology.

Users running a search must input a case number, which gets verified by the system administrator, he said. Each law enforcement agency has a designated administrator, according to Rodriguez, who said that individual could be a member of the command staff.

While Clearview AI previously offered free trials of the software to anyone with a law enforcement email address, Rodriguez said the firm has since tightened its policies and now only offers the trial with command staff approval.

Because administrators must authorize individuals to use the app, not just anyone in a police department can access it, Rodriguez said. The technology also allows administrators to view precisely who has run searches and what searches they have run, according to Rodriguez, who showed Hearst Connecticut Media a sample screen of users’ search histories.

Debates over facial recognition technology also have raised questions of accuracy, and reports that it disproportionately misidentifies people of color have sparked concern among racial justice advocates.

“We know that this technology’s a nightmare for civil liberties,” McGuire said of facial recognition software generally. “We know that this technology has a record of inaccuracy when it comes to racial and gender identity, effectively opening the door to another version of profiling.”

But Clearview AI disputes such allegations when it comes to its own app.

Rodriguez claimed the algorithm is more than 99 percent accurate. An accuracy report provided by the company said when a test compared photos of 834 legislators against Clearview AI’s databases, it did not return any false matches.

Adam Nagy, a research associate at Harvard Law School’s Berkman Klein Center for Internet & Society, expressed concerns with the accuracy report. He suggested in an email that “elected figures with numerous public images available online ... are a poor stand-in for an ordinary person with a much smaller digital footprint.”

He also said the images used in the search — portraits from legislative bodies, according to the accuracy report — were dissimilar from “images that police will typically be using in an investigation which may be from a CCTV, body-cam, in-motion, or from a high angle.”

To evaluate how accurate an algorithm is, third parties need access to company databases, Nagy said.

Asked about Nagy’s statement, Clearview AI spokesman Josh Zecher said in an email the app is undergoing testing by the National Institute of Standards and Technology, or NIST. He called NIST the “gold standard” of facial recognition testing and said the results should be available by November.

Zecher also contended that CCTV footage can produce high-quality images due to improved technology and said he did not have concerns about running the test on images of public officials.

But even if the public were to assume all facial recognition technology is accurate, Nagy said, “there’s a bigger discussion about the role that such a technology might play in altering privacy as we know it and also chilling … political speech.”

When it comes to regulation, state and federal legislatures have not caught up with the technology, he said, and so important safeguards are not yet in place. Because of that, he leans against hastily adopting the technology.

“I think local jurisdictions are right to develop a cautious approach,” he said.

Asked whether agencies should wait for the legislature to regulate facial recognition technology, Rodriguez said he’s been hearing that question for a decade.

Zecher said towns don’t need to ban the technology — they just need to put the right requirements in place for responsible use.

Hamden council member Macdowall said Mayor Curt Balzano Leng needs to sign the ordinance before it takes effect. Leng did not immediately return a request for comment.

Photo: Hamden Memorial Town Hall. Clare Dignan / Hearst Connecticut Media file photo
