Kuwait Times

Algorithm bias in focus amid reckoning on ‘police racism’

Concerns over artificial intelligence programs


WASHINGTON: A wave of protests over law enforcement abuses has highlighted concerns over artificial intelligence programs like facial recognition, which critics say may reinforce racial bias. While the protests have focused on police misconduct, activists point out flaws that may lead to unfair applications of technologies for law enforcement, including facial recognition, predictive policing and “risk assessment” algorithms.

The issue came to the forefront recently with the wrongful arrest in Detroit of an African American man based on a flawed algorithm which identified him as a robbery suspect. Critics of facial recognition use in law enforcement say the case underscores the pervasive impact of a flawed technology. Mutale Nkonde, an AI researcher, said that even though the idea of bias in algorithms has been debated for years, the latest case and other incidents have driven home the message.

“What is different in this moment is we have explainability and people are really beginning to realize the way these algorithms are used for decision-making,” said Nkonde, a fellow at Stanford University’s Digital Society Lab and the Berkman Klein Center at Harvard. Amazon, IBM and Microsoft have said they would not sell facial recognition technology to law enforcement without rules to protect against unfair use. But many other vendors offer a range of technologies.

Secret algorithms

Nkonde said the technologies are only as good as the data they rely on. “We know the criminal justice system is biased, so any model you create is going to have ‘dirty data,’” she said. Daniel Castro of the Information Technology & Innovation Foundation, a Washington think tank, said, however, that it would be counterproductive to ban a technology which automates investigative tasks and enables police to be more productive. “There are (facial recognition) systems that are accurate, so we need to have more testing and transparency,” Castro said.

“Everyone is concerned about false identification, but that can happen whether it’s a person or a computer.” Seda Gurses, a researcher at the Netherlands-based Delft University of Technology, said one problem with analyzing the systems is that they use proprietary, secret algorithms, sometimes from multiple vendors. “This makes it very difficult to identify under what conditions the dataset was collected, what qualities these images had, how the algorithm was trained,” Gurses said.

Predictive limits

The use of artificial intelligence in “predictive policing,” which is growing in many cities, has also raised concerns over reinforcing bias. The systems have been touted as helping make better use of limited police budgets, but some research suggests they increase deployments to communities which have already been identified, rightly or wrongly, as high-crime zones.

These models “are susceptible to runaway feedback loops, where police are repeatedly sent back to the same neighborhoods regardless of the actual crime rate,” said a 2019 report by the AI Now Institute at New York University, based on a study of 13 cities using the technology. These systems may be gamed by “biased police data,” the report said. In a related matter, an outcry from academics prompted the cancellation of a research paper which claimed facial recognition algorithms could predict with 80 percent accuracy whether someone is likely to be a criminal.
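The feedback loop the report describes is easy to see in a toy simulation. The Python sketch below is purely illustrative and rests on invented assumptions (two neighborhoods with identical true crime rates, patrols allocated in proportion to past recorded crime, crimes recorded only where patrols go); it is not the AI Now Institute’s model or any vendor’s algorithm.

    # Toy model of a predictive-policing feedback loop.
    # All numbers here are illustrative assumptions, not data from the report.
    import random

    random.seed(0)
    true_rate = [0.05, 0.05]   # identical actual crime rates in neighborhoods A and B
    recorded = [10, 5]         # A starts with more recorded incidents (historical bias)
    PATROLS = 100              # patrols to allocate each year
    ENCOUNTERS = 50            # potential crime encounters per patrol per year

    for year in range(1, 6):
        total = sum(recorded)
        # Predictive step: allocate patrols in proportion to past recorded crime.
        patrols = [round(PATROLS * r / total) for r in recorded]
        for i in range(2):
            # Crimes are only recorded where patrols are present, so more patrols
            # in A produce more records in A, sending still more patrols next year.
            new = sum(random.random() < true_rate[i]
                      for _ in range(patrols[i] * ENCOUNTERS))
            recorded[i] += new
        print(f"year {year}: patrols A/B = {patrols}, recorded A/B = {recorded}")
    # Although both neighborhoods have the same true crime rate, the initial
    # 2:1 disparity in records never self-corrects: patrols keep returning to A.

In this sketch the share of patrols sent to each neighborhood is locked in by the historical records rather than by the actual crime rate, which is the dynamic the report warns about: the data the system learns from is itself a product of where police were sent before.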

WASHINGTON: Washington Police officers watch demonstrators as they march on the street at the fence in Lafayette Park, near the White House. — AFP
