Algorithm grants, removes freedom
Tech calculations have major impact on prisons, policing
PHILADELPHIA — Darnell Gates sat at a long table in a downtown Philadelphia office building. He wore a black T-shirt with “California” in bright yellow letters on the chest. He had never been to the state, but he hoped to visit family there after finishing his probation.
When Gates was released from jail in 2018 — he had served time for running a car into a house in 2013 and later for violently threatening his former domestic partner — he was required to visit a probation office once a week after he had been deemed “high risk.”
He called the visits his “tail” and his “leash.” Eventually, his leash was stretched to every two weeks. Later, it became a month. Gates wasn’t told why. He complained that conversations with his probation officers were cold and impersonal. They rarely took the time to understand his rehabilitation.
He didn’t realize that an algorithm had tagged him high risk until he was told about it during an interview with the New York Times.
“What do you mean?” Gates, 30, asked. “You mean to tell me I’m dealing with all this because of a computer?”
In Philadelphia, an algorithm created by a University of Pennsylvania professor has helped dictate the experience of probationers for at least five years.
The algorithm is one of many making decisions about people’s lives in the United States and Europe. Authorities use so-called predictive algorithms to set police patrols, prison sentences and probation rules. In the Netherlands, an algorithm flagged welfare fraud risks. A British city rates which teenagers are most likely to become criminals.
Nearly every state has turned to this new sort of governance algorithm, according to the Electronic Privacy Information Center, a nonprofit dedicated to digital rights. Algorithm Watch, a watchdog in Berlin, has identified similar programs in at least 16 European countries.
As the practice has spread, U.N. investigators, civil rights lawyers, labor unions and community organizers have been pushing back.
They are angered by a growing dependence on automated systems that are taking humans and transparency out of the process. It is often not clear how the systems are making their decisions. Is gender a factor? Age? ZIP code? It’s hard to say, since many states and countries have few rules requiring that algorithm makers disclose their formulas.
They also worry that the biases — involving race, class and geography — of the people who create the algorithms are being baked into these systems, as ProPublica has reported. In San Jose, where an algorithm is used during arraignment hearings, an organization called Silicon Valley De-Bug interviews the family of each defendant, takes this personal information to each hearing and shares it with defenders as a kind of counterbalance to algorithms.
Two community organizers, the Media Mobilizing Project in Philadelphia and MediaJustice in Oakland, recently compiled a nationwide database of prediction algorithms. And Community Justice Exchange, a national organization that supports community organizers, is distributing a 50-page guide that advises organizers on how to confront the use of algorithms.
The algorithms are supposed to reduce the burden on understaffed agencies, cut government costs and — ideally — remove human bias. Opponents say governments haven’t shown much interest in learning what it means to take humans out of the decision-making. A recent U.N. report warned that governments risked “stumbling zombie-like into a digital welfare dystopia.”
Last year, Idaho passed a law specifying that the methods and data used in bail algorithms must be publicly available so the general public can understand how they work. In the Netherlands, a district court ruled Wednesday that the country’s welfarefraud software violated European human rights law, one of the first rulings against a government’s use of predictive algorithms.
“Where is my human interaction?” Gates asked, sitting next to his lawyer in the boardroom of the Philadelphia public defender’s office. “How do you win against a computer that is built to stop you? How do you stop something that predetermines your fate?”
On a recent Thursday, Todd Stephens sat in a food court across the street from Citizens Bank Park, home of the Philadelphia Phillies. He was explaining the latest effort to remake state sentencing practices with a predictive algorithm.
Predictive algorithms, at their most basic, work by using historical data to calculate the probability of events, similar to how a sports book determines the odds for a game or pollsters forecast an election result.
The technology builds on statistical techniques that have been used for decades, often for determining risk. They have been supercharged thanks to increases in affordable computing power and available data.
The private sector uses such tools all the time, to predict how likely people are to default on a loan, get sick or be in a car wreck, or whether they will click on an internet ad. Governments, which hold vast amounts of data about the public, have been eager to tap their potential.
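At its most basic, that kind of prediction is a matter of reading probabilities off historical records, much like a sports book setting odds. The following is a hypothetical sketch only; the data, groupings and outcome labels are invented and do not come from any agency's actual model.

```python
# Hypothetical sketch: estimating the probability of an event from
# historical data, the way a sports book or pollster might.
# All records below are invented for illustration.

# Invented historical records: (age_group, event_occurred) pairs.
history = [
    ("under_25", True), ("under_25", False), ("under_25", True),
    ("25_to_40", False), ("25_to_40", False), ("25_to_40", True),
    ("over_40", False), ("over_40", False), ("over_40", False),
]

def estimated_risk(age_group):
    """Share of past cases in the same group where the event occurred."""
    outcomes = [occurred for group, occurred in history if group == age_group]
    return sum(outcomes) / len(outcomes)

print(round(estimated_risk("under_25"), 2))  # 2 of 3 past cases -> 0.67
```

Real systems use far more variables and statistical machinery, but the core move is the same: a new case inherits the historical rate of the cases it most resembles.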
A Republican member of the Pennsylvania House of Representatives, Stephens is part of a state commission working to adopt the technology. Like many states, Pennsylvania has mandated that an algorithm be developed to help courts decide the sentence after someone is convicted.
The idea, Stephens said, was to predict how likely people were to commit another crime by collecting information about them and comparing that to statistics describing known offenders. That might include age, sex and past and current convictions.
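One simple way such a comparison can work is a weighted checklist: each factor present adds points, and a total above some cutoff puts a person in a higher risk band. The weights, factors and threshold below are invented for illustration and are not the commission's actual formula.

```python
# Hypothetical weighted risk score combining factors like those the
# commission describes (age, sex, prior and current convictions).
# Weights and the "high risk" threshold are invented.
WEIGHTS = {
    "age_under_25": 2,
    "prior_convictions": 3,
    "current_violent_offense": 4,
}

def risk_score(person):
    """Sum the invented weights for each factor present in the record."""
    return sum(w for factor, w in WEIGHTS.items() if person.get(factor))

def risk_band(score, high_threshold=5):
    """Map a numeric score to a coarse label, as such systems often do."""
    return "high" if score >= high_threshold else "standard"

person = {"age_under_25": True, "prior_convictions": True}
print(risk_band(risk_score(person)))  # 2 + 3 = 5 -> "high"
```

Even a toy version like this shows why critics demand the formula: a person's label can hinge entirely on which factors the designers chose and how heavily each was weighted.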
The commission had proposed a plan that would have leaned on information provided by county probation departments. But the American Civil Liberties Union and community groups protested this plan during public meetings in the fall. They worried it would expand the power of predictive algorithms used for probation, including the one that tagged Gates.
“We walked into a hornet’s nest I didn’t even know existed,” Stephens said.
In response to the protests, the state commission recommended a much simpler setup based on software already used by the state courts. But even this algorithm is difficult for a layperson to understand. Asked to explain it, Stephens suggested speaking with another commissioner.
Nyssa Taylor, criminal justice policy counsel with the Philadelphia ACLU, was among the protesters. She worries that algorithms will exacerbate rather than reduce racial bias. Even if governments share how the systems arrive at their decisions — which happens in Philadelphia in some cases — the math is sometimes too complex for most people to wrap their heads around.
Various algorithms embraced by the Philadelphia criminal justice system were designed by Richard Berk, a professor of criminology and statistics at Penn. These algorithms do not use ZIP codes or other location data that could be a proxy for race, he said. And although he acknowledged that a layperson couldn’t easily understand the algorithm’s decisions, he said human judgment had the same problem.
“All machinelearning algorithms are black boxes, but the human brain is also a black box,” Berk said. “If a judge decides they are going to put you away for 20 years, that is a black box.”
Last year in Rotterdam, Netherlands, a rumor circulating in two predominantly low-income and immigrant neighborhoods claimed that the city government had begun using an experimental algorithm to catch citizens who were committing welfare and tax fraud.
Mohamed Saidi learned about it from a WhatsApp message that he initially thought was a joke. Mohamed Bouchkhachakhe first heard from his mother, who had been told by a friend. Driss Tabghi got word from a local union official.
The rumor turned out to be true.
The Dutch program, System Risk Indication, scans data from government authorities to flag people who may be claiming unemployment aid when they are working or a housing subsidy for living alone when they are living with several others.
The agency that runs the program, the Ministry of Social Affairs and Employment, said the data could include income, debt, education, property, rent, car ownership, home address and the welfare benefits received for children, housing and health care.
The algorithm produces “risk reports” on individuals who should be questioned by investigators. In Rotterdam, where the system was most recently used, 1,263 reports were produced in two neighborhoods.
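The cross-referencing the article describes amounts to checking whether a claimed status contradicts other government records. The sketch below is hypothetical; the field names and rules are invented to illustrate the idea, not drawn from the Dutch system itself.

```python
# Hypothetical sketch of cross-referencing government data sources to
# flag contradictions, as the article describes. Field names and the
# two rules are invented for illustration.

def flag_for_review(record):
    """Return the reasons, if any, that a record contradicts itself."""
    reasons = []
    # Rule 1: unemployment aid claimed while income is being reported.
    if record.get("claims_unemployment_aid") and record.get("reported_income", 0) > 0:
        reasons.append("unemployment aid claimed alongside reported income")
    # Rule 2: single-occupancy housing subsidy with several registered occupants.
    if record.get("claims_single_occupancy_subsidy") and record.get("registered_occupants", 1) > 1:
        reasons.append("single-occupancy subsidy with multiple occupants")
    return reasons

record = {"claims_unemployment_aid": True, "reported_income": 1200}
print(flag_for_review(record))
```

A flagged record here is only a lead for a human investigator, which matches how the article says the "risk reports" are meant to be used.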
“You’re putting me in a system that I didn’t even know existed,” said Bouchkhachakhe, who works for a logistics company.
Similar programs exist elsewhere. In North Carolina, IBM software has been used to identify Medicaid fraud. In London, local councils tested software to identify those who may be wrongly claiming a housing benefit. Systems are used to flag children who may be at risk of abuse.
In Rotterdam, opposition built after word about the techniques spread. Privacy rights groups, civil rights lawyers and the largest national labor union rallied citizens to fight the effort.
“They will not tell you if you are on the register,” said Tijmen Wisman, an assistant professor of privacy law who runs a Dutch privacy group. He helped organize a meeting for roughly 75 residents in the affected neighborhoods, many taking video on their smartphones to share with neighbors.
The district court that sided with the opponents ordered an immediate halt to the algorithm’s use. In the closely watched case, which is seen as setting a precedent in Europe about government use of predictive algorithms, the court said that the welfare program lacked privacy safeguards and that the government was inadequately transparent about how it worked. The decision can be appealed.
Once a week in Bristol, England, a team gathers in a conference room to review the latest results from an algorithm meant to identify the most at-risk youths in the city and review caseloads. Representatives from the police and children’s services and a member of the team that designed the software typically attend to scan the list of names.
With youth violence and crime on the rise, and with many youth programs and community centers where young people gathered now closed, the local government turned to software to help identify the children most in need. Officials there say the work provides evidence that the technology can work if coupled with a human touch.
Last year, Bristol introduced a program that creates a risk score based on data pulled from police reports, social benefits and other government records. The system tallies crime data, housing information and any known links to others with high risk scores, and if the youth’s parents were involved in a domestic incident. Schools feed in attendance records.
“You can get quite a complete picture,” said Tom Fowler, 29, the data scientist who helped create the youth scoring system for the Bristol government.
The scores fluctuate depending on how recently the youths had an incident like a school suspension. The goal at the weekly meetings is to identify children at risk of being recruited into crime.
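A score that "fluctuates depending on how recently" something happened suggests recency weighting: a recent incident counts in full, and its contribution decays over time. The sketch below is a hypothetical illustration of that idea; the half-life, incident types and weights are invented, not Bristol's actual parameters.

```python
# Hypothetical sketch of a recency-weighted risk score, as the article
# describes: recent incidents count more, older ones fade. The 90-day
# half-life and the incident weights are invented.

def recency_weight(days_ago, half_life_days=90):
    """An incident's contribution halves every `half_life_days`."""
    return 0.5 ** (days_ago / half_life_days)

INCIDENT_WEIGHTS = {
    "school_suspension": 2.0,
    "police_contact": 3.0,
    "link_to_high_risk_peer": 1.5,
}

def risk_score(incidents):
    """Sum incident weights, discounted by how long ago each occurred."""
    return sum(INCIDENT_WEIGHTS[kind] * recency_weight(days_ago)
               for kind, days_ago in incidents)

print(risk_score([("school_suspension", 0)]))   # today: full weight, 2.0
print(risk_score([("school_suspension", 90)]))  # one half-life old: 1.0
```

Under a scheme like this, two youths with identical histories can carry different scores simply because one's incidents are older, which is what makes the weekly review meetings necessary.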
There’s evidence that the algorithm identifies the right people, but the city is still figuring out how to translate the data into action. Last year, a teenager who had one of the highest risk scores stabbed someone to death. In a review of the killing, city officials concluded that they had taken the right steps. Fowler said a person can’t be arrested simply because of the algorithm.
“He had a social worker and one-on-one coaching,” said Fowler, who now works for a data-analytics company. “He made a really bad decision.
“You can’t control for that. Data can only go so far. But it’s pretty much the worst thing that can happen. That makes you do a bit of soul searching whether you did everything you could.”
Sitting in the Philadelphia public defender’s office, Gates said he was an easy person to read, pointing to the tattoos on his arms, which were meant to look like the bones under his skin. He understands machines. From a young age, he enjoyed dismantling computers and smartphones before putting them back together.
But Gates, whom we met through the defender’s office, believed that a person could read him better than a machine.
“Does a computer know I might have to go to a doctor’s appointment on Friday at 2 o’clock?” he asked.
Visiting the probation office so often can prevent him from getting the rest of his life on track. “How is it going to understand me as it is dictating everything that I have to do?” Gates asked.
Several weeks after his interview with the Times, he was allowed to make a short trip to Puerto Rico after a personal appeal to a judge. He always felt comfortable in front of his judge. The experience showed him the importance of a human touch.
“I can’t explain my situation to a computer,” Gates said. “But I can sit here and interact with you, and you can see my expressions and what I am going through.”