AI makes a bad judge
Imagine a technology so advanced that law enforcement and the judicial system rely on it not to investigate or punish past crimes but to prevent future crimes from happening, with dire consequences for those identified as felons-to-be.
Film fans will be familiar with this setup as the plot of Steven Spielberg’s 2002 sci-fi thriller “Minority Report,” starring Tom Cruise as the luckless “precrime” detective sent on the run after he’s suddenly flagged as the culprit in a murder-yet-to-be. Spoiler alert: The movie ends with the precrime system being scrapped.
It is a truism of modern life that yesterday’s science fiction is today’s breaking headlines, as demonstrated in recent research into the widespread use of artificial intelligence programs to do, well, everything from journalism to cancer diagnostics. (Side note: This editorial was crafted by human hands, we swear.)
Of particular concern is the use of AI to help law enforcement, judges and the probation system “predict” the likelihood of recidivism among those convicted of crimes. A ProPublica report from as far back as 2016 detailed already-abundant concerns from researchers as well as the U.S. Justice Department that the algorithms used to produce “risk assessments” for states and localities across the nation were generating results that weren’t especially accurate and that seriously disadvantaged Black offenders.
Seven years is a long time in the development of this kind of technology, but new research from University at Albany philosophy professor Jason D’Cruz and IBM artificial intelligence programmer Kush Varshney concludes that the technology still hasn’t developed to the point that it trumps human empathy.
As the Times Union’s Kathleen Moore recently reported, Mr. D’Cruz and Mr. Varshney made the case that AI lags human understanding when it comes to gauging what Mr. D’Cruz called “excusing conditions,” the motivations that might explain why an otherwise law-abiding individual might end up kiting a check. Technology has a long way to go before an AI can be imbued with the kind of empathy that, imperfect though it might be, humans mix with data to make their assessments.
There’s limited comfort to be found in the assurances of professionals such as Timothy Ferrara, Schenectady’s probation director, who told Ms. Moore that many in his field are fully aware of the limitations of this sort of predictive technology and retain the ability to override a machine-produced score they feel fails to account for improvements in an offender’s conduct or environment.
Recent media attention to AI’s rapid development, such as New York Times tech writer Kevin Roose’s memorable exchange with a chatbot that left him as shaken as if he had been locked in a cell with a creepy adolescent, reminds us that our programs often reflect the imperfections of their creators. Humans need to remain in the driver’s seat for any decision that affects an individual’s path through the justice system, unless we want it to become the public-policy version of another tale of technology run amok, one bearing the title “Frankenstein.”