
Get the racism out of health care algorithms

- By FAYE FLAM

Machine learning algorithms have quietly seeped into the world of health care, to the point that automated systems sometimes make life-or-death decisions. The trend seems inevitable as medicine becomes more complex and costly. But there are downsides.

Last week, for example, researchers found a substantial racial bias in an algorithm that decides who needs extra care to avoid costly emergency room visits. This may seem surprising, given that the algorithm didn’t take any racial data into consideration. But it did rely on historical data, and there’s racism embedded in history, as well as fallible human assumptions that inevitably go into the making of algorithms and interpretation of their output.

Government and private insurance programs are increasingly adopting algorithms and artificial intelligence to predict our future health-care needs. Last spring, at a conference, Harvard Law professor Jonathan Zittrain compared artificial intelligence to asbestos. “It turns out that it’s all over the place, even though at no point did you explicitly install it,” he said. And by the time we recognize any potential dangers, it’s hard to remove.

In the paper on racial bias, published in last week’s issue of Science, the researchers wrote that they had access to additional data, including the self-reported race of people in the database. They found that patients who identified as black were much less likely than white patients of similar health status to be included in the group targeted for extra care.

The system was created with good intentions, said lead author Ziad Obermeyer, a health policy professor at the University of California, Berkeley. It was adopted in conjunction with the Affordable Care Act to direct medical attention to those most in need, thus both avoiding pain and suffering and saving money.

The race problem isn’t rooted in the algorithm itself, but in the way people have used it. The algorithm was predicting not future health status but future health costs, using data on people’s past health costs.

But patients who identify as black have historically received less health care than patients of equal health status who identify as white. The system was therefore less likely to flag black patients as eligible for extra preventive care, simply because less had been spent on their health care in the past. The crux of the problem lay in the assumption that health needs were equal to health costs.
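
To see how that plays out, consider a toy simulation. The sketch below is not the study’s algorithm or its data; the two groups, the 40 percent spending gap and every number in it are invented for illustration. It gives both groups identical underlying health needs, suppresses past spending for one of them, and then compares who gets flagged for extra care when past cost, rather than health, is the yardstick.

```python
# A toy illustration, not the study's algorithm or data: two hypothetical
# groups with identical health needs, but past spending suppressed for group B.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B (made-up groups)
need = rng.poisson(2.0, n)         # underlying health need, same for both groups

# Historical spending tracks need, but is assumed 40% lower for group B,
# standing in for less access to care in the past.
access = np.where(group == 1, 0.6, 1.0)
cost = need * access * 1000 + rng.normal(0, 300, n)

def flag_top_decile(score):
    """Flag the 10% of patients with the highest score for extra care."""
    return score >= np.quantile(score, 0.90)

flagged_by_cost = flag_top_decile(cost)                          # cost as the yardstick
flagged_by_need = flag_top_decile(need + rng.normal(0, 0.1, n))  # health as the yardstick

for label, flagged in [("cost", flagged_by_cost), ("health", flagged_by_need)]:
    a = flagged[group == 0].mean()
    b = flagged[group == 1].mean()
    print(f"using {label}: flagged {a:.1%} of group A vs {b:.1%} of group B")
# Using cost, group B is flagged far less often despite identical needs;
# using health, the flagged shares come out roughly equal.
```

Race never appears anywhere in that code, yet the cost-based flag reproduces the historical spending gap; the bias comes from the choice of label, not from the inputs.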

This same sort of bias has been found in algorithms used in the criminal justice system. There, too, the problem stems from the fact that they don’t predict what people think they predict. Though advertised as predicting future crime, they more accurately predict future arrests. And a growing body of evidence shows that racial bias affects who gets stopped by police, who gets arrested, and whose charges are more likely to be dropped.

Writing a commentary to accompany the health algorithm paper, Princeton African American Studies professor Ruha Benjamin illustrated the problem with a stark hypothetical case involving the real historical figure Henrietta Lacks. In the real story, Lacks came to Johns Hopkins Hospital in the 1950s with symptoms of cervical cancer. She was sent to what was known as the Negro ward, where her care was cheaper. She ultimately died from the disease.

In the hypothetical case, with a machine in charge, the same bias would be encoded, because the algorithm would use cost as a proxy for health, and would misinterpret her past low health-care costs. “On the basis of those results, she would be discharged, her health would deteriorate, and by the time she returns, the cancer has spread and she dies.”

Benjamin wrote that she’s concerned that most algorithms used in health care, housing and employment aren’t transparent, and so it could be impossible for researchers to find what might be substantial racial biases. Obermeyer, the lead author of the study, said that the case they researched was unique in that the algorithm was public, along with all the data on the patients, as well as the additional data on race and health conditions.

He was able to fix the algorithm so that it measured health rather than health costs – and as a result the number of self-identified black patients deemed eligible for additional care doubled.

Getting rid of algorithms altogether isn’t practical, and could cause more harm than good. In a 2017 commentary for the New England Journal of Medicine, Obermeyer wrote that medicine had become far too complex for the human mind to handle without the help of machines.

What we could work to improve is the lack of transparency. The complexity of the human body is nothing compared to the complexity of the health-care billing codes, which have spawned an industry of experts in using them to maximize billing. Something as simple as weighing a patient during an office visit, or, more ominously, prescribing opioids, can drive up billing, as physician and former drug executive Mike Magee writes in his book, “Code Blue: Inside America’s Medical Industrial Complex.”

In 2016, when I wrote about crime algorithms that were already being used in Philadelphia, one of the creators of the system, University of Pennsylvania professor Richard Berk, worried that people would put too much faith in its predictions. The danger in mixing medicine and algorithms might also lie in the trust algorithms engender, creating a false picture of simplicity, efficiency and objectivity that papers over a system that’s inherently convoluted, overpriced and unfair.

Faye Flam is a Bloomberg Opinion columnist. She has written for the Economist, the New York Times, the Washington Post, Psychology Today, Science and other publications. She has a degree in geophysics from the California Institute of Technology.
