Measly gains from models that predict readmissions
Researchers on the hunt for clues to predict which hospital patients will return for a second stay have so far made only marginal progress, said authors of a newly published study.
Researchers found that studies of models meant to flag patients at high risk of repeat hospital visits—using variables such as age, diagnosis and prior medical care—have been mostly limited in scope and not terribly effective.
“The thing that was evident from reviewing a couple dozen of these models was how complex readmission risk really is,” said Dr. Devan Kansagara, the study’s lead author and director of one of four evidence-based synthesis programs for the Veterans Affairs Department. Agency officials rely on analysis from the four programs to help guide VA policy.
The study was published in the Oct. 19 issue of the Journal of the American Medical Association.
Kansagara and six other researchers reviewed 30 studies of 26 models to predict hospital readmissions. “One of the bottom lines is that this is a complex phenomenon and the factors extend well beyond [a patient’s] illness and co-morbidity,” he said.
For hospitals, an effective model may better identify patients for programs to prevent avoidable hospital visits.
Federal officials have identified readmission rates as a measure of quality and have tied payment penalties to high rates starting in 2013.

Fourteen studies reviewed by Kansagara and his colleagues relied on retrospective data. Of those, nine were large U.S. studies that generally performed poorly, including three CMS studies for congestive heart failure, acute myocardial infarction and pneumonia.
Without more accurate models, Kansagara said, hospitals risk wasting resources by targeting the wrong patients. The findings also raise questions about whether public comparisons tied to payment incentives could be flawed, with unintended consequences, he said.
One comparison found no difference between a predictive model that considered several factors—age, sex, self-reported health, heart disease or diabetes diagnosis, prior medical use, prior hospitalization and help from an informal caregiver—and the best guess of doctors and medical residents and interns.
The model and doctors proved poor predictors of who would land back in the hospital. Nurses and case managers had even less success.
John Adams, a senior statistician at RAND Health, said he was not surprised by the findings. “It’s just plain hard,” he said of devising an accurate model. “Predictions are difficult.” Adams said he also understands the interest and hopes that predictive healthcare models inspire, despite their mostly disappointing performance so far.
However, Adams noted that the statistical scores commonly used to measure predictive models’ accuracy should not be the only criterion for their use. Even moderately accurate models may be useful for targeting prevention and case-management programs if the cost is offset by the gains, he said.
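The statistical scores Adams refers to are typically discrimination measures such as the c-statistic (area under the ROC curve): the probability that a randomly chosen readmitted patient is assigned a higher risk score than a randomly chosen patient who was not readmitted, where 0.5 is no better than chance. As a minimal sketch of how that score is computed—using made-up risk scores and outcomes, not data from the study:

```python
from itertools import product

def c_statistic(scores, outcomes):
    """Probability that a readmitted patient (outcome 1) scores higher
    than a non-readmitted patient (outcome 0); ties count as half.
    A value of 0.5 means the model is no better than chance."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    pairs = [(p > n) + 0.5 * (p == n) for p, n in product(pos, neg)]
    return sum(pairs) / len(pairs)

# Hypothetical risk scores and 30-day readmission outcomes
scores   = [0.9, 0.8, 0.6, 0.4, 0.3, 0.2]
outcomes = [1,   0,   1,   0,   0,   0]
print(c_statistic(scores, outcomes))  # 0.875
```

Adams’s point is that a middling c-statistic does not by itself rule a model out: what matters is whether acting on its flags saves more than the program costs.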
Research has explored a limited number of potentially telling indicators, Kansagara said.
Most studies considered whether patients had multiple diseases and how often, if at all, patients used medical care or were hospitalized, the authors wrote. Nearly all studies also used patients’ age and gender as factors to predict future hospital visits.
But other variables were frequently absent from research. Often omitted were measures of patients’ overall ability to function, the severity of their illness, and factors such as income, social support and access to care.
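Models of the kind described above typically combine such variables into an additive point score and flag patients above a cutoff for intervention. The sketch below is purely hypothetical—it is not one of the 26 reviewed models, and the weights and cutoff are illustrative only:

```python
def readmission_points(age, prior_admissions, prior_ed_visits, comorbidities):
    """Hypothetical additive risk score; every weight here is
    illustrative, not drawn from the study. Higher = assumed riskier."""
    points = 0
    points += 2 if age >= 75 else 1 if age >= 65 else 0
    points += min(prior_admissions, 3)   # cap so one variable can't dominate
    points += min(prior_ed_visits, 2)
    points += min(comorbidities, 4)
    return points

# Flag patients whose score meets an (arbitrary) cutoff of 5
patients = [
    {"age": 80, "prior_admissions": 2, "prior_ed_visits": 1, "comorbidities": 3},
    {"age": 50, "prior_admissions": 0, "prior_ed_visits": 0, "comorbidities": 1},
]
flagged = [p for p in patients if readmission_points(**p) >= 5]
print(len(flagged))  # 1
```

Note what such a score leaves out: exactly the factors the review found were rarely measured, such as functional status, illness severity, income and social support.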
Kansagara also had a practical motive for the research. He and a colleague have begun to design a program to prevent unnecessary hospital stays for vulnerable patients at the Portland Veterans Affairs Medical Center. With no highly accurate model available to help find high-risk patients or target prevention efforts, the physicians instead will carefully consider the characteristics of patients they hope to help, he said. “There is no off-the-shelf model.”