Why AI is difficult to review
Identifying patients at high risk of stroke is a challenge, but one that, if paired with appropriate interventions and treatments, could help physicians prevent strokes.
But there’s a ton of data to parse, which is why researchers at New York City-based Montefiore Medical Center are using machine learning to study which clinical, demographic and social-determinant variables are most associated with stroke. The goal is to use the findings to develop new tools that assess the risk of recurrent stroke.
Building a predictive model without AI would be difficult, given the number of variables the researchers wanted to include, said Dr. Charles Esenwa, a researcher and neurologist working on the project.
“That was the reason we experimented with machine learning,” said Esenwa, who’s also director of Montefiore’s Center for Comprehensive Stroke and an assistant professor at the Albert Einstein College of Medicine.
Despite its benefits, that ability to ingest massive amounts of data can also pose challenges. Even the researchers developing a machine-learning algorithm might not know which variables the algorithm is paying the most attention to, or how different variables are weighted, since the algorithm isn’t capable of describing its decision-making process.
That means that, unlike other types of software, it’s not always clear how an AI algorithm reaches its conclusions; the logic is often hidden in what experts call a “black box.” To add another layer of complication, some AI tools continually adapt in response to new data, changing how they make decisions over time.
“With traditional advanced analytics, you know all the variables upfront and you tune (the inputs) over time,” said Jason Joseph, chief digital and information officer at Spectrum Health in Grand Rapids, Michigan. “With AI you do not give it the formula. You don’t tell it what the variables are. You just give it a whole bunch of data … and it figures out what the variables are.”
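The dynamic Joseph describes can be sketched in a few lines of code. The example below is purely illustrative, not Montefiore’s actual model: the patient variables, the synthetic data and the single-threshold “stump” learner are all assumptions made for the demonstration. The outcome is secretly driven by one variable (`systolic_bp`), but the program is never told that; it ranks the variables by how well each one alone predicts the outcome, “figuring out what the variables are” from the data.

```python
import random

random.seed(0)

# Hypothetical synthetic data: each "patient" is a dict of variables.
# The outcome is driven mostly by systolic_bp, but the learner is not told this.
def make_patient():
    p = {
        "age": random.randint(40, 90),
        "systolic_bp": random.randint(100, 200),
        "cholesterol": random.randint(150, 300),
        "zip_noise": random.random(),  # deliberately irrelevant variable
    }
    # Assumed ground truth for illustration: high blood pressure drives risk.
    label = 1 if p["systolic_bp"] > 160 and random.random() < 0.9 else 0
    return p, label

data = [make_patient() for _ in range(500)]

def stump_accuracy(feature):
    """Accuracy of the best single-threshold rule on one variable
    (a one-node decision tree), trying the rule in both directions."""
    thresholds = sorted({p[feature] for p, _ in data})
    best = 0.0
    for t in thresholds:
        acc = sum((p[feature] > t) == bool(y) for p, y in data) / len(data)
        best = max(best, acc, 1 - acc)
    return best

# A crude "feature importance": how predictive is each variable on its own?
ranking = sorted(((stump_accuracy(f), f) for f in data[0][0]), reverse=True)
for acc, f in ranking:
    print(f"{f}: {acc:.2f}")
```

Running this, the blood-pressure variable rises to the top of the ranking while the noise variable hovers near the base rate — the learner recovered the important variable without being given the formula. Real systems do this at far larger scale (and with far more opaque models), which is exactly where the black-box problem begins.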