ARTIFICIAL INTELLIGENCE

Getting Smarter About AI

Pioneering healthcare organizations see benefits from AI.

Health Data Management | By Linda Wilson

Pioneering healthcare organizations aim to determine bottom-line benefits of artificial intelligence.

Artificial intelligence, a broad set of technologies that enable machines to mimic the human brain’s ability to process information, learn and adapt, holds potential in healthcare to improve patient outcomes and reduce costs, but it hasn’t yet been widely adopted in daily clinical practice.

However, some leading healthcare organizations, such as the Cleveland Clinic and Intermountain Healthcare, are beginning to build the infrastructure and data science capabilities to use AI to deliver clinical and financial benefits.

While some industries are using AI programs designed to recognize speech, written language or visual data, or to solve problems, health systems are gaining experience with machine learning, a subset of AI focused on finding patterns or relationships in data in an iterative, or learning, fashion. Early projects have demonstrated promising results.
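To make "iterative learning" concrete, the minimal Python sketch below (Python being one of the languages the Cleveland Clinic supports, as noted later in this story) fits a simple classifier over repeated passes through synthetic data. The features and outcome are invented for illustration and stand in for real patient records.

```python
# A minimal, hypothetical illustration of "iterative" machine learning:
# a classifier improves its fit over repeated passes through the data.
# The synthetic features and labels below stand in for real patient data.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                  # five made-up numeric features per "patient"
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic outcome the model must learn

model = SGDClassifier(random_state=0)
for epoch in range(10):                         # each pass refines the learned pattern
    model.partial_fit(X, y, classes=[0, 1])
    print(f"pass {epoch + 1}: accuracy {accuracy_score(y, model.predict(X)):.3f}")
```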

In some of these cases, healthcare organizations have purchased a commercial tool to help them reach a specific clinical goal, such as reducing hospital readmission rates or predicting which patients are at highest risk of becoming expensive cases. The work often incorporates machine learning techniques to hone existing models or clinical processes, with the aim of improving accuracy.

Betting on the future

The potential for AI to uncover actionable insights from electronic patient data has convinced venture capitalists and software developers alike to invest in the healthcare arena. In a 2016 report, Frost & Sullivan predicted that revenue in the healthcare AI market would explode from $633.8 million in 2014 to nearly $6.7 billion in 2021.

The Cleveland Clinic is one health system actively working with machine learning.

It spent more than three years building an infrastructure to support advanced analytics. The technology platform includes both a structured database environment using Teradata and a Hadoop database environment using Cloudera. The health system uses analytics tools from SAS and supports open-source programming languages, such as Python and R.

“We also recognize that we are not always going to be starting from scratch,” says Christopher Donovan, executive director of enterprise information management and analytics in the division of finance and information technology at the Cleveland Clinic. “We also think about how we are going to engage with partners in the system.”

For example, the Cleveland Clinic developed a test for IBM’s Watson Health cognitive platform to see if Watson could create a problem list based on the information, both structured and unstructured, in a patient’s electronic health record. Using de-identified data, “they were able to get some good results with it generating a problem list,” Donovan says. The next step is to figure out how to take that work beyond the research phase and apply it to clinical decision support, he adds.

The Cleveland Clinic also has used machine learning to develop applications from scratch, such as a set of tools to identify patients at risk of racking up big medical bills.

As a first step, the Clinic’s team used a variety of mathematical methods, including neural networks, decision trees and gradient boosting, to develop algorithms that rank the patients assigned to care coordinators. The scores, which are updated monthly, augment existing registries to help care coordinators decide how to manage their caseloads.
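The Clinic has not published these models, so the following is only a rough Python sketch of the general pattern: train a gradient boosting classifier, one of the methods named above, on historical patients with a known high-cost outcome, then score and rank the current caseload. The file names, feature names and outcome column are invented for illustration.

```python
# Hypothetical sketch: rank patients by predicted risk of becoming a high-cost case.
# Feature names, files and columns are invented; the Clinic's actual models are not public.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Assume a historical table with one row per patient and a known outcome.
history = pd.read_csv("historical_patients.csv")   # hypothetical file
features = ["age", "chronic_condition_count", "ed_visits_12mo", "prior_cost"]
X, y = history[features], history["became_high_cost"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Score the current caseload and hand coordinators a ranked list, refreshed monthly.
caseload = pd.read_csv("current_caseload.csv")     # hypothetical file
caseload["risk_score"] = model.predict_proba(caseload[features])[:, 1]
print(caseload.sort_values("risk_score", ascending=False).head(10))
```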

The team also developed algorithms to identify patients who are not enrolled in the care coordination program but are at risk of becoming high-cost cases in the future. However, that tool has not yet been incorporated into the clinical workflow, an essential step to enable case managers to intervene. “We might be interacting with those patients a little differently,” says Joseph Dorocak, senior financial analyst at the Cleveland Clinic.

At Ohio State University Wexner Medical Center, researchers in the radiology informatics lab also are using machine learning to build tools that help clinicians manage their workloads. For example, they developed an algorithm that prioritizes computed tomography images of the head based on whether there are critical findings.

Radiologists learn of the potential seriousness of a given imaging study when a referring clinician labels it as stat, explains Luciano Prevedello, MD, division chief in medical imaging informatics, adding that this is not an ideal system for prioritizing workflow in radiology. Sometimes images show critical findings the ordering physicians didn’t anticipate, he says. And even studies labeled stat, about 40 percent of all studies, vary in degree of urgency.

To build the tool, researchers trained an algorithm using a data set of 2,583 head images and validated the tool with a second set of 100 head images. The next step is to set up a clinical trial. “This is an important step to see if what we developed in the lab can be expanded to a clinical setting,” Prevedello says.
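The group’s code and data are not reproduced here; the hedged sketch below only illustrates the train-then-validate pattern the researchers describe, with a generic classifier standing in for the actual imaging model and invented file names for features extracted from the two image sets.

```python
# Hypothetical sketch of the train-then-validate pattern for a "critical finding" flag.
# A real study would use an image model; here a generic classifier stands in, operating
# on precomputed image features, and all file names are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score, precision_score

# Assume features and labels were extracted earlier and saved as arrays.
X_train = np.load("train_features.npy")        # shape (2583, n_features), hypothetical
y_train = np.load("train_labels.npy")          # 1 = critical finding, 0 = not
X_val = np.load("validation_features.npy")     # separate set of 100 studies
y_val = np.load("validation_labels.npy")

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
pred = model.predict(X_val)

# Sensitivity matters most here: a missed critical head CT is the costly error.
print("sensitivity:", recall_score(y_val, pred))
print("precision:  ", precision_score(y_val, pred))
```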

Commercial solutions

Instead of starting from scratch, Intermountain Healthcare has purchased commercial products to help improve its clinical processes and patient outcomes.

For example, Intermountain, which has 22 hospitals, 1,400 employed physicians and more than 185 clinics, began working with Ayasdi, an AI vendor in Menlo Park, Calif., in 2014.

“The first thing we did was try to validate that [the Ayasdi solution] would work on our data,” says Lonny Northrup, senior medical informaticist at Intermountain. To do this, the provider fed data on colon surgery into the tool. Colon surgery was selected because the health system had an established clinical care pathway for the procedure.

“In a matter of two or three days, it cranked through the data,” Northrup says, adding that the tool replicated “a substantial portion of what we have done over eight years in the insights it was able to derive from the data.”

Since then, Intermountain has used Ayasdi’s tool to refine other care pathways. For example, Intermountain plans to roll out a revised care pathway this year for treating newborns with high fevers. Northrup predicts that the changes, which he declined to discuss in detail, will reduce the average length of stay and impact thousands of babies throughout the health system.

Intermountain also plans to use the tool to track how well physicians are adhering to about 70 care pathways the healthcare organization has developed. “It has the ability to do that with more granularity than we can get with our other solutions,” Northrup says. “If we are not getting the adherence we want, we will have the data to show the underperforming physicians how the better-performing physicians are getting better results by following the care model.”

Intermountain has been working with other machine learning vendors as well. For example, Intermountain in 2016 became a lead investor in Zebra Medical Vision, a machine-learning imaging analytics company. In 2017, Intermountain, which has a library of more than 3 billion medical images, announced plans to deploy Zebra’s technology to help Intermountain’s radiologists diagnose diseases.

Intermountain also is evaluating a tool from Jvion, Johns Creek, Ga., to create personalized health risk profiles for individual patients and recommendations about how to lower their risk for deteriorating health. “Our initial validation of their platform is around avoidable admissions, and the findings we are generating are extremely encouraging,” Northrup says.

Assisting ER cases

Like Intermountain and the Cleveland Clinic, MedStar Health, which operates 10 hospitals in Maryland and the Washington metropolitan area, also is evaluating the applicability of AI to solve clinical problems.

MedStar’s Institute for Innovation worked with Booz Allen Hamilton to develop a tool for emergency department clinicians. The tool, called Dictation Lens, uses natural language processing to sort through unstructured electronic patient data, such as clinicians’ notes, and pull out the notes that are relevant to a patient’s current medical complaint.

“On average, MedStar patients have 50 to 60 notes in their history,” which is too many for an ED physician to sort through manually, says Ernest Sohn, a chief data scientist at Booz Allen.
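MedStar and Booz Allen have not described Dictation Lens’s internals, so the sketch below is only a hypothetical stand-in for the idea: score each historical note against the current chief complaint and surface the most relevant ones first. It uses TF-IDF cosine similarity, a common baseline, with made-up note text.

```python
# Hypothetical stand-in for ranking a patient's historical notes by relevance to the
# current complaint. Dictation Lens's actual method is not public; TF-IDF cosine
# similarity is used here purely to illustrate the idea, with invented note text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

chief_complaint = "chest pain and shortness of breath"
notes = [
    "Follow-up for hypertension, blood pressure well controlled.",
    "ED visit for chest pain, troponin negative, stress test ordered.",
    "Dermatology consult for chronic eczema of the forearms.",
    "Cardiology note: exertional dyspnea, echo shows mild LV dysfunction.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([chief_complaint] + notes)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

# Present the most relevant notes first, as the tool aims to do for ED clinicians.
for score, note in sorted(zip(scores, notes), reverse=True):
    print(f"{score:.2f}  {note}")
```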

A handful of ED physicians at MedStar tested the tool last year. Based on feedback from those physicians, the MedStar/Booz Allen team plans to refine the tool this year and then retest it.

The prototype took between 10 and 20 seconds to present pertinent notes to ED clinicians, which is too slow, says Kevin Maloy, MD, an emergency department physician and informaticist with MedStar’s Institute for Innovation. To solve the problem, they plan to change the backend data processing so it begins culling through clinicians’ notes when a patient registers in the ED, ensuring that the information will be available to clinicians when they open a patient’s record, Maloy says.
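In outline, the planned change is precomputation: start the slow ranking step when the registration event arrives, cache the result, and serve it instantly when the chart is opened. The Python sketch below is an assumption-laden illustration of that flow; the event handlers, the cache and the rank_notes placeholder are hypothetical, not MedStar’s actual architecture.

```python
# Hypothetical outline of precomputing note rankings at ED registration.
# The event handlers, cache and rank_notes() placeholder are assumptions,
# not MedStar's actual architecture.
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=4)
ranked_notes_cache = {}  # patient_id -> ranked notes, filled before the chart is opened

def rank_notes(patient_id):
    """Placeholder for the slow NLP ranking step (10 to 20 seconds in the prototype)."""
    return [f"ranked notes for patient {patient_id}"]

def on_ed_registration(patient_id):
    """Kick off ranking in the background as soon as the patient registers."""
    future = executor.submit(rank_notes, patient_id)
    future.add_done_callback(
        lambda f, pid=patient_id: ranked_notes_cache.update({pid: f.result()})
    )

def on_chart_open(patient_id):
    """By the time the clinician opens the record, the ranking is usually ready."""
    return ranked_notes_cache.get(patient_id, "ranking still in progress")
```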

Citing Dictation Lens as an example, Sohn, Maloy and other authors of a 2017 blog post in Health Affairs wrote about machine learning’s potential to perform mundane and time-intensive tasks for physicians. “By draining time, energy and attention, such tasks can lead to clinician burnout and hinder clinicians’ ability to practice at the top of their expertise when providing care,” they wrote.

Overcoming challenges

However, there are significant barriers to widespread adoption of machine learning and other AI technologies in healthcare to perform mundane tasks, organize workflow, diagnose disease, predict outcomes, or prescribe treatments or behavior changes. This is particularly true for smaller organizations because they have fewer financial, technical and intellectual resources than large health systems or academic medical centers.

When it comes to financial considerations, AI adoption competes with other pressing issues in health information technology, according to a survey of health system executives conducted by the Center for Connected Medicine at the University of Pittsburgh Medical Center and The Health Management Academy, Alexandria, Va.

Of the 20 respondents to the survey, “Top of Mind for Top U.S. Health Systems 2018,” 63 percent said investing in AI solutions would be a low priority in 2018, compared with spending in other areas, such as cybersecurity or virtual care. Those health systems plan to spend an average of 2.6 percent of their IT budget on AI in 2018, and 13 percent plan to spend no money on AI in 2018.

Where health systems have implemented AI solutions in previous years, it was typically in operational areas, such as revenue cycle management, the survey found.

In addition to budgetary constraints, there are technical hurdles to overcome. Chief among these is access to large, vetted data sets, so that machine learning algorithms can be “trained” to recognize the correct answer to a given problem, such as which images show cancerous tumors. Researchers also need access to a second data set to validate an algorithm’s performance, says Paul Chang, MD, professor and vice chairman of radiology informatics at the University of Chicago School of Medicine.

Another issue is the underlying IT infrastructure. “Our IT systems are immature in healthcare,” Chang says. “We can’t get vetted data.”

Pertinent data is stored in disparate systems, such as numerous inpatient and outpatient EHR systems; ancillary systems for radiology, pharmacy or other departments; billing systems; and patient-generated data from social media sites, monitors or wearable devices. Because of variation in databases and data types, it’s difficult to bring that information together in a form that lets AI solutions draw conclusions.

Even within a single system, such as an EHR, data on clinical outcomes often is difficult to find because it is not captured in a standardized way. In their blog post, Sohn and Maloy wrote that pain scores were captured “incompletely and inconsistently” in MedStar’s EHR, which made it difficult for them to build a model to predict patients’ pain scores.

After an algorithm is built and deployed into workflows, sophisticated data governance also is needed to maintain both data sets and algorithms over time. For example, the Cleveland Clinic’s risk predictor is an automated process that runs data through numerous mathematical models each time the process kicks off, and then automatically generates results from the model that gives the most accurate predictions that day.

IT staff members at the Cleveland Clinic built the automated process to prevent model degradation over time. If one of the mathematical models consistently falls below acceptable levels of performance, “the goal would be to reevaluate that specific model on its own; tweak it; fine tune it as needed; and enter it back into the process,” says Michael Lewis, senior director of healthcare analytics.
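The Clinic’s exact pipeline isn’t public, but the pattern Lewis describes (run several candidate models on each refresh, publish results from the most accurate one, and flag any model that drops below an acceptable level so it can be retuned) can be sketched roughly as below; the candidate models, metric and threshold are assumptions.

```python
# Hypothetical sketch of a run-several-models, keep-the-best refresh cycle, with a
# simple check that flags consistently underperforming models for retuning.
# The candidate models, data layout and threshold are invented for illustration.
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

ACCEPTABLE_AUC = 0.70  # assumed performance floor, not the Clinic's actual threshold
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=5),
    "gradient_boosting": GradientBoostingClassifier(),
}

def monthly_refresh(train, holdout, features, target):
    """Fit every candidate, score each on the holdout, and pick that day's best model."""
    scores = {}
    for name, model in candidates.items():
        model.fit(train[features], train[target])
        scores[name] = roc_auc_score(
            holdout[target], model.predict_proba(holdout[features])[:, 1]
        )
    best = max(scores, key=scores.get)                       # publish this model's results
    flagged = [n for n, auc in scores.items() if auc < ACCEPTABLE_AUC]  # retune these
    return best, scores, flagged
```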

Workflow constraints

Even after solving the myriad data extraction, model validation, data governance and other technical issues, healthcare organizations may need to develop new workflows to respond to the knowledge generated by these advanced analytical tools.

That is the case at Memorial Sloan Kettering Cancer Center, where data scientists have developed a model to predict which chemotherapy patients are at risk of showing up at the health system’s urgent care center and possibly being admitted to an inpatient unit.

Now, the healthcare system is mapping out new processes, including the use of telemedicine and ongoing patient engagement, to mitigate patients’ risk of going to the urgent care center. “There is a heavy lift. It is an ambitious use case,” says Stuart Gardos, chief data officer at Memorial Sloan Kettering.

The Cleveland Clinic’s Donovan urges CIOs to help build an organizational culture in which people are willing to incorporate new insights into their daily work and decision-making processes. “AI and machine learning are big buzzwords and people are saying, ‘We really need to use this,’” he says. “We need to not only produce this stuff, but we need to be able to use it, to make decisions with it.”

Working with artificial intelligence at the Cleveland Clinic, from left: Christopher Donovan, executive director of enterprise information management and analytics; Joe Dorocak, senior financial analyst; Michael Lewis, senior director of healthcare analytics.

PHOTO BY ANGELO MERENDINO
