Artificial intelligence takes on medical imaging

Modern Healthcare - NEWS - By Rachel Z. Arndt

Radiologists look at a new image every three to four seconds during an eight-hour workday. That’s hardly enough time to find the patterns, abnormalities and other markers essential in making a diagnosis. Hospitals are hoping to lessen that load by outsourcing some of that work—not to people across the ocean, but rather to machines.

These computers, running artificial intelligence and machine-learning algorithms, are trained to find patterns in images and identify specific anatomical markers. But they also go deeper and spot details the human eye can’t catch. Early versions of these algorithms, currently in trials, are both accurate and fast.

Though hospitals are welcoming robotic overlords, radiologists need not worry about their jobs—at least not yet. After all, people are still necessary to read the information the machines produce and make sense of the data.

What’s more, it’s still the early days for artificial intelligence in imaging, and though the technology is promising—potentially lowering costs, improving quality and making providers more efficient and effective—there are significant hurdles to overcome.

“We’ll see our jobs changing slowly,” said Dr. Keith Dreyer, vice chairman of radiology at Massachusetts General Hospital, Boston. “If you look 10 or 25 years from now at what a radiologist is doing, it’ll probably be dramatically different.”

Indeed, just as the advent of Digital Imaging and Communications in Medicine—DICOM—drove transformation in the field decades ago, so could algorithms driven by big data once the kinks are worked out.

As radiologists do, artificial intelligence learns as it goes. In fact, learning is how it gets started in the first place. To “train” an algorithm to recognize, for instance, a stroke, developers feed it imaging studies of a brain suffering from an attack, teaching the machine the nuances that make pattern recognition possible. Then, as the algorithm goes into action in the real world, acting on what it has already been trained to do, it can gain new information from new images, learning even more in a perpetual feedback loop.
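To make that training step concrete, here is a minimal, illustrative sketch in Python using PyTorch. The random arrays stand in for labeled brain studies, the tiny network stands in for a production model, and none of it reflects any particular vendor's system.

import torch
from torch import nn

# Placeholder dataset: 64 single-channel 128x128 "images" with binary labels
# (stroke / no stroke). Real systems train on curated, annotated exams.
images = torch.randn(64, 1, 128, 128)
labels = torch.randint(0, 2, (64,))

# A deliberately small convolutional network; production models are far larger.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 2),  # two outputs: stroke vs. no stroke
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training is repeated exposure to labeled examples: predict, compare with the
# ground truth, and adjust the weights to shrink the error. The "feedback loop"
# described above amounts to periodically adding newly confirmed cases to this
# labeled pool and repeating the process.
for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.3f}")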

For instance, Arterys’ cardiac MRI software automates the most tedious steps in cardiac analysis, drawing on what it has learned from thousands of examples and applying deep-learning algorithms. This kind of automation “frees up a lot of physician time and brings a huge amount of consistency to imaging and tracking changes over time in a patient,” said Carla Leibowitz, Arterys’ head of strategy and marketing. The browser-based software is in use at 40 sites around the world, including the University of California at San Diego and Fairfax (Va.) Radiological Consultants.
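A rough idea of what that kind of slice-by-slice automation looks like, kept deliberately generic: the threshold function below is only a stand-in for a trained deep-learning segmenter, and the pixel spacing and slice thickness are assumed values, not Arterys' actual parameters.

import numpy as np

def segment_ventricle(slice_2d: np.ndarray) -> np.ndarray:
    """Placeholder segmenter: returns a boolean mask of 'ventricle' pixels.
    A real system would run a trained neural network here."""
    return slice_2d > slice_2d.mean() + slice_2d.std()

# Fake study: 12 short-axis slices of 256x256 pixels.
study = np.random.rand(12, 256, 256)
pixel_area_mm2 = 1.4 * 1.4     # assumed in-plane resolution
slice_thickness_mm = 8.0       # assumed slice thickness

# Sum the segmented area on every slice to estimate a ventricular volume,
# the kind of tedious measurement that is usually traced by hand.
volume_mm3 = sum(
    segment_ventricle(s).sum() * pixel_area_mm2 * slice_thickness_mm
    for s in study
)
print(f"estimated ventricular volume: {volume_mm3 / 1000:.1f} mL")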

Like Arterys, Zebra Medical Vision relies on a vast supply of medical case data to train its algorithms so radiologists can find what they’re looking for—and what they don’t yet know they’re looking for—more accurately, more quickly and more consistently. “That’s a win for everyone,” said Elad Benjamin, co-founder and CEO of Zebra Medical. “Radiologists are able to deliver better care at lower costs, and patients get the benefit of improved diagnoses.”

Zebra Medical’s algorithms draw on one of the largest databases of anonymized medical imaging data—millions of patient records and their associated radiology reports. Each of Zebra Medical’s algorithms is dedicated to a particular finding, such as emphysema in the lungs. The company has partnered with Intermountain Healthcare, which will use these algorithms for population health. The Salt Lake City-based health system has conducted a preliminary validation of the algorithm and is currently running more assessments. Once the technology is further developed, Intermountain hopes to use it to prevent excess hospitalizations by giving special treatment to those patients most at risk of a health problem.
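One way to picture that population-health idea is a sketch like the following, in which finding-specific scorers run over a patient panel and anyone above a threshold is flagged for follow-up. The models, scores and threshold here are all invented for illustration.

# Hypothetical per-finding scorers; a real deployment would run trained
# algorithms against each patient's imaging rather than simple rules.
FINDING_MODELS = {
    "emphysema": lambda p: 0.8 if "smoker" in p["history"] else 0.2,
    "low_bone_density": lambda p: 0.7 if p["age"] > 70 else 0.1,
}

patients = [
    {"id": "A", "age": 74, "history": "former smoker"},
    {"id": "B", "age": 52, "history": "no significant history"},
]

RISK_THRESHOLD = 0.6  # invented cutoff for flagging a patient
for patient in patients:
    for finding, score_fn in FINDING_MODELS.items():
        score = score_fn(patient)
        if score >= RISK_THRESHOLD:
            # Flag for early attention before the condition leads to a hospitalization.
            print(f"patient {patient['id']}: flag {finding} (score {score:.2f})")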

Using AI for clinical decision-making depends in part on how the information is presented. “AI provides information in discrete answers to questions,” said Dr. Keith White, medical director of imaging services at Intermountain. “It’s interesting that AI and this kind of output corresponds to a change that radiologist leaders are already trying to work toward—which is to transform radiology away from being a narrative, prose-based dictation system into being discrete data and answers.”
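What the shift from prose to discrete answers might look like can be sketched in a few lines; the field names below are invented for illustration and follow no particular reporting standard.

from dataclasses import dataclass, asdict

# The traditional output: a free-text impression a human has to read and parse.
narrative_report = (
    "Impression: Moderate centrilobular emphysema in both upper lobes. "
    "No acute abnormality."
)

# The same content expressed as discrete, machine-readable fields.
@dataclass
class DiscreteFinding:
    finding: str       # what the algorithm looked for
    present: bool      # a yes/no answer rather than prose
    severity: str      # coded severity
    confidence: float  # model confidence, 0 to 1

finding = DiscreteFinding(
    finding="emphysema", present=True, severity="moderate", confidence=0.91
)
print(asdict(finding))  # fields a downstream system can query directly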

Getting AI into hospitals isn’t just a matter of having mature, capable technology. There are regulatory roadblocks, clinicians must be trained in how to use it, and it has to be integrated into the workflow.

Getting AI through the Food and Drug Administration’s regulatory process is the first order of business. According to some, the FDA hasn’t caught up with how AI works. The agency’s draft guidance on software changes, released in 2016, calls for re-approval of some medical devices—including those running algorithms—every time they change significantly. That’s particularly burdensome for AI, since changing quickly is at the very heart of what learning algorithms are supposed to do.

FDA regulation “is quite burdensome today,” Leibowitz said, “and sometimes it’s confusing and adds huge chunks of time to our development timeline.” But she and others recognize that regulation is necessary if anyone’s going to trust these algorithms in the first place.

Getting an algorithm certified is just the first step. It then has to be integrated into existing systems. Because AI usually produces discrete data elements, as Intermountain’s White said, it’s theoretically possible to bring those data elements smoothly into workflows. Ideally, AI will run on a case automatically, producing discrete assessments that the radiologist can validate and add to, which are then pulled into the electronic health record, where downstream providers can act accordingly.
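A hypothetical sketch of that flow, with invented function names and payloads standing in for the AI service, the radiologist's validation step and the EHR interface:

def run_algorithm(study_id: str) -> dict:
    """Stand-in for an AI service that produces discrete assessments for a study."""
    return {"study_id": study_id, "finding": "emphysema", "present": True}

def radiologist_review(assessment: dict) -> dict:
    """Stand-in for the validation step; the radiologist confirms or amends."""
    return {**assessment, "validated": True, "validated_by": "radiologist"}

def post_to_ehr(result: dict) -> None:
    """Stand-in for writing structured results into the electronic health record."""
    print(f"EHR <- {result}")

# Run automatically on each new case, validate, then push downstream.
for new_study in ["CT-1001", "CT-1002"]:
    post_to_ehr(radiologist_review(run_algorithm(new_study)))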

“If we put more structured information into the EHR, it can follow the patient more consistently, as opposed to how we do it today, where we create a report,” Dreyer said.

But theory is often neater than practice, and some worry about how the output from AI will actually fit into the workflow, not to mention the EHRs themselves. “We need to ensure that there’s interoperability,” said Dr. Bibb Allen, chief medical officer of the American College of Radiology’s Data Science Institute. Much as the industry created the DICOM standard to ensure that medical images were interoperable, it’ll need to create standardized use cases with common data elements for AI.

One potential problem is how the algorithms are initially trained. Sometimes, the data they’re fed in the learning process come from just one specific model of imaging machine. Because different models have different radiation doses and slightly different technologies, “you’ve got an inherent bias that’s built in,” said Steve Tolle, vice president and chief strategist of IBM’s Watson Health Imaging. To help avoid that bias, IBM is using a collaborative approach, working with 20 health systems to use images from many different sources to develop its Watson cognitive platforms, which one day, Tolle said, will be able to perform image analytics.

Even when the technology is strong, doctors may still be reluctant to use it. “A real challenge is physician acceptance,” Tolle said. “We believe you must have transparency so a doctor knows how a machine is driving toward a conclusion or recommendation. Doctors need to understand the science.” Once they do, they’ll be more likely to accept the technology as a tool they can use with confidence, and not fear it as something that may replace them. “We don’t think this is going to replace the physician at all,” said Arterys’ Leibowitz. “The physician does a lot more than look for patterns and connect the dots.”

Allen sees the technology as a way to allow radiologists to do more. “It’s an opportunity for radiologists to expand their role,” he said. “There could be a way to push patients with emergent disease to the forefront of our work list. Radiologists have an opportunity to become managers of information that might go beyond just what we see in the images.”

To facilitate that, the American College of Radiology established the Data Science Institute to work on the snags that could halt AI in its tracks: verification of algorithms, integration into workflows, FDA regulations and use cases. It’ll be a while before all those areas are figured out, Allen admitted. But the potential—for population health, for precision medicine, for quality in general—points to the even broader potential to use AI not just for imaging but across the industry, making clinicians more effective and efficient, thereby lowering costs and improving quality for patients.

"It's an op­por­tu­nity for ra­di­ol­o­gists to ex­pand their role. There could be a way to push pa­tients with emer­gent dis­ease to the fore front of our work list. Ra­di­ol­o­gists have an op­por­tu­nity to be­come man­agers of in­for­ma­tion that might go be­yond just what we see in the im­ages." Carla Lei­bowitz Head of strat­egy and mar­ket­ing Arterys

Image caption: Usually, images of the heart’s ventricle surfaces are generated manually. Arterys can generate them automatically with a deep-learning algorithm that identifies the contours of the ventricle surfaces on each slice of the study.
