AI Hits Real-World Radiology

Algorithms are expected to help radiologists do their jobs, but IT execs must figure out how best to deploy them.

Health Data Management | By Linda Wilson

As providers begin to see the potential of artificial intelligence and machine learning, IT must find cost-effective ways to put the technology to work.

Algorithms based on machine learning and deep learning, intended for use in diagnostic imaging, are moving into the commercial pipeline.

However, providers will have to overcome multiple challenges to incorporate these tools into daily clinical workflows in radiology.

There are now numerous algorithms in various stages of development and in the FDA approval process, and experts believe there could eventually be hundreds or even thousands of AI-based apps to improve the quality and efficiency of radiology.

The emerging applications based on machine learning and deep learning primarily involve algorithms that automate radiology processes such as detecting abnormal structures, including cancerous lesions and nodules, in images. The technology can be used on a variety of modalities, such as CT scans and X-rays. The goal is to help radiologists more effectively detect and track the progression of diseases, giving them tools to enhance speed and accuracy, thus improving quality and reducing costs.

While the number of organizations incorporating these products into daily workflows is small today, experts expect many providers to adopt these solutions as the industry overcomes implementation challenges.

For example, UK-based Signify Research predicts that the worldwide market for machine learning software in medical imaging, used to “automate detection, quantification, decision support and diagnosis” in radiology, will exceed $2 billion by 2023, according to a report the research firm released in August.

Signify says the growth of these types of tools will be fueled by the prevalence of cloud-based computing and storage solutions as well as the introduction of deep learning to analyze digital images.

In addition to technical factors, radiologists’ acceptance of AI also will fuel growth, according to Signify. “The interest and enthusiasm for AI in the radiologist community has notably increased over the past 12 to 18 months, and the discussion has moved on from AI as a threat to how AI will augment radiologists,” says Simon Harris, a principal at Signify.

Data dump

Radiologists’ growing appreciation for AI may result from the technology’s promise to help the profession cope with an explosion in the amount of data for each patient case.

“As our systems improve, we start to acquire more and more images,” says Matt Dewey, CIO of Wake Radiology, which operates outpatient imaging centers in the Raleigh-Durham, N.C., area. One example is mammography, which is evolving from 2D to 3D imaging, he says. “We go from a study that used to be 64 megabytes for a normal, standard study to about 2 gigs, so it just takes the radiologist much more time to go through. If we can find a way that a computer looks through it, it should make a difference. It can highlight things for the radiologist.”

Radiologists also are grappling with the growth in data from sources outside radiology, such as lab tests and electronic medical records. This is another area where AI could help radiologists, by analyzing data from disparate sources and pulling out key pieces of information for each case, Dewey says.

There are other issues AI could address as well, such as “observer fatigue,” which is an “aspect of radiology practice and a particular issue in screening examinations where the likelihood of finding a true positive is low,” wrote researchers from Massachusetts General Hospital and Harvard Medical School in a 2018 article in the Journal of the American College of Radiology. These researchers foresee the utility of an AI program that could identify cases from routine screening exams with a likely positive result and prioritize those cases for radiologists’ attention.
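Conceptually, such a triage layer is simple: score each incoming screening study with a model and read the highest-suspicion cases first. Below is a minimal sketch of that idea in Python; the Study fields and score_study() function are illustrative stand-ins, not any vendor’s actual interface.

```python
# A minimal sketch of screening-worklist triage; Study and score_study()
# are illustrative stand-ins, not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class Study:
    accession: str
    modality: str
    suspicion: float  # model-estimated probability of a positive finding

def score_study(study: Study) -> float:
    # Placeholder for a trained classifier; a real system would run
    # inference on the pixel data here.
    return study.suspicion

def triage(worklist: list[Study]) -> list[Study]:
    # Surface likely positives first so they are not buried at the end
    # of a long, mostly normal screening queue.
    return sorted(worklist, key=score_study, reverse=True)

worklist = [
    Study("A001", "MG", suspicion=0.03),
    Study("A002", "MG", suspicion=0.81),
    Study("A003", "MG", suspicion=0.12),
]
for s in triage(worklist):
    print(s.accession, f"{s.suspicion:.2f}")
```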

AI software also could help radiologists improve worklists of cases in which referring physicians already suspect that a medical problem exists.

Radiologists learn of the potential seriousness of a given imaging study when a referring clinician labels it as STAT, explains Luciano Prevedello, MD, division chief of medical imaging informatics at Ohio State University Wexner Medical Center. Prevedello says this is not an ideal system for prioritizing workflow in radiology for two primary reasons: sometimes images show critical findings the ordering physicians did not anticipate, and even within the studies labeled STAT, which account for about 40 percent of all studies, there are varying degrees of urgency.

Despite the promise of machine learning and deep learning in radiology, moving such algorithms from the realm of scientific discovery to daily clinical workflows involves overcoming practical and financial challenges.

Developing in-house solutions

Convinced of the potential of AI solutions for radiology, Ohio State is currently focusing on developing algorithms through in-house research.

Prevedello says the eight-member radiology-informatics team at Ohio State has developed an algorithm that prioritizes computed tomography images of the head based on whether there are critical findings. Researchers have configured the algorithm to return results in six seconds: it processes CT images on a separate server and then sends a message to the worklist software within the PACS indicating whether an imaging study contains a critical finding.
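The article does not describe Ohio State’s interface, but the general pattern, heavy inference on a separate server with only a lightweight flag sent onward to the worklist, might look something like this sketch. The endpoint URL and payload shape here are assumptions for illustration, not the actual system.

```python
# A hedged sketch of the "separate inference server, lightweight flag to
# the worklist" pattern; the endpoint and payload are assumptions, not
# Ohio State's actual interface.
import json
import urllib.request

def flag_critical_finding(accession: str, critical: bool,
                          worklist_url: str = "http://worklist.local/api/flags") -> int:
    # The heavy CT inference happens elsewhere; the PACS worklist only
    # needs this small message to re-rank the study.
    payload = json.dumps({"accession": accession, "critical": critical}).encode()
    req = urllib.request.Request(worklist_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=6) as resp:
        return resp.status

# Example (would require a reachable worklist endpoint):
# flag_critical_finding("CT12345", critical=True)
```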

The next step is to set up a clinical trial. “This is an important step to see if what we developed in the lab can be expanded to a clinical setting,” Prevedello says. “If it is successful there, we will implement it in the clinical setting.” Ohio State has 65 attending physicians in its radiology department.

To build the tool, researchers trained an algorithm using a set of 2,583 head images and validated the tool with a second data set of 100 head images.
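As a toy illustration of that kind of held-out validation split, the snippet below uses the article’s counts but stand-in data; a real study would also report the algorithm’s sensitivity and specificity on the 100 held-out exams.

```python
# A toy illustration of the held-out split described above; the counts
# match the article, but the cases and model are stand-ins.
import random

cases = [f"head_ct_{i}" for i in range(2683)]  # 2,583 train + 100 validation
random.seed(0)
random.shuffle(cases)
train, validation = cases[:2583], cases[2583:]
assert len(train) == 2583 and len(validation) == 100
# A real workflow would fit the model on `train` only, then report
# performance on `validation` to estimate generalization.
print(len(train), "training studies;", len(validation), "held out")
```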

The team also is working to develop similar solutions for images of other body parts. “We hope to have a very comprehensive solution at some point,” he says.

Evaluating commercial options

Other providers are venturing into the commercial sector to evaluate potential solutions.

A challenge to overcome with this approach is the fragmented nature of the emerging marketplace for these algorithms. “I see these small startup AI companies that solve one or two problems. They do very narrow AI, which is a very reasonable approach,” says Don Dennison, president of Don K. Dennison Solutions, an imaging and informatics consulting firm. For example, they may focus on just one disease, body part or imaging modality.

“The problem is that now there are many, many, many of those, or tens of thousands of those combinations to be evaluated,” Dennison suggests.

That’s one of the problems executives at Wake Radiology, which also reads studies for UNC REX Healthcare’s inpatient and outpatient radiology departments, are trying to solve as they work to bring these applications into daily clinical use.

They do not want to go through the usual software purchase and implementation process every time they want to add a new algorithm. “Effectively, what I needed was something like an app store, where I can choose this algorithm or that algorithm,” Dewey says. “I don’t want to talk to every company that has an algorithm. I want to just choose this app and know the platform is going to work.”

This is one of the reasons Wake became a beta site for a platform from EnvoyAI, which is designed to give physicians access to multiple algorithms through a single solution.

Wake Radiology is live with one algorithm, which determines bone age in pediatric patients. Senior management chose to start with this algorithm because measuring bone age is “a pretty standard calculation,” without a lot of nuance, Dewey says.

To set up the process, Wake Radiology routes the pertinent imaging studies from its vendor-neutral archive to EnvoyAI’s on-premise platform, which anonymizes the data and routes it to the algorithm in the cloud. EnvoyAI then routes the results back to Wake’s PACS.
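The de-identification step in such a pipeline might look roughly like the sketch below, written with the open-source pydicom library. This is an illustration only, not EnvoyAI’s implementation, and production-grade DICOM de-identification follows a much longer list of elements (DICOM PS3.15 Annex E).

```python
# An illustrative de-identification step using the open-source pydicom
# library; not EnvoyAI's implementation. Production de-identification
# follows the much longer element list in DICOM PS3.15 Annex E.
import pydicom

def anonymize(path_in: str, path_out: str) -> None:
    ds = pydicom.dcmread(path_in)
    # Blank a few direct identifiers; a compliant profile handles dozens.
    for keyword in ("PatientName", "PatientID", "PatientBirthDate"):
        if keyword in ds:
            setattr(ds, keyword, "")
    ds.remove_private_tags()  # vendor-private elements may carry PHI
    ds.save_as(path_out)

# anonymize("study/slice001.dcm", "anon/slice001.dcm")
```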

Ohio State’s Prevedello believes this type of platform solution has potential if AI developers are willing to customize their apps for multiple platforms, as is the case today with Google Play and Apple’s App Store.

In addition to EnvoyAI, Nuance Communications has said it is working on such a platform, Nuance AI Marketplace for Diagnostic Imaging. Blackford Analysis also has announced a platform to integrate multiple algorithms and make them available in either a PACS or image viewer. The platform includes its own product, Smart Localizer, which automatically registers images, enabling radiologists to compare images taken at different times, on equipment from different vendors and in different modalities.

As Dennison explains, “The value proposition of any app store is the breadth and value of the apps it has and how easy it is to implement in production. The race is on to get the most, highest-value apps installed in production in the most places.”

To scale these solutions, vendors are announcing partnerships. For example, Nuance has announced partners for its platform, such as NVIDIA, which has a deep learning platform, and Partners HealthCare. Blackford has announced a partnership with Intelerad Medical Systems, a vendor of PACS, viewers and related products.

Creating open systems

Even if these platforms build sizable customer bases, it’s still important to develop standard protocols for communicating the types of information these algorithms are likely to take in and report out, experts say.

The need for communications standards will become an even bigger issue in the future as AI-based algorithms become increasingly robust, analyzing and processing data from many sources, such as images and data from EHRs or wearable devices. Even more challenging will be the process of managing outputs from AI algorithms located throughout providers’ enterprises, including many in different clinical specialties and administrative areas.

Without an open process, switching between platform vendors, replacing outdated algorithms or managing algorithms across multiple departments would be time consuming and expensive for providers, experts say.

Dennison says DICOM and HL7 API committees are already thinking about how to adapt existing standards to incorporate outputs from AI algorithms. He says Integrating the Healthcare Enterprise (IHE) International also plans to develop an integration profile for entities that create, store, manage and display output from AI algorithms.
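What a standards-based AI output could look like is still being worked out; as a hedged illustration, the snippet below shapes a result loosely like an HL7 FHIR Observation resource. The field values and model name are invented for the example, and the profile the committees ultimately define may differ.

```python
# A hedged illustration of an AI result shaped loosely like an HL7 FHIR
# Observation; field values and the model name are invented, and the
# profile the standards committees define may differ.
import json

ai_result = {
    "resourceType": "Observation",
    "status": "preliminary",  # not yet confirmed by a radiologist
    "code": {"text": "Critical finding suspected on head CT"},
    "valueBoolean": True,
    "device": {"display": "head-ct-triage-model v1.2"},    # illustrative
    "derivedFrom": [{"display": "ImagingStudy/CT12345"}],  # illustrative
}
print(json.dumps(ai_result, indent=2))
```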

Practical matters

In addition to IT architecture challenges, there are people issues, such as confronting radiologists’ skepticism about AI’s merits. Citing the example of mammography, Dewey notes that computer-aided detection does not appeal to all radiologists because they worry that the applications used for this purpose may misinterpret information in the images.

In the case of AI-based algorithms, however, radiologists have the option to feed information back into the algorithms when they disagree with the results, enabling the models to learn continually and become increasingly accurate.

Radiologists at Wake also want to accept or reject the output from an algorithm on a specific imaging study before the data flows into the PACS. As a result, Wake plans to revise the current workflow, which it is using for the bone age calculation, so results from other algorithms are not incorporated directly into the PACS but instead will go to a separate viewer, Northstar AI Explorer, from TeraRecon.
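A review gate of that kind can be reduced to a few lines: approved findings flow to the PACS, and rejections are logged as training feedback. The sketch below is illustrative; the names are not TeraRecon’s or Wake’s actual interfaces.

```python
# A minimal sketch of the accept/reject gate: approved results flow to
# the PACS, rejections are logged as feedback. All names are illustrative.
from dataclasses import dataclass

@dataclass
class AIResult:
    accession: str
    finding: str

pacs_inbox: list[AIResult] = []
feedback_log: list[tuple[AIResult, str]] = []

def review(result: AIResult, accepted: bool, note: str = "") -> None:
    if accepted:
        pacs_inbox.append(result)            # becomes part of the record
    else:
        feedback_log.append((result, note))  # training signal for the model

review(AIResult("XR123", "bone age 9y 4m"), accepted=True)
review(AIResult("MR456", "hippocampal volume below 5th percentile"),
       accepted=False, note="prior resection explains volume loss")
print(len(pacs_inbox), "to PACS;", len(feedback_log), "logged as feedback")
```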

For example, Dewey says, Wake Radiology plans to implement an algorithm, NeuroQuant, from CorTechs Labs, which measures the volumes of certain brain structures on MRI images to help detect dementia and neurodegenerative diseases. However, the algorithm does not currently consider data from outside sources, such as electronic medical records, that might be pertinent.

For instance, Dewey says, if a patient had surgery in which part of his or her brain was removed, that information must be considered before arriving at a diagnosis of dementia. That’s why it’s important for radiologists to review the outputs of the algorithm.

Wake Radiology also plans to use this approach to incorporate two other algorithms, one for CT lung images and a second for chest X-rays. It also intends to implement feedback functionality to enable radiologists to help train the algorithms.

A subsequent phase of the project would integrate the viewer directly inside the PACS and integrate the outputs from the algorithms with automated dictation software, so radiologists won’t have to verbally communicate findings from the algorithms for their reports.

Another practical issue is hiring IT staff with the skills to troubleshoot problems or optimize a platform or application, particularly as AI becomes more prevalent.

Searching for a model

But perhaps the biggest hurdle will involve developing a feasible economic model that rewards algorithm developers for their work and providers for using the technology.

Hoping to improve the cost-benefit calculation among providers mulling whether to purchase a commercial product, Zebra Medical Vision last year announced that it will charge $1 per scan for access to its current and future cloud-based algorithms. Zebra has developed a deep learning tool that detects liver, cardiovascular, lung and bone diseases in imaging studies.
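A back-of-the-envelope calculation shows why per-scan pricing makes the cost-benefit question concrete. The $1-per-scan figure is Zebra’s announced price; the volume and time-savings numbers below are purely illustrative assumptions.

```python
# Back-of-the-envelope economics. The $1-per-scan price is from the
# article; the volume and time-savings figures are illustrative
# assumptions, not reported numbers.
scans_per_year = 100_000
software_cost = scans_per_year * 1.00  # Zebra's announced price

minutes_saved_per_scan = 0.5           # assumption
radiologist_cost_per_minute = 5.00     # assumption
labor_value = scans_per_year * minutes_saved_per_scan * radiologist_cost_per_minute

print(f"software: ${software_cost:,.0f}; recovered reading time: ${labor_value:,.0f}")
```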

Ben Panter, CEO of Blackford Analysis, argues that the platform approach takes costs out of the system for both buyers and sellers. “Economical ways of deploying AI in healthcare are essential,” he says. “We need to develop more efficient ways than individual companies going to individual hospitals trying to make a sale. That drives the costs up so high that we are never going to solve any of the problems in healthcare.”

However, these approaches do not directly address the underlying tension of applying these AI models to the current process of reading and reporting on imaging studies in radiology.

Dennison notes that these algorithms do not replace radiologists but help them, a role he describes as a “virtual resident.” In this scenario, an outpatient imaging center or hospital has taken on a new layer of costs, but it hasn’t reduced the amount it pays the radiologist or increased the amount it gets reimbursed by a payer.

This won’t work financially unless the new software aids radiologists’ efficiency enough that they can review more images and produce more reports.

That is why providers are not likely to fully realize the value of their investments in AI-powered algorithms until they use those tools to replace, rather than assist, humans in some steps of the workflow, says Robert Fuller, managing partner for healthcare at Clarity Insights, a consulting firm specializing in data analytics.

Take the example of chest X-rays that physicians order to screen patients for lung cancer, he says. After an algorithm determines that a given set of X-rays shows a negative finding, those results could move directly to the reporting phase of the process without review by a radiologist, while X-rays with abnormal findings would be forwarded to radiologists for review.
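In code, that routing rule is a simple threshold gate, as sketched below; the cutoff value and probability input are illustrative assumptions, and a deployed threshold would be set from validated performance data.

```python
# A sketch of the threshold gate for autonomous negatives; the cutoff and
# probability input are illustrative, and a deployed value would come
# from validated sensitivity data.
NEGATIVE_THRESHOLD = 0.02  # max tolerated probability of abnormality

def route(study_id: str, p_abnormal: float) -> str:
    if p_abnormal < NEGATIVE_THRESHOLD:
        return f"{study_id}: auto-report negative"
    return f"{study_id}: queue for radiologist review"

print(route("CXR-1", 0.005))
print(route("CXR-2", 0.34))
```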

“The only way you are going to drive out cost is by believing in the solution you put in place—proving it out; feeling comfortable with the accuracy—and reducing the workload,” he says. “It is more about getting people’s buy-in; this has to be a process where people accept the accuracy of the solution. That won’t happen overnight.”
