Google predicts chance of death with accuracy

AI system could boost patient care

Western Times - TRENDING - SEAN KEACH

GOOGLE knows everything (or at least it feels that way), and now it can even tell you when you'll die. The tech giant helped test an artificial intelligence computer system that can predict, 24 hours after admission, whether hospital patients will die. What's more staggering, trials put the accuracy of the AI's predictions as high as 95 per cent.

It works by chewing up data about patients – their age, ethnicity and gender. This information is then joined up with hospital information – prior diagnoses, current vital signs and any lab results. What makes the system particularly accurate is that it's fed data typically out of reach for machines, such as doctors' notes buried away on charts or in PDFs. Artificial intelligence systems become smarter over time through a process known as machine learning.

The AI was developed by researchers from Stanford, the University of Chicago and the University of California San Francisco. Google then took the AI system and "taught" it using de-identified data from 216,221 adults at two US medical centres. This meant the AI had more than 46 billion data points to vacuum up. Over time, the AI was able to associate certain words with an outcome (life or death), and understand how likely (or unlikely) someone was to die.

What's particularly exciting about Google's system is that researchers can throw almost any type of data at it. Stanford professor Nigam Shah told Bloomberg that about 80 per cent of development time spent on predictive models went toward making the data presentable for the AI. But Google's system can chew up anything and make predictions based on it through its powerful machine learning abilities. The system can also estimate the length of a patient's hospital stay and the chance of being readmitted.

So how accurate is the AI? When we talk about probability, a 1.00 score is perfectly accurate.
And a 0.50 score is a 50/50 chance – an AI that scores 0.50 is no better than a human making random guesses. Here's how Google's AI fared on various outcomes:

- Predicting whether a patient would stay long in a hospital – 0.86 (Google) v 0.76 (traditional methods)
- Predicting in-patient mortality – 0.95 (Google) v 0.86 (traditional methods)
- Predicting unexpected readmissions after a patient was discharged – 0.77 (Google) v 0.70 (traditional methods)

"These models outperformed traditional, clinically-used predictive models in all cases," Google's Alvin Rajkomar said. Professor Rajkomar said hospitals adopting the AI could use it to "improve care" for patients.
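Scores like these, where 1.00 is perfect and 0.50 is a coin flip, are typically AUC values: the probability that the model ranks a randomly chosen patient who had the outcome above a randomly chosen patient who didn't. A minimal sketch of how that number is computed, using made-up toy data (not the study's), illustrates why 0.50 means "no better than guessing":

```python
def auc(labels, scores):
    """AUC via pairwise comparison: the fraction of (positive, negative)
    patient pairs where the positive case gets the higher risk score
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores for six patients (1 = outcome occurred, 0 = not).
labels = [1, 1, 1, 0, 0, 0]
good_model = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]    # ranks every positive case higher
random_model = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]  # carries no information at all

print(auc(labels, good_model))    # 1.0 -> perfect ranking
print(auc(labels, random_model))  # 0.5 -> coin-flip baseline
```

On this scale, Google's reported 0.95 for in-patient mortality means the model ranks a patient who died above one who survived about 95 per cent of the time.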

Photo: iStock. HAVE YOU GOOGLED IT? An artificial intelligence system, tested by Google, can predict patients' health outcomes.
