Google takes temperature of the healthcare market

Carrington Malin is an entrepreneur, marketer and writer who focuses on emerging technologies

There can be no doubt now that artificial intelligence does help save lives. AI technologies are increasingly being used for robotic surgery, medical image analysis, studying large volumes of medical data and even patient diagnosis.

Of course, the success of any AI system is heavily dependent on the data available, and developers often need access to patient information in order to devise effective medical systems. The more ambitious the goals, the more data is required.

It should come as no surprise that Google, one of the largest AI developers in the world, this week announced a partnership agreement with Ascension, the second largest healthcare system in the US. The deal gives Google access to the health records of millions of Americans across 21 states.

What has proved a surprise to the media, the American public and other stakeholders is that the partnership (code-named “Project Nightingale”) began last year in secret, without communication with doctors or patients, The Wall Street Journal reported.

Although the English adage “trust me, I’m a doctor” perhaps does not carry the weight it once did, patient privacy is something that the medical profession and governments around the world take very seriously. Meanwhile, privacy advocates have concerns about moves to share patient data more widely and about the impact on personal privacy.

In this case, Ascension has confirmed the project is in compliance with the US Health Insurance Portability and Accountability Act of 1996, and Google wrote in a blog post on Monday that patient data “cannot and will not be combined with any Google consumer data”.

Frankly, it’s unlikely that the Project Nightingale team will be interested in, for instance, Mrs Smith’s 2008 kidney stone operation in particular. What will interest them is the large volume of patient data that can be prepared for AI systems to analyse at scale, thereby identifying trends, spotting similarities in data related to physical conditions and shedding light on medical anomalies.
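To make that concrete, here is a minimal, hypothetical sketch of the kind of aggregate analysis described above, where individual records matter far less than population-level patterns. The field names and figures are illustrative assumptions, not Ascension’s actual schema.

```python
# A hypothetical sketch of population-level analysis of de-identified
# patient records. All field names and values are invented for illustration.
import pandas as pd

# De-identified records: no names or identifiers, only coarse attributes.
records = pd.DataFrame({
    "age_band":   ["40-49", "40-49", "50-59", "50-59", "60-69", "60-69"],
    "condition":  ["kidney stones", "hypertension", "kidney stones",
                   "hypertension", "kidney stones", "hypertension"],
    "readmitted": [0, 1, 1, 1, 0, 1],
})

# The trend, not any one patient, is the point: readmission rate
# broken down by age band and condition.
trend = (records
         .groupby(["age_band", "condition"])["readmitted"]
         .mean()
         .rename("readmission_rate"))
print(trend)
```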

For example, everyone wants a cure for cancer, and its diagnosis is one of the most active areas of AI-assisted research. By preparing volumes of medical image data from CT or MRI scans and creating algorithms to process that data, AI systems can often learn to spot the signs of cancer far earlier than human technicians, allowing for earlier patient diagnosis and treatment. Broadly speaking, the greater the volume of cancer cases that can be analysed by AI, the better the system will work, and so more lives can potentially be saved.
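As a rough illustration of how such a system is trained, the sketch below builds a toy convolutional network that classifies scan slices as benign or suspicious. It is an assumption-laden simplification: real diagnostic models are far larger, work on full 3D volumes and go through clinical validation, none of which is shown here.

```python
# A toy convolutional classifier for 2D scan slices; a sketch of the
# technique, not a diagnostic tool. Input sizes and labels are assumptions.
import torch
import torch.nn as nn

class ScanClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # greyscale slice in
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)     # benign vs suspicious

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ScanClassifier()
batch = torch.randn(8, 1, 64, 64)       # 8 fake 64x64 scan slices
labels = torch.randint(0, 2, (8,))      # fake ground-truth labels
loss = nn.CrossEntropyLoss()(model(batch), labels)
loss.backward()                         # one illustrative training step
```

In practice, the architecture matters less than the volume and quality of labelled scans it is trained on, which is exactly why access to large patient data sets is so valuable.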

The same principle applies to many other medical machine learning projects. It’s often possible to get some encouraging results from small sets of data, but to ensure reliability and realise the full benefit of using AI technologies for medical analysis, bigger data sets are required.
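The effect is easy to demonstrate on synthetic data. This sketch trains the same simple classifier on progressively larger samples; the exact figures are meaningless, but test accuracy generally climbs as the training set grows, which is the bigger-data argument in miniature.

```python
# A small experiment on synthetic data: the same model, trained on more
# and more records, generally scores better on held-out data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 500, len(X_train)):       # growing training-set sizes
    clf = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(f"{n:>4} records -> test accuracy {clf.score(X_test, y_test):.3f}")
```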

In the past, many technology providers have had access to patient records and personal medical data, and in the US this has long been governed by a law written to regulate the sharing of health records among ecosystem partners. So why the uproar about Google gaining access to patient healthcare records this week?

It may simply boil down to a matter of trust.

Although comparing Google’s handling of confidential patient records with its handling of personal social media data would hardly be fair, last year’s Google+ data security breach, which compromised the data of more than five million users, is still fresh in the minds of the public, policymakers and the cyber-security community.

Meanwhile, there is the investigation into potential “monopolistic behaviour” launched by 50 US states and territories in September. There is also the antitrust ruling by the European Commission earlier this year requiring Google to pay a fine of €1.49 billion (Dh5.9bn). Neither helps the digital giant engender public perceptions of trust.

Nevertheless, few would argue that harnessing the power of Google’s AI to improve patient treatment, reduce pain and suffering and, ultimately, save more lives is a bad thing.

The fact is that, in need of improvement though some of them may be, there are existing laws and professional standards that can be applied to the usage of patient data. As Google chief executive Sundar Pichai said earlier this year to Indian news channel NDTV: “If AI can shape health care, it has to work through the regulations of healthcare.”

That is a given. Trust, on the other hand, isn’t always a matter of law.
