Wachter, McClellan offer takes on measuring quality

Modern Healthcare - EVENTS

Two of the nation's leading experts on healthcare quality — Dr. Bob Wachter, professor and chief of the division of hospital medicine at UCSF Medical Center in San Francisco, and Dr. Mark McClellan, senior fellow and director of the Health Care Innovation and Value Initiative at the Brookings Institution and former administrator of the CMS — called for new ways of measuring clinical quality and outcomes during Modern Healthcare's second annual Virtual Conference on Quality and Patient Safety on June 18. The following are edited excerpts from a plenary panel moderated by editorial programs manager Maureen McKinney.

“We are not very good yet at measuring the real patients who we actually see, those who have multiple comorbidities.”

DR. BOB WACHTER

Dr. Bob Wachter on “feeding the measurement beast”: I had the privilege to spend the day at Boeing and fly a 777 simulator. One of the things I learned is the degree to which they try to protect the pilots from too many measurement requirements, because they know those requirements can get in the way of the pilots' focus on their work. Bob Myers, Boeing's chief flight deck engineer, a very interesting guy, said to me, “Airlines are always asking the pilots, ‘Can you just document what time you took off, how much gas you had when you started, and then, did you have any passenger complaints?’ So the pilots spend a fair amount of time on a computer or their iPad documenting things. But they don't do that stuff below 10,000 feet.”

So I said, “Well, we're constantly doing that stuff below 10,000 feet in terms of documentation in healthcare,” and Bob said, “That's the difference. When you're in the OR or with the patient, you're below 10,000 feet, and you shouldn't be doing that stuff.”

So I think the extraordinary burden that quality measurement has created for clinicians, who have to enter all of that stuff into our records while we're trying to take care of patients, has been vastly underestimated by the promulgators of quality measures. It's a very, very important issue.

Wachter on the need for measures that reflect complexity: We are not very good yet at measuring the real patients who we actually see, those who have multiple comorbidities. We're getting decent at measuring quality for the patient who happens to have just a myocardial infarction or just a stroke. But the real-world problem is much more complex than that.

Wachter on diagnostic errors: Measures, of course, tend to focus us on the things that we can measure and therefore, by necessity, focus us away from things that might be equally or more important but that we can't measure very easily. We have no idea how to measure diagnostic errors. We're decent at measuring medication errors, surgical errors and healthcare-associated infections, so of course those get a lot of attention in a world driven by measurement, and diagnostic errors get virtually no attention in the same world, despite the fact that they are an equally important, if not greater, safety hazard.

In today's environment of transparency and pay-for-performance, a hospital can look great by giving patients with pneumonia the right antibiotics, giving heart failure patients ACE inhibitors and giving heart attack patients aspirin, even if it got every single diagnosis wrong.

Wachter on opportunities in quality measurement: One hope is that automated measures will flow directly from patient care. I think we've got a lot of work to do on this, but one can certainly see a day where this is possible.

It would also be healthy to move from process or structural measures to outcome measures, but not if we can't risk-adjust those measures, and if we can't tell whether patients really are older or sicker or more complex in a lot of different ways. I think our ability to do that will get better over time, and it's already getting better as the science improves.

Policymakers who manage this ecosystem need to get better at looking at life from the perspective of the measured, not just of the measurer or of the beneficiaries of the measure. You can have 10 different quality programs all asking for different measures, and from where they sit, what they're asking for is reasonable, but from the standpoint of the practicing physician taking care of a sick patient, it's undoable. We have to be able to take on that perspective as we think through making the measurement process better. I think we're going to get there, but we've had a somewhat rocky start.

Dr. Mark McClellan on promising signs in quality measurement: Measurement is far from perfect, but it is getting better. We've gone from measures of a limited number of never events and some process-of-care measures that can be calculated from claims or billing data to, increasingly, measures based on clinical information, with an aim to do a much better job at tracking things like patient-reported functional status, combinations of risk factors for cardiovascular disease and condition-specific outcome measures. This is not easy.

Having better data available at the point of care for supporting patient decisions, making it more readily available for other uses, including quality reporting and payment, and generating measures as a byproduct of care delivery rather than having a separate process — all of that can make it easier to develop evidence on what works for particular kinds of patients and to support performance improvement.

I want to emphasize that while this may seem like a long way off, there are systems being developed now that are intended to do this. Most of the medical specialty societies and a range of private organizations are now developing clinical registry programs that are intended both to help support decision-making and to help provide better evidence on how different kinds of patients do with alternative treatments. Health information exchanges at the regional level are developing more capacity to identify gaps in care, exchange information to support care delivery and report on performance measures. And the growing number of systems that are providing integrated care, either as ACOs or health plans, are also developing these kinds of capabilities.

“I do think there are some promising opportunities and some good examples of how we can actually get to quality-measurement implementation in a way that fully supports better care and reduces the burden on clinicians.”

DR. MARK MCCLELLAN

McClellan on novel approaches: The Food and Drug Administration has been piloting a system over the last few years called the Sentinel System. What it does is create a system of active and ongoing medical surveillance around safety issues for prescription drugs, developing a way of conducting queries on key issues—in this case, questions of drug safety—pulling from data that are used throughout the healthcare system in a distributed fashion. So the FDA hasn't set up its own data warehouse, but it has collaborated with different private health plans, an increasing number of electronic health record-based systems, as well as integrated systems of care, to come up with standard data models that can be derived, with no additional effort on the part of clinicians, from the data that they are using in actual practice related to their patients' treatments and subsequent outcomes.

This FDA Sentinel distributed-analysis approach has focused not on solving all of the interoperability problems and all of the terminology problems with electronic data, but on some practical issues: some of the most important events, like myocardial infarctions or rare but serious side effects, and some robust ways of identifying, from the diverse data systems out there, the types of patients that might have these events based on the drugs they use and their other clinical and patient characteristics. Systems that are used for actual care delivery can produce reliable measures.

McClellan on the future of measurement: There are certainly a lot of problems with quality measurement today. While these challenges are real, the need for moving toward an increasing use of systems that rely on quality measures for payment and other purposes is not going to go away. But I do think there are some promising opportunities and some good examples of how we can actually get to quality-measurement implementation in a way that fully supports better care and reduces the burden on clinicians.

Wachter on the CMS’ new efficiency measure for value-based purchasing: I think there's no consensus yet. I think there's general agreement that we have to shift our focus from just looking at quality and safety and the patient experience to also driving efficiency at the same time. And yet, I think that effort is really in its infancy. We're looking at adjusted cost per beneficiary. It's a very early measure, and I suspect that there will be a lot of problems with it. Over time, it will need to get better, and we'll also need to begin looking at appropriateness measures: Did you do too much? Did the patient need that scan? Did the patient need that expensive drug when they could have gotten the cheaper drug? It's still early in our ability to measure that and measure it fairly.

McClellan on the potential of PCORnet, the Patient-Centered Outcomes Research Institute's new clinical data research infrastructure: The main focus of PCORI is comparative effectiveness: what treatments lead to better or worse outcomes for particular kinds of patients. PCORnet is confronting these issues of how you develop standard measures of meaningful outcomes from real-world systems of care delivery. This is hard to do for many types of measures, but following the principle of starting somewhere, I think the early PCORnet studies are going to focus on some important clinical outcomes that are relatively easy to measure. That's going to be one more set of incentives, supports and momentum for trying to get to consistent, reliable ways of measuring meaningful outcomes and doing risk adjustment, and other things that need to go along with it, from clinical practice. And because the same kinds of outcome measures that will be important in those comparative effectiveness studies will be important for quality measurement and improvement efforts as well, that's potentially one added synergy for getting to better measures in practice.
