‘We still have very serious quality problems’

Modern Healthcare - Q&A

The Joint Commission, which certifies quality and safety at the nation’s hospitals and other providers, currently accredits about 22,000 organizations in the U.S. and abroad.

It recently stirred controversy when it suspended its Top Performer award program for 2016, citing a need to review its quality measures. Dr. Mark Chassin, the CEO of the Joint Commission, recently spoke with Modern Healthcare quality and safety reporter Sabriya Rice about the move. This is an edited transcript.

Modern Healthcare: The Top Performer program has been around for only five years. Why did you suspend it?

Dr. Mark Chassin:

The Joint Commission is continuously evolving and improving how we do our work, hopefully anticipating the changes that are going on in healthcare delivery systems. In this particular instance, we are keeping pace with the move to electronic quality measures and wanting to make sure that it’s done well.

We are not getting out of that work. For example, we are increasing our requirements for hospitals to report perinatal measures, going from hospitals that had at least 1,100 deliveries for the past two years to hospitals that have 300 or more deliveries. That will encompass about 80% of all hospitals with delivery services.

But we thought that the Top Performer program could not continue in its current form because of the flexibility that hospitals now have in reporting data on quality, which includes reporting electronically. We don’t have enough experience to be able to compare measures reported that way with the traditional chart abstraction measures.

MH: Will it definitely be back in 2017?

Chassin: I can’t answer that right now because we’re in the middle of evaluating this program. I can say, though, that over the course of the entire Core Measure Program, we’ve seen tremendous improvement on the part of hospitals on these very solid, highly valid measures. Back in 2002, we had only eight measures that met our current criteria for accountability measures, and only 7% of hospitals performed above 95% on those eight measures. Fast-forward to last year: the data we reported for 2014 covered 49 measures, and 80% of hospitals performed above 95%. That’s enormous improvement over the life of this program.

We thought that the Top Performer program was a great way to recognize and further encourage improvement on measures that hospitals were collecting in common. When we are confident that we can recognize top performers in a similar way across a wide array of different measures, we’ll bring it back.

MH: The Joint Commission has been criticized for focusing on process measures rather than outcome measures.

Chassin: We have outcome measures in our portfolio. But an outcome measure has to meet some pretty strict criteria. A lot of the outcome measures used by the CMS, Healthgrades and U.S. News fail to meet the criteria for accountability. For example, the mortality measures that the CMS and others use fail because they are very poorly risk-adjusted for critical patient characteristics that affect the risk of mortality.

I’ll give one example. The stroke mortality measure that the CMS uses does not adjust for differences between patient populations in the severity of the stroke that caused the hospitalization. When you add severity as a critical component, 58% of hospitals classified as worse than expected are reclassified as average mortality. So the failure to include severity, which affects the acute myocardial infarction measure, the heart failure measure and the pneumonia measure as well as the stroke measure in the CMS’ database, is an absolutely critical failing.

Not all outcome measures are in that category. When I was health commissioner in New York 20-plus years ago, we started the first program of statewide collection of clinical data on both severity and other factors predicting risk for mortality. We published data on risk-adjusted mortality following coronary bypass surgery by hospital and surgeon. There are similar programs in many other states. Those kinds of outcome measures are absolutely fine for accountability.

The overarching problem is that in order to improve outcomes, hospitals have to work with their physicians and other caregivers on improving processes. They can’t improve outcomes directly. They can’t wave a magic wand. So we believe that both process measures and outcome measures are essential to effective quality improvement.

MH: Joint Commission-accredited facilities often get hit with immediate-jeopardy warnings from the CMS. Why do those not affect accreditation?

Chassin: We investigate serious safety events ourselves. We have a different definition of and approach to those incidents. But in some instances, serious adverse events do jeopardize accreditation status.

Our job is to make sure that the hospitals and other organizations where we see serious lapses in safety fix them as rapidly as possible. If they don’t, and we go back and find the problems still unfixed, we deny accreditation to those organizations.

It doesn’t happen very often because that’s an extreme outcome. The vast majority of organizations want to fix their safety problems.

MH: Federal health officials say fewer patients were harmed in hospitals over the past five years. Yet the National Patient Safety Foundation says overall healthcare is not any safer.

Chassin: We really don’t have good metrics on a national basis to judge overall safety or quality. It’s clear that we’ve made progress in a number of areas, in reducing healthcare-associated infections, for example. But we still have very serious quality problems, partly because the goalposts keep moving.

We keep adding to the healthcare delivery armamentarium: tests, treatments, procedures and equipment that require safe adoption and safe integration into how we provide healthcare. What constituted high quality 10 years ago is not the same as what constitutes high quality today. It’s a constant effort to increase safety and quality.

We are learning from other kinds of organizations that manage similar levels of risk, but do it much better than healthcare. They’re called high-reliability organizations. Healthcare can get to that state where the operation of the organization is so good that zero harm is a byproduct of the way they do their work. That’s the way commercial aviation, nuclear power, even amusement parks maintain high levels of safety.

The journey starts with the commitment of leadership to getting to the ultimate goal of zero harm. That means the board of trustees, physician leaders, nurse leaders, executives: all components of leadership need to be committed to achieving that goal.
