Patient satisfaction cannot be judged on just one measure
An article in Modern Healthcare’s Aug. 15 issue by Rich Daly (“Unsatisfactory marks”) raises some troubling objections to patient satisfaction as a legitimate quality indicator. The objections are nothing new. However, given that the CMS will soon tie reimbursements to quality scores that include patient satisfaction, the doubts need to be laid to rest.
Essentially, the article suggests that patient satisfaction scores are “biased” in that “high marks for perceptions of care may have little connection to high quality clinical outcomes.” Well-respected medical centers with high “quality” scores and reputations may have low satisfaction ratings.
Daly correctly notes that patient satisfaction scores vary by hospital size and region. Scores for larger hospitals tend to be lower than those for smaller hospitals, while Northeastern hospitals tend to score lower than those in the South and Midwest. However, some researchers dismiss the relevance of lower satisfaction scores for academic medical centers in the North by suggesting that they have a preponderance of patients “with either depression or complex and serious illnesses.” Intentionally or not, this comes across as saying that patients have to be either emotionally disturbed or very sick not to recognize “the high quality of the clinical care that other measures have found those institutions provide.” Thus, patient satisfaction surveys are “biased.”
It is indeed true that satisfaction and clinical process measures may vary independently. But this shouldn’t be surprising, as they are measuring quite different aspects of care. It is not unusual for patients to receive high-quality technical care (proper tests and treatments with minimum errors and mortality) while experiencing low-quality interaction, empathy, information and logistical management (such as ambient noise, delays in appointments, transport or pain relief). None of these patient experiences is included in the public process-of-care measures that many view as the only legitimate proof of quality. Patients, of course, are typically unaware of the many technical processes and indicators that constitute these quality scores. They judge hospitals on the basis of their personal experience. To patients, this experience defines “care.”
The full definition of “care” must necessarily include both the technical intervention and the manner in which it is delivered. In an ideal world, the two measures would vary in tandem. This is not such a world. In our world, the patient can have a lousy personal experience while getting first-class treatment, or vice versa. In neither instance is care of high quality. It is wrong to define—or reward—quality on the basis of either patient satisfaction or technical process measures alone. To deny the patient’s personal experience a full role in the definition of “quality” is essentially to define “care” as involving only the technical treatment or diagnostic procedure.
If the quality of treatment and satisfaction do not vary in tandem, is it legitimate to risk-adjust patient satisfaction scores? The article quotes Dr. James Merlino of the Cleveland Clinic as saying that hospitals should be held responsible only “for things that they can actually improve.” This suggests that for things over which hospitals have no control, risk adjustment could be appropriate.
No one can argue with this. The risk with risk-adjusting, however, is its admission that we cannot control a situation. Even if we risk-adjust the satisfaction scores of large Northern medical centers, their patients won’t be more satisfied, nor their overall quality of care better. Where do we draw the line? We also know that younger patients are less satisfied with care than older ones. Male and female scores differ, as do the scores of different ethnic and economic groups. There are good reasons for these experiential differences, and many are ostensibly addressable and within a hospital’s control. For example, if younger patients are less satisfied with care than older ones, it may be due to the threat of illness to their self-image as indestructible and their unfamiliarity with hospital procedures. Perhaps more information and empathy from staff could make their experience easier. Risk adjustment is a cop-out.
Admittedly, there may be no fix that could enable large academic medical centers to score near the top of patient satisfaction. Size, noise, impersonality, a huge and diverse staff, the presence of inexperienced medical, nursing and tech students and other factors can make any patient’s experience less than optimal.
Here’s a suggestion: Rather than let all of them off the hook by risk-adjusting, the CMS could put academic medical centers in their own database, and use that for judging the patient experience portion of their quality equation. This would force those in the lower half of this peer group database to take patient satisfaction more seriously. Some academic medical centers actually do quite well with patient satisfaction, meaning that improvement is possible. Risk-adjusting satisfaction scores for anything only makes hospitals more satisfied—not their patients.
No one (I hope!) would agree that “care” consists only of diagnosis or treatment. If care necessarily embraces both technical interventions and the manner in which patients experience them, then patient satisfaction must be taken seriously and hospitals must be held accountable for the full meaning of “quality care.”
Irwin Press is professor emeritus at the University of Notre Dame and co-founder of healthcare quality-consulting firm Press Ganey Associates, both based in South Bend, Ind.