RATING THE RATERS
EXPERTS RAISE QUESTIONS ABOUT HOSPITAL RATERS’ ASSORTED METHODS AND LACK OF TRANSPARENCY
When administrators and staff at the 127-bed Bolivar Medical Center in Cleveland, Miss., learned in March that their hospital had scored 11 out of 100 in the 2014 Consumer Reports Hospital Safety Report, placing it at the bottom of a list of more than 2,500 U.S. hospitals, they were surprised and disappointed. That same month, the hospital had earned a Gold Seal of Approval from the Joint Commission. And for three years in a row, Bolivar had won the Joint Commission’s Top Performer designation for its use of evidence-based practices in three care areas. Plus, in spring 2013, it got an “A” hospital safety grade from the Leapfrog Group, a not-for-profit representing large employers.
Since 2012, when Bolivar’s owner, LifePoint Hospitals, started participating in the federal Partnership for Patients program, the hospital had achieved significant quality-of-care improvements, said Dr. Rusty Holman, LifePoint’s chief medical officer. The hospital saw a 45% drop in mortality rates and a 9% reduction in 30-day readmissions, he said.
Holman complains that Consumer Reports used older data that did not reflect Bolivar’s improved quality of care. “We applaud the spirit in which ratings are intended,” he said. But overall, he said, the proliferation of different hospital ratings is complex and confusing for hospital leaders. “One can only imagine how dizzying it is for consumers.”
Many organizations have started publishing hospital performance measures and report cards in recent years, growing out of the movement for improved quality and patient satisfaction, lower costs, and greater accountability and transparency. Among the organizations publishing these ratings and measures are government agencies, news organizations, healthcare accreditation and quality groups, and companies and not-for-profits focused on transparency. The emergence of these reviews has put pressure on hospital leaders to do what’s necessary to improve their scores.
But the various reports use significantly different methodologies and have different areas of focus, often producing sharply different ratings for the same hospitals during the same time period. Some hospital leaders say this makes it more difficult to know which areas to prioritize to improve their quality of care and rankings.
Some ratings groups do not disclose their methodology on the grounds that it’s proprietary. That has prompted criticism from hospitals and independent experts. “It can take a considerable amount of digging on some sites to find the metrics and how much they count toward the final rating,” said Dr. Ashish Jha, a Harvard University professor of health policy who serves on the advisory committee for the Leapfrog Group. “If a rating program isn’t willing to make its methodology completely transparent, then no one should use it.”
Lately, some hospital groups have started pushing back by rating the raters. Even so, some experts say hospitals should prepare for even more ratings scrutiny as consumers, facing higher cost-sharing in their health plans, increasingly shop and compare healthcare providers on quality, service and price.
“The science of performance management is still in the early stages and we have not all come together and agreed on an evaluating system. It’s a chaotic picture,” said Dr. John Santa, medical director of Consumer Reports Health. “But if we want to get to knowledge, we have to go through that stage of confusion.”
Reasonable people disagree on what measures are most important to include, which makes for significant differences in the various ratings, Jha said. One problem with that, though, is that hospitals can cherry-pick favorable ratings for marketing purposes, whether or not those ratings have much validity. “Anyone who wants to dodge accountability can hang their hat on some obscure rating that was good,” he said.
A recent Modern Healthcare online survey of readers indicated that healthcare executives are ambivalent about hospital ratings efforts. Of more than 230 respondents, 53% said their facility had received a poor rating from at least one ratings organization while receiving a high rating on similar measures during the same time period from another group. More than 81% said there are too many groups publishing ratings. Still, most respondents said the ratings were moderately valuable.
“I would say that the market is crowded,” said Alicia Daugherty, practice manager of research and insights at the Advisory Board Co., which in January 2013 published an analysis of the organizations that publish hospital quality data. “There are so many out there that patients and organizations have difficulty distinguishing.”
The Advisory Board evaluated 12 ratings groups, including independent organizations such as Leapfrog and Total Benchmark Solutions, news organizations such as U.S. News & World Report, government measures such as CMS’ Hospital Compare and the Agency for Healthcare Research and Quality’s Patient Safety Indicators, and accreditation groups such as the Joint Commission.
Some groups use a star rating system, some use a 1-to-100 scale, and others use an academic-style A-to-F grading range. The groups also vary in how frequently they publish ratings, with some issuing reports annually and others offering more frequent updates.
The raters rely on data sets from the government, such as the Medicare Provider Analysis and Review and the Hospital Consumer Assessment of Healthcare Providers and Systems. Some create their own surveys and solicit voluntary responses from the hospitals. Others use diagnostic and procedure coding for specific diseases, conditions and services. But not all groups disclose how they weight the various quality measures in producing their final scores. “They have to create a distinct product,” Daugherty said.
Clear purpose statement
Now other organizations are beginning to scrutinize the ratings services to give hospitals guidance on which ones merit attention and action. In March, the Association of American Medical Colleges issued a set of guiding principles it hopes academic medical centers will use to evaluate quality reports.
The principles advise hospitals to make sure a ratings group offers a clear and concise purpose statement, explicitly describes its intended audience and uses a transparent methodology. The AAMC’s principles were endorsed by the American Hospital Association, America’s Essential Hospitals and the Federation of American Hospitals, among others.
The Informed Patient Institute, a consumer-oriented not-for-profit based in Annapolis, Md., has rated the usefulness of about 70 hospital ratings, 70 online doctor rating sites and 60 nursing home report cards. The IPI gives each ratings group a grade ranging from A for “outstanding” to F for “not worth your time,” based on 15 criteria including timeliness of the information, presentation and ease of use.
The IPI gave the CMS’ Hospital Compare, U.S. News & World Report’s America’s Best Hospitals, and Leapfrog’s Hospital Safety Score a B, while giving Healthgrades and the Joint Commission’s Quality Check a C.
Last fall, the Healthcare Association of New York State decided to address frustrations among its hospital members that different ratings groups had published sharply contradictory reports about the same hospitals. For example, one hospital ranked in the top 20 by U.S. News & World Report’s Best Hospitals received a below-average score of 49 from Consumer Reports’ Hospital Safety Rating, and garnered a B from Leapfrog’s Hospital Safety Score.
“Hospitals take (rating sites) very seriously and use them to figure out how to deliver better care,” said Kathleen Ciccone, HANYS’ vice president of quality and research initiatives. “But unless there is some type of standardized approach with very transparent methodology, it’s going to be very difficult for hospitals to really apply the ratings for the purposes of quality improvement.”
So HANYS created its own evaluation, called the “Report on Report Cards,” in which the association rated the raters on a scale of zero to three stars. HANYS gave the Joint Commission’s Quality Check and the CMS’ Hospital Compare the highest rating of three stars. The Truven Health Analytics 100 Top Hospitals, Healthgrades’ America’s Best Hospitals and Consumer Reports’ Hospital Safety Ratings received one star. U.S. News & World Report’s Best Hospitals received a half star.
The ratings groups argue, however, that HANYS has an obvious conflict of interest in rating the raters and that it was not transparent about its own methodology.
Representatives of Truven and U.S. News dismissed their low ratings from HANYS, saying the association’s report card did not take account of their unique goals.
Truven focuses on helping hospital leaders reach actionable benchmarks, said Jean Chenoweth, Truven’s senior vice president of performance improvement.
U.S. News evaluates hospitals on whether they excel in treating the most medically challenging patients, said Ben Harder, director of healthcare analysis for U.S. News.
Healthgrades did not respond to requests for comment.
Consumer Reports’ Santa said it’s hypocritical for hospitals to complain about ratings groups when they often use favorable rankings in their marketing and advertising or make claims about having the “best doctors” or “the most innovative technology” without good evidence.
“I chuckle when I get reports that hospital CEOs are worried or confused about ratings,” Santa said. “They’re not so confused that they are not using comparisons in their own advertising.”
Hospital leaders will have to accept that consumers are starting to scrutinize healthcare the same way they look at other products and services, said Leah Binder, president and CEO of the Leapfrog Group. “Consumers are accustomed to reviewing a lot of reviewers and coming to their own conclusions,” she said. “Hospitals shouldn’t be exempt.”
Indeed, studies show that consumers increasingly are consulting published ratings to choose providers. A report published in February in JAMA found that 65% of survey respondents were aware of online physician-rating sites. Among those who used the sites, 35% reported selecting a doctor based on good reviews, while 37% avoided a doctor based on bad reviews.
That is raising the stakes for hospitals. In the recent Modern Healthcare online survey, when asked to rate on a 1-to-5 scale how damaging a poor rating could be for a hospital, with 5 the most damaging, the largest share of respondents, 38%, answered 3.
While 79% said their hospital had not been harmed by a negative rating, others said a poor rating had hurt employee morale, led to fewer referrals, and caused a reduction in payments.
While hospitals would prefer a more uniform approach among the raters, that may not be practical, said Dr. Jeff Rice, CEO and founder of Healthcare Bluebook, which helps patients compare healthcare prices. “I completely appreciate that hospitals would want to have one standardized set of criteria,” Rice said. “But I don’t think there’s any way to keep people from innovating.”
Hospitals themselves should take the lead in disclosing quality and cost information in a clear and useful way, he argued. “If they did that,” he said, “then others wouldn’t spend so much time trying to reinvent the wheel.”