We know patients want to choose providers that deliver the highest quality of care; they tell us so in survey after survey. The trouble is: how do healthcare providers show them that they're good or, at the very least, better than the local competition?
The business of communicating quality is a tough one. There is no single, clear definition of what constitutes quality healthcare. This surprises many people outside the field, but those of us who spend our time here understand the complexities of the discipline.
Every specialty has its own elements of quality, and even within a specialty, organizations measure quality in many different ways. The number of cases performed can be important; the training the care team has completed may be a factor; adherence to best-of-breed practices and protocols may be key, as may the high-tech tools available at the facility.
Add to this that no two patients are alike: they arrive with different levels of disease progression, different baseline levels of overall health, and a range of comorbidities, all of which add layers to the quality picture. With all this complexity, you begin to see the difficulty of delivering solid quantitative measures of the relative quality of, for example, cardiology programs.
The quality data that's reported to government agencies is little help here. Truly, most patients would be shocked to learn that one of the key metrics for the quality of a cardiology program is how long it takes for a patient with symptoms of a cardiac event to receive an aspirin!
And we have little credibility when talking about the quality of our own organizations and the care they provide. In a world saturated with marketing messages and populated by cynical consumers, it's assumed that we'll say great things about ourselves.
So despite the lack of consistent definitions for quality and the complexity of the measurement problem, there's a clear desire for external marks of quality, preferably from impartial third parties, to give healthcare providers a tool for communicating that they're simply better than the competitor across the street.
Where such a desire exists, companies step in to fill it, and there's no shortage of players in our space: Healthgrades, Leapfrog Group, Truven, U.S. News & World Report, Zagat and many, many more.
So many more, in fact, that it’s not uncommon to see dueling quality awards within a market with each provider touting a different award. The result is greater consumer confusion. Healthcare providers spend their time talking about someone else’s award program rather than about something meaningful to patients and the community.
It’s with this in mind that the Association of American Medical Colleges (AAMC) has released its guidelines to assist provider organizations in better evaluating rating programs. These guidelines fall into three categories:
- Purpose – What’s the purpose of your ratings? What’s your particular take on the idea of “better”?
- Transparency – What data factors into scoring and where are you getting this data? What’s the underlying methodology? Show your work.
- Validity – What makes your rating meaningful? Is this the output of research and standards or is it an arbitrary scoring scheme that you’ve made up in your free time?
According to a recent piece in Healthleaders, none of the rating systems in use today meets all the criteria. Unfortunately, rather than applauding this effort to deliver a better quality of quality ratings, a number of rating firms appear to disapprove of the push for greater transparency, perhaps because it could undermine their business models.
It turns out that most organizations are enthusiastic about transparency and ratings when they work to their benefit. We all benefit when transparency and validity are the norm. Kudos to the AAMC for providing a thoughtful way to evaluate the evaluators.