Calculating Quality

I’m reading (listening, actually) to the recently released SuperFreakonomics by Steven Levitt and Stephen Dubner.  It’s the sequel to the best-selling Freakonomics, in which the tools of economic analysis are applied to scenarios that we do not typically think of as economics problems.

One of the topics in the new book is healthcare quality.  The book has a fantastic product placement for Microsoft’s Amalga (and kudos to whoever it was at Microsoft who thought to hand these guys a few terabytes of data). Amalga, coupled with Social Security records (for information on patients dying after leaving the hospital), provides the authors with a fantastic amount of data to work with.  They have more information than any organization I’ve seen, which makes the conclusion that I carry away from their analysis even more surprising.

There is no way to systematically measure the quality of doctors.

This seems crazy to any of us paying attention to the healthcare space.  Hospitals are measuring and reporting thousands of quality metrics.  It seems that every government agency that touches healthcare along with every certification or designation program has its own list of items that must be tracked and communicated.

There’s even pressure to adjust reimbursements based on quality.

Further, communicating quality is a foundation for the empowerment of health consumers.  If consumers can’t determine the quality of care being given at one facility or by one physician over another, how can they ever make better care decisions?

So why do Levitt and Dubner come to this conclusion?  The answer is selection bias.

Sicker patients end up with different doctors and at different facilities than less sick patients.  Multiple comorbidities associated with a condition that one physician may treat often push that patient to a different physician or, in the case of surgical care, may lead the surgeon to perform the surgery at a hospital rather than an outpatient surgical center.

Some of this selection bias is perfectly reasonable.  For example, a heart patient with an unusual issue may be referred to another specialist who focuses on treating that specific issue.
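This kind of selection bias can actually invert raw quality rankings (a form of Simpson’s paradox). A minimal sketch, with entirely invented numbers: suppose Dr. A, a top specialist, sees mostly high-risk patients and does better than Dr. B within every risk group, yet looks worse on raw mortality.

```python
# Hypothetical illustration of how case mix can invert raw quality rankings.
# Dr. A sees mostly high-risk patients; Dr. B sees mostly low-risk ones.
# All numbers are invented for illustration.
caseload = {
    # doctor: {risk_group: (patients, deaths)}
    "Dr. A": {"high_risk": (90, 18), "low_risk": (10, 0)},  # 20% / 0% by group
    "Dr. B": {"high_risk": (10, 3),  "low_risk": (90, 2)},  # 30% / 2.2% by group
}

for doctor, groups in caseload.items():
    patients = sum(n for n, _ in groups.values())
    deaths = sum(d for _, d in groups.values())
    print(f"{doctor}: raw mortality = {deaths / patients:.1%}")
    for group, (n, d) in groups.items():
        print(f"  {group}: {d / n:.1%}")
```

Dr. A’s raw mortality (18%) is far worse than Dr. B’s (5%), even though Dr. A has lower mortality in both the high-risk and the low-risk group — the raw number measures the caseload, not the doctor.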

Unfortunately, as the stakes for “making the numbers” grow, so do the concerns about whether or not we’re tracking the right numbers, and about the potential for unforeseen consequences.

A Wall Street Journal op-ed piece from earlier this year presents the risks of making quality a high-stakes game for providers.  The concerns range from insurers mandating sub-optimal treatment to physicians dropping patients who don’t respond to treatment or fail to adhere to treatment regimens, because those patients damage the physician’s outcome statistics.

While we need to continue to push for improvement and standards for healthcare, how we go about doing so is critically important. Measuring the wrong things can be worse than measuring nothing at all.


3 thoughts on “Calculating Quality”

  1. In addressing your question of how we go about making quality comparisons in healthcare, a few thoughts come immediately to mind. It seems logical that separate analyses & scales are required, according to the specialty being examined. That way you can compare apples to apples. In addition, perhaps some method of “weighted averages” should be applied so that patient acuity that is unrelated to treatment does not count negatively against the physician. Lastly, the entity that collects and analyzes quality data, and then publishes the results of that research, should be fully independent of the organizations being examined, in order to assure unbiased results.

    In the Primary Care arena, the National Committee for Quality Assurance (NCQA) serves this “3rd Party” function. The NCQA focuses its measurement on individual specialties and/or conditions. Participating providers are held to standardized measures for determining their quality “score”, which keeps the playing field fair & level for everyone.

    Brian Mack
    Marketing Manager
    Grand Valley Health
    Grand Rapids, MI

  2. Thanks for the response. I’m familiar with NCQA, but don’t know the specifics of the data that they’re collecting or how they do that collection.

    What I’ve generally seen in this area are metrics related to activity, such as what portion of heart attack patients coming into the ED receive aspirin within the allotted time, or indicators of good overall process, such as hospital-acquired infection rates.

    Genuine outcome measures seem less well represented or, in some cases, misrepresentative. For example, I have heard organizations complain that standard quality measures today don’t capture whether a patient dies after being sent home. As a result, mortality statistics between facilities are not always apples-to-apples comparisons.

    The SuperFreakonomics argument, though, goes further (they had outside data for mortality regardless of whether it occurred in the hospital). If the best physicians get the most challenging cases, their outcomes may, statistically, be worse than those of others managing the same conditions. It is likely that the costs of the care that they’re delivering may be higher as well.

    Do you know if NCQA uses some sort of case mix adjustment to compensate for this selection bias?

    If not (and I’ve not seen that in any organization’s reporting of quality numbers), then we run the risk of physicians simply avoiding those problem cases. This is clearly an issue.

    Certainly each specialty would have unique metrics. The question is, are they the right metrics?
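    To make the case mix question above concrete: one common form of adjustment is indirect standardization — comparing a physician’s observed deaths with the deaths expected given the risk profile of that physician’s patients. A minimal sketch, with invented baseline rates (not NCQA’s actual method):

```python
# Indirect standardization sketch: observed-to-expected (O/E) mortality ratio.
# 'baseline' holds hypothetical population-wide mortality rates per risk group.
baseline = {"high_risk": 0.25, "low_risk": 0.02}

def oe_ratio(cases):
    """cases: list of (risk_group, died) tuples for one physician's patients."""
    observed = sum(died for _, died in cases)
    expected = sum(baseline[group] for group, _ in cases)
    return observed / expected  # below 1.0 means better than expected

# A physician with a tough caseload: 4 high-risk patients, 1 death.
tough = [("high_risk", 1), ("high_risk", 0), ("high_risk", 0), ("high_risk", 0)]
print(oe_ratio(tough))  # 1 observed / 1.0 expected -> 1.0, i.e. as expected
```

    Under a scheme like this, a specialist who takes only hard cases is scored against what those hard cases predict, not against a colleague’s routine caseload.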

