WhiteCoat

Anatomy of a Tragedy and Healthgrades.com

Saul Elbein deserves a shout out for the article he wrote in the Texas Observer titled Anatomy of a Tragedy. If you haven’t read the article, you need to go get a cup of coffee, sit down and take it all in. I disagree with his suggestion that the problems raised in the article may have been the price of living in a “free market”, because a free market system would require more transparency, but I won’t let my disagreement with him on this point overshadow an excellent article.

The article chronicles how a neurosurgeon in Texas permanently injured and likely even killed multiple patients during surgery and how the Texas Medical Board failed to respond to complaints in a timely manner. As a result, the neurosurgeon, Christopher Duntsch, continued operating on patients, and patients continued having bad outcomes from his surgeries. The article also shows the downside of tort reform in Texas – noneconomic damages are capped, even for patients who have been permanently injured and for families whose loved ones have died due to a physician’s malpractice.

One of the issues raised in the article that I wanted to expand upon was why patients kept going to Dr. Duntsch for surgical procedures. After all, this doctor reportedly maimed patients during surgeries. Who would go to him knowing that information? Obviously his quality as a physician was substandard, right?

Maybe not. Check out Dr. Duntsch’s profile on Healthgrades.com. What you would have seen before Healthgrades.com removed the information was that the same doctor who was reported to have caused the deaths of several patients and who reportedly permanently injured multiple other patients was rated as a 4.3 out of 5 in patient satisfaction. Dr. Duntsch rated above the national average in every one of Healthgrades’ patient satisfaction survey details except the total wait time in exam rooms – where he rated the same as the national average.
Now Healthgrades.com has decided to remove all of the satisfaction information from Dr. Duntsch’s profile, so all you’ll see is a bunch of blanks on his ratings page.
But I got a screen grab of the ratings before Healthgrades erased them.

Why am I making such a big deal about Dr. Duntsch’s satisfaction ratings on Healthgrades.com? Simple. The discrepancy between Dr. Duntsch’s patient satisfaction and his quality of patient care clearly shows how patient satisfaction fails as a measure of health care quality.
Quite a few patients were extremely impressed with Dr. Duntsch … until they woke up from his surgeries paralyzed, in severe pain, or dead (yes, I really wrote “woke up dead” – how many of you remember that post?). Patients in the article told Saul Elbein that they didn’t know any better. They had no way to know how bad a physician Dr. Duntsch may have been.

Healthgrades.com is not the way to make that determination.

In fact, Healthgrades.com has drawn many complaints about the accuracy and validity of its ratings. It is given the lowest possible score by 88% of the people rating it on ConsumerAffairs.com. One reader wrote to me about how Healthgrades.com published that he was still seeing patients when he had been retired for 10 years, how Healthgrades published his home phone number, and how patients call that number at all hours of the day and night, then yell at him because he is retired. According to the comments on the ConsumerAffairs.com site, Healthgrades has repeatedly been accused of publishing inaccurate information about physician practices and of publishing physicians’ personal information (such as home addresses, private telephone numbers, and names of spouses). Before allowing physicians to correct the information, Healthgrades.com reportedly requires them to agree to legally inappropriate terms of service on its web site.

Of course, when doctors complain about how patient satisfaction survey companies like Press Ganey have invalid statistics that don’t measure physician or hospital quality, those with a vested interest … like Press Ganey CEO Patrick Ryan … tell them to “suck it up.” We’re just a bunch of whiny professionals who can’t stand the fact that we’re being rated, right? Want to know the real kicker? Healthgrades.com CEO Roger Holstein has a lot of experience with healthcare information services. According to this article, he is a board member and director at … Press Ganey.

Healthgrades.com erased the ratings on Dr. Duntsch’s profile for a reason. That reason was that Healthgrades.com KNEW that Saul Elbein’s article exposed the disconnect between satisfaction and quality and if that disconnect is made public, it would adversely affect Healthgrades’ business model.

If you want to keep pretending that patient satisfaction is a good measure of health care quality, that’s your call. I’m sure that there are plenty of other physicians like Christopher Duntsch who will be rated highly at the Healthgrades.com web site. Healthgrades’ Twitter account states that more than 200 million people use its services to select physicians and hospitals and that it gives “comprehensive healthcare information to help you take action.” If you believe that what you read on Healthgrades.com or even on Press Ganey reports is an accurate reflection of health care quality, now you’ve got some transparency. You can now take full ownership of whatever bad outcomes may result from your decisions.

Oh, and if you want to talk to Healthgrades.com’s CEO Roger Holstein about this whole Dr. Duntsch issue, his number is 303-716-0041. If you need to speak to him, you better call him quickly, though … before Healthgrades.com changes that information, too.

30 Responses to “Anatomy of a Tragedy and Healthgrades.com”

  1. Pat says:

    Even if Healthgrades were completely reliable, it doesn’t appear to contain any information about whether the treatment actually worked — which, to me, is sort of the point of going to a doctor.

    Where do you suggest patients look for that information? When I needed such info, all I found was stuff about waiting times and nothing about effectiveness, accurate diagnosis, rates of complications, need for follow-up treatment after their work, etc.

    • WhiteCoat says:

      Good question with no good answer.
      Picking a doctor is like picking a plumber or a lawyer, or even a significant other. Different people value different qualities. For example: I know a doctor who is difficult to get in touch with and who has a very bland personality but who is incredibly bright and who has made some amazing diagnostic and therapeutic decisions – including outsmarting specialists from a tertiary care center on several occasions. I know another doctor who has a reputation for overordering expensive testing on patients when tests are only tangentially related to a patient’s symptoms … but he repeatedly reminds people of how he has found a couple of needles in the haystacks throughout his career. And a third physician is also an excellent diagnostician who doesn’t rely on a bunch of testing, but he is so busy that patients have to wait weeks to see him and he tends to overlook routine medical care with patients. Whom would you choose for your doctor?

      The points of the post are that medical “quality” is a nebulous concept and that there isn’t a way to effectively gauge “quality” in healthcare by using some survey – especially when the quality is determined by people who don’t know much about the profession. Quality should be about accurate diagnosis and complication rates. However, if you had to pay tens of thousands of dollars for testing to get to the accurate diagnosis a doctor makes – even if the diagnosis doesn’t affect the treatment plan – would you judge that doctor higher or lower in quality? What about if a doctor diagnoses a viral head cold as an ear infection and prescribes antibiotics? In either case, the symptoms resolve on their own, but in one case the misdiagnosis of an ear infection and prescription of unnecessary antibiotics put the patient at risk for resistant infections in the future and cost the patient extra money. Most patients would never know the difference, but do you see my point about how judging diagnostic abilities is a moving target that is likely impossible to hit? And that’s just one potential measure of “quality.”
Major complications like those attributed to Dr. Duntsch should be reported, but if they are reported to the public, it will likely create an incentive for physicians either to label the complications as something else that isn’t reported or to avoid patients likely to experience certain complications. See this similar post illustrating the point from a few years ago: http://www.epmonthly.com/whitecoat/2010/02/reducing-bloodstream-infections/

      With regards to where patients should find information about doctors, pick out qualities that are important to you and then ask your friends or your family doctor or the staff you see in the ED or paramedics about what doctor they would recommend. People in the business know about other people in the business. I know which doctors I would and would not take my family to for multiple different medical issues.

      That being said, you also have to be realistic about what you are seeking. If you want a physician as smart as House who looks like George Clooney who will treat you over the phone and see you at a moment’s notice and who charges $20 for an appointment, you’re setting yourself up to be disappointed.

      The bottom line is that if you are looking for metrics to measure “quality” of a physician or a health care institution, those metrics touted by Healthgrades and other ratings companies are often misleading at best.

      When considering sites like Healthgrades.com, think to yourself whether it is better to have a broken watch that is correct twice a day or whether it is better to have no watch at all. Or is there another option?

      If patients are more concerned with the speed at which they get an appointment and the courtesy of the office staff than they are with a physician’s effectiveness, knowledge or quality, then I’m sure that Healthgrades.com can point them in the right direction.

It’s just not what I’d recommend, and the post illustrates one reason why.

  2. Janice says:

    Patients should be able to access that information from medical experts. Healthcare should be about caring.

    Before a doctor becomes overwhelmed and resorts to substance abuse, they should be able to access help from medical experts.

    Medical experts should encourage learning and healing and there should be systems in place to do so. Caring and prevention of harm should be a priority.

    • WhiteCoat says:

I know lots of caring physicians who are not very good clinicians or diagnosticians.
      If you want a physician who sits at your deathbed and holds your hand while you cry about the cancer he missed, that’s your prerogative. I personally think that health care should be more than just about caring.

  3. SteveM says:

    I assume you have mentioned this ‘physician’ before, but ‘Dr.’ John Alexander King essentially caused an entire hospital system to enter bankruptcy after less than nine months.

    http://www.spartacus.blogs.com/spartacus/2005/09/Doctors_tale.html

Unfortunately, it takes time before referring physicians can evaluate the competency of a surgeon. During that time, you basically have to rely on the vigilance of the credentials committee. In King’s case, he managed to generate hundreds of millions of dollars in bankruptcy claims in only seven months.

I have had a number of CEOs be very upset with me during my tenure as chair of the hospital credentials committee. They saw false information on a physician’s credentials as just “routine resume polishing.” I viewed it as deceit and grounds to reject an application for privileges (or to not offer employment). They saw the fact that it took four or five different residency programs for someone to finally graduate as “issues from the past.” I viewed it as a significant red flag.

One of the few positive outcomes of King’s case is that once I started waving it in the administrators’ faces, they became a LITTLE more willing to reject physicians with serious ‘red flags.’

    ————–
    Now for some back-story: I have never practiced in Texas and know very few physicians who do. However, around 1990, I worked in the AF Surgeon General’s office just outside of DC. We were bringing in a Colonel to serve as chief of our quality assurance efforts. As part of the routine paperwork we were stunned to find out that the Texas medical license of this extremely well-respected physician had been suspended. He was just as surprised.

    After a little investigation, we discovered that an ambitious new staff member of the Texas Medical Board had decided to administratively suspend the license of any physician who had a malpractice claim against them. (Not a judgment or settlement, just a claim.) Although I don’t want to claim credit, the complaints of the Air Force medical system MAY have helped to get this idiotic policy quickly revoked.

Is there a causal relationship between the case I mention and the case that is the focus of your blog entry? Probably not. BUT, it is not unheard of for an organization to overreact in the opposite direction when it gets its hand slapped for misconduct.

    —–
The trial lawyers will use this story to advocate for the rejection/removal of tort reform policies. (In fairness, both sides often use sensational stories in the place of reasoned debate.) However, this is also an opportunity to argue for an arbitration-like system to handle allegations of medical malpractice. Slam those who are truly negligent, and have criminal penalties for those who attempt to defraud the system. If we are going to have the ‘bureaucratization’ of medicine, we might as well extend it to all aspects.

    • Matt says:

      This is not an argument for an arbitration-like system. It’s an argument for not letting the foxes guard the henhouse, because they won’t do a good job. And more importantly, it’s an argument that an arbitrary cap on your lost quality of life set by lobbyists, irrespective of the facts of the case, makes no sense.

      • SteveM says:

        I will agree with the “foxes guarding the henhouse”, provided you will accept that it is far worse in the legal profession. Law is the only profession that is entirely self regulating. In every state, the medical board has at least a few ‘lay’ members. In the legal profession, the state supreme court – which handles all disciplinary action – has no ‘lay’ members.

        I will agree to strengthen the powers of the medical boards, if you will agree to create an independent structure in each state to regulate the legal profession.

      • Matt says:

        I wouldn’t agree with you, as I know several lawyers who specialize in pursuing malpractice claims against other lawyers, and we aren’t out there lobbying for “tort reform” to eliminate our liability.

        The State Supreme Court is elected in most states. I’m not for strengthening or weakening the medical boards – that’s an internal debate for the profession. I’m just not for reducing the ability to seek redress before a jury.

      • WhiteCoat says:

        I’m going to say that I think a cap of $250,000 for noneconomic damages is unfair. However, arbitrary caps are set all of the time. There are caps on insurance payouts. There are caps on how fast you can drive. There are caps on what you can write off on your taxes. There are caps on how much emissions a factory can create. Why is it that the only arbitrary caps that make no sense to you are the ones that affect your income, counselor?

        Your assertion that “foxes” who guard “henhouses” won’t do a good job is disingenuous. Using that logic, you are implicitly stating that the whole judicial system is a farce. There is self-dealt immunity for any decision a judge or prosecutor makes in their line of work. Those lawyers don’t have to “lobby for ‘tort reform'” to eliminate their liability, they’ve already created complete immunity for themselves. Juries don’t get to decide whether a judge’s poor ruling or a prosecutor’s unsubstantiated decision to charge someone with a crime had caused inappropriate harm to a litigant.
        If you want to expand the ability to seek redress before a jury, then do you agree that legal immunity for judges and prosecutors (and elected officials for that matter) should be abolished?

        And the determination whether to strengthen or weaken medical boards is another example of foxes guarding the henhouse – but you’re apparently OK with that foxy administrative setup.
        Your logic is what doesn’t make sense to me.

      • Matt says:

        The only payouts on insurance caps I’m aware of are tort reform for physicians. I’m not sure what you’re talking about. Your other examples aren’t really analogous to deciding the value of one’s case, and don’t have Constitutional implications, nor were they relevant to our country’s founding.

        Self dealt immunity? I’m not sure where you picked that phrase up. Or really what you mean by it. The jury system is the ultimate check on the prosecutor – he/she merely brings the case, they don’t decide the outcome. They are also subject to the people at the ballot box, same as judges. As far as the “immunity” of judges, I’m not sure what you mean. A judge is like a referee, and is subject to review by appellate courts. What elected official immunity are you talking about? I really don’t see what you’re getting at.

        I don’t care if you “strengthen” or “weaken” medical boards. That’s an inter-profession issue.

  4. Rick says:

This case and the fake orthopaedic surgeon in NY are both related to how doctors are rated and ranked. The amount of time spent reviewing charts for dates and signatures is staggering, yet no one bothers to critique my exams, H&Ps, or surgeries. For ranking doctors, I would move away from asking other doctors (Dr. Hodad) and patients and move toward asking RNs, scrub techs, and anesthesiologists. Those are the people who see it all.

  5. V says:

    As a patient, where should I be looking for information on a doctor before I choose one? Review sites like healthgrades.com at least give me *some* idea what other patients say about a doctor, and cover things which aren’t listed anywhere else – is it impossible to get an emergency appointment at the practice? Will appointments always run late? Will the doctor listen to my questions or concerns?

    • WhiteCoat says:

      If those qualities are important to you, then use them. Just realize that they are a poor measure of a physician’s quality.
      A rusted out Kia with a busted muffler is inexpensive, gets good gas mileage, and will cost you next to nothing for insurance. Would you call that a “good” or a “quality” car? Would you buy one?

    • JJ says:

The problem is I typically see 30-40 patients a day. I have been in this town for 6 years. I currently have a grand total of 2 responses on Healthgrades. Both indicate I rock. Can you really take anything away from what 2, 10, or 50 people say about a provider when the denominator is so large? (See WC’s post about Press Ganey.) The people who post on these sites are either very happy or very unhappy, typically the latter. Point being, should you rule in or out a provider based on online grading?

  6. Seth Trueger says:

    This is a sad example of how normal market principles often don’t work in healthcare: how are his patients supposed to know that they are getting terrible care? Because his wait-times are average? By the time a terrible doc does enough harm for the market to respond, many patients’ lives have been ruined.

    • WhiteCoat says:

      Exactly my point.
      The public is being led to believe that “good” doctors and “good” hospitals are “good” because their staff is friendly and it is easy to get appointments.
      By using inaccurate metrics to tell everyone how “good” a provider’s health care is, these companies are misleading the public.
      This case is a concrete example of how the public is being misled.

      • Matt says:

        Well where are other metrics?

      • Jerry says:

        In some cases, figuring out which metrics should be used is relatively easy (e.g. procedure volume and complication rates). But, where will we find the data for such metrics? I think the lack of response to Matt’s question is enlightening, and I’ve outlined a larger response here.

      • WhiteCoat says:

Matt, again, you’re illustrating my point. You’re assuming that the metrics exist. They don’t. There aren’t other metrics to reliably measure quality in medicine.
        Use whatever yardstick you want. I can show you why it won’t work. It’s like trying to take a temperature with a bowling ball.
        If people want to believe that fast appointments and friendly office staff equate with good quality medical care, then that’s their choice. Some of them, like many of Dr. Duntsch’s patients, will learn the hard way that the measures are unrelated.
        Go back to the example of an automobile. What measures of “quality” would you use? Are those measures reliably present in high quality cars and reliably absent in low quality cars?

        Jerry, you assume that procedure volume and complication rates are “easy” metrics.
        A couple of examples:
        Several years ago I was part of a panel involved in a hearing regarding whether a doctor should lose his hospital privileges. He performed a specialized surgery more than any other physician in the city. But he had several patient deaths and multiple postoperative complications. He ended up being removed from the hospital staff and another hospital where he remained on staff wouldn’t let him perform the surgery without supervision. But he performed the procedures more than anyone in the area. Procedure volume doesn’t necessarily equal quality.
Complication rates depend in large part on the patient. Each patient is different. If you say that complication rates are going to be a quality measure, first you have to define the complications – because there are a large number of them. Which complications are associated with quality and which aren’t?
        A surgeon with a large number of complications may raise red flags (a la Dr. Duntsch), but a surgeon with few complications is not necessarily practicing high quality medicine.
        In addition, you have to realize the unintended consequences of associating complications with “lower” quality. Look at the Healthcare Update from Wednesday where the cardiologists in Massachusetts stopped performing procedures on sick patients to avoid having higher death rates. Want to replicate that on a national level?
Finally, your post mentions access to the NPDB – as if the number of lawsuit payouts equates to quality? Probably about as much as friendly office staff does. I know many doctors who haven’t been sued to whom I wouldn’t bring a family member. The NPDB doesn’t track suits filed, either – only payouts. Given that over 200,000 medical professionals are listed in the NPDB, should we just say that most practitioners in the US suck?
        I’ve been sued four times but I’m not in the databank. Does that mean I’m a good doctor or a bad doctor? How about a doctor who’s been sued once and lost?

        You can try to come up with reliable metrics as much as you want. Government has been mulling this issue for decades and still can’t come up with them. Satisfaction companies use their easily-measured metrics as a proxy to make money. Press Ganey makes hundreds of millions of dollars per year. Drink their Kool-Aid at your own risk.

      • Jerry says:

        Thanks for your response (and not sure why I can’t respond directly to the September 20 comment).

        I’m not arguing that any metric is perfect (nor do I think anyone else is); I do think, however, that perfection should not be the enemy of progress. With any of these metrics, we should ask whether releasing information on that metric is likely to help patients make better decisions than if they don’t have that information. If so, the onus should be on explaining why that information shouldn’t be released, rather than the opposite.

        Does having twenty years of experience of extensively performing a particular type of surgery with a clean record absolutely guarantee that the next case will succeed without complication? Nope. Does a ten-year track record with many complications and deaths necessarily mean that the next case will be botched? Nope. But not knowing anything else about the two surgeons, it’s hard to see why anyone would prefer the second one. We have to make decisions without perfect information all the time (“Will I thrive if I switch to this new job?” or “Will this stock appreciate?”), but that doesn’t mean we should throw out all of the imperfect information that is available.

        My earlier post was to point out that patients know that doctors don’t perform equally and that they are on the lookout for information that can help them decide; by abdicating leadership in creating meaningful metrics, the medical community has created a vacuum where even less important signals (like fast appointments and friendly staff) are amplified. If the medical community is irked by the reliance on patient satisfaction scores, it should formulate some metrics (as imperfect as they are) and encourage dissemination of information along those lines. So, rather than just reprimanding the patient when s/he goes to drink the “Kool-Aid,” offer something more wholesome.

      • medicalquack says:

Hot off the Medical Quack blog: the new ONC interim leader can still be found on Healthgrades accepting new patients, with a full list of insurers – go get an appointment :) This site and Vitals need to go. I wonder how long Athena Health is going to let Vitals hang around; they funded its start-up.

        http://ducknetweb.blogspot.com/2013/09/jacob-reider-to-lead-onc-until-new.html

      • WhiteCoat says:

        Jerry –

        “With any of these metrics, we should ask whether releasing information on that metric is likely to help patients make better decisions than if they don’t have that information.”

        I agree. And my point with this post is that the metrics being used are unrelated to quality of care. If patients want to choose a doctor based on unverified and potentially inaccurate statistics about a physician’s office staff, that’s their prerogative. But a four star rating on Healthgrades is no more associated with a physician’s quality of care than is the average daily temperature in the state where the physician practices. It’s like saying a specific brand of car must be a quality car because the salesman was pleasant. You want to make those leaps in logic, go ahead. I’m simply providing evidence that the logic is faulty.

        “But not knowing anything else about the two surgeons, it’s hard to see why anyone would prefer the second one.”

        Possibly true. But if the second surgeon has nice office staff on Healthgrades.com, then more patients may go to him.
        And I agree that complication rates may be a little better at predicting “quality” than the timeliness of appointments. However, I can guarantee that if complication rates of physicians start being tracked as a means to judge “quality,” that physicians will take whatever measures are necessary to minimize their chances of having complications … such as overtesting and referring high-risk patients somewhere else.
        Look at the tremendous amount of “defensive medicine” and lengths to which doctors will go in order to avoid malpractice suits and reports to the NPDB.
        So if “complications” are associated with “quality,” then many doctors would transfer a large majority of the very sick patients who are statistically more likely to suffer complications to tertiary care centers. Then the “quality” numbers for the surgeons at tertiary care centers drop – in large part because they are treating sicker patients. Then will it be better to see the doctor who has no experience with high-risk patients and who therefore has no “complications” or better to see the doctor who operates daily on complicated patients but has several “complications”? And if we’re going to measure “complications,” what “complications” should be measured?

        “That doesn’t mean we should throw out all of the imperfect information that is available.”

        Not saying that everyone should. Only that patients need to know that the information is entirely unrelated to quality.
        By your logic, should Healthgrades therefore start publishing the average temperatures of the states in which doctors practice as a measure of quality as well? It’s not a perfect measure, but there may be higher quality physicians in warmer states, you know.
        And if you respond that ambient temperature is not related to quality health care, you’re making my point for me.

        “by abdicating leadership in creating meaningful metrics, the medical community has created a vacuum”

        You have created a straw man argument. Your logical fallacy is the assumption that “meaningful metrics” for quality medical care exist and can be reliably measured. My assertion is that “meaningful metrics” do not exist and the metrics being used are unreliable. I’ve shown you how every example of a metric you have come up with won’t measure what you think it will measure.
        How can the medical community “abdicate leadership” for failing to create something that can’t be created? That’s like faulting you for failing to create metrics to measure a high quality web site and failing to offer a more “wholesome” alternative when called out on the topic.
        It would be quite easy for me to create a rating system containing a bunch of “imperfect metrics” showing how your web site design is of lower quality when compared to other web sites and asserting that therefore no one should want to use your web site … but that wouldn’t be fair to you.
        See my point?

      • Jerry says:

        At a very high level, it sounds like we agree that if there exists information that can meaningfully help patients decide among doctors, that information should be made available. Our disagreements seem to boil down to two points: 1) I believe that patient ratings have at least limited value in helping people select a medical provider, and you believe that patient ratings have no value whatsoever and 2) I believe that there are metrics that are even better at helping patients decide, and you believe that such meaningful metrics do not exist and even if they do, they would be unreliable (e.g. providers would game the system). Is that a fair summary of our discussion so far?

On the first point, my first response is that there was actually a study showing correlation between Yelp ratings and hospitals’ HCAHPS ratings. That suggests that the overall ratings patients give have some correlation with the quality metrics, even though that overall rating might include ratings on components like parking and cleanliness. (And while mortality rates and readmission rates might not be perfect indicators of quality, they do mean something.) It seems likely that in aggregate, this correlation holds to some extent for ratings on individual physicians when enough ratings are available. My second response is that if you are correct on the second point (that no meaningful metrics exist), then patient reviews are still helpful on the peripheral aspects of care. Let’s pretend for the moment that there’s absolutely no way to predict outcomes. In that case, peripheral aspects of care (such as timeliness or friendliness) are what’s left. If I’m going to have a random outcome, I might as well pick the one that is punctual and friendly. Another way of phrasing this is that there are elements of receiving care that are somewhat meaningful to patients and are captured by patient reviews, even if they aren’t related to outcomes.

        On the second point, there is evidence that, for example, procedure volume is correlated with outcomes. Since I care about outcomes, I would consider procedure volume to be a meaningful metric even if “Procedure volume doesn’t necessarily equal quality.” A lot of people even within the medical community agree that proxies for quality can be measured and are useful, even if they are not perfect — so they would disagree with your assertion that such metrics can’t be created. For example, the National Quality Forum lists a large number of medical associations as members. Likewise, NCQA boasts a number of doctors as directors. That doctors would try to game the system is a separate question. I agree that some doctors would try, some won’t, and we would probably disagree on the percentages. I would repeat the point, though, that it’s important not to let perfection be the enemy of progress: as these practices come to light, the criteria can be refined and improved. As an analogy, we don’t say that medicine shouldn’t be practiced at all just because it can’t solve all cases.

        As for your specific questions (not in order):

        “By your logic, should Healthgrades therefore start publishing the average temperatures of the states in which doctors practice as a measure of quality as well?”

        If average temperatures were found to have some bearing on outcomes and patients could choose treatment across state lines, sure. I haven’t seen any indication that there are higher quality physicians in warmer states — have you? (Whereas the Bardach study, for example, does indicate some correlation for patient reviews.)

        “Then will it be better to see the doctor who has no experience with high-risk patients and who therefore has no ‘complications’ or better to see the doctor who operates daily on complicated patients but has several ‘complications’?”

        Why not let consumers see this data and let them decide? It’s not as if we expect the government or the car industry to tell consumers that they should buy model X versus model Y; rather, data is made available on mileage, price, engine size, and many other metrics and consumers can decide how to use that data.

        “And if we’re going to measure ‘complications,’ what ‘complications’ should be measured?”

        The medical community is in a much better position to decide this than laymen. The general public will be appreciative of sincere efforts from the medical community. If the medical community doesn’t decide what complications should be measured, or doesn’t actually measure and disseminate the data (i.e. if they “abdicate leadership” in this area), patients will turn to less useful metrics like patient reviews.

        “It would be quite easy for me to create a rating system containing a bunch of ‘imperfect metrics’ showing how your web site design is of lower quality when compared to other web sites … See my point?”

        The situation that you’ve laid out doesn’t correspond well with what doctor rating sites do. I think we all understand why the target would be irritated if someone decided to pick on a specific individual or company; it’s really hard to believe that a doctor rating site decided that they want to denigrate Dr. Smith specifically, but not Dr. Jones. A better analogy would be someone coming up with a list of metrics and saying that sites X, Y, and Z are good, while A, B, and C are not. That seems fine. And, if the site got tons of people off the internet to vote and displayed the results of that vote, even better. I’d welcome that, even if the site that I’m working on got a low rank — that sort of data could actually help guide the refinement of the site (some companies even pay for that sort of data in the form of user testing). I’d be supportive of thoughtful website metrics even though I believe that website design is far more subjective than the practice of medicine — quality metrics within the context of medicine would be that much more valuable.

      • WhiteCoat says:

        “At a very high level, it sounds like we agree that if there exists information that can meaningfully help patients decide among doctors, that information should be made available.”
        Agreed. Information can’t be misrepresented, though. If you want a doctor who has the cleanest office or the friendliest office staff, those are the metrics that should be searched and graded. Mixing those random variables to conclude that one doctor is better “quality” than another is misleading, unethical, dangerous, and likely libelous.

        “I believe that patient ratings have at least limited value in helping people select a medical provider, and you believe that patient ratings have no value whatsoever”
        No, I believe that they have no value in determining “quality”. As I have maintained all along, if patients want to choose a doctor because that doctor has the shortest wait for an appointment, that’s fine. Rate that metric. Grade that metric. Let people search for that metric. But don’t make the false leap in logic to somehow say that a doctor who will give you a same-day appointment is somehow of better quality than a doctor who has a two-month wait list. Again, misleading, unethical, potentially dangerous, and likely libelous. Aggregating ratings for appointment times, office staff hospitality, doctor hair color, diet preference, and any other metric that sounds catchy into a single score for “quality” only compounds the problem.

        “I believe that there are metrics that are even better at helping patients decide, and you believe that such meaningful metrics do not exist and even if they do, they would be unreliable”
        There are several variables involved in these ratings: 1. what can be measured, 2. whether it can be measured reliably, 3. how those measurements affect outcomes, and 4. whether the data are pertinent to the information being sought. I’m sure there are others, but let’s focus on these four for now.
        1. “Quality” as a pure concept cannot be measured. Ever. There is no secret formula to measure “quality” because there is no agreement on a definition for “quality.” This is an a priori fact. Therefore, any argument based upon measuring “quality” in healthcare necessarily fails.
        2. We know that “quality” can’t be measured reliably and can’t be defined. So ratings sites and “satisfaction” firms create a second logical fallacy by alleging that not only can “quality” be measured, but that certain metrics are accurate indicators of “quality.” I assert that neither of those proposals is true. But let’s assume that both proposals are true for the sake of argument.
        So those sites and firms create a bunch of tangentially related metrics such as wait times and office staff friendliness and use them as substitutes for “quality.” But even if we assume that those metrics are a good measure of “quality,” the measures cannot be reliably measured and are not “standardized.” Everyone knows that a “meter” is 100 cm and exactly what distance that covers. The definition of “cleanliness” and “friendliness” differs between each rater. I crack jokes when I see patients. Some patients think I’m a jerk for doing so and have written the same on my low patient satisfaction scores, while most patients enjoy the banter. Different patients have different concepts of “friendliness”. Perhaps a person who is always happy and smiling would be regarded as being “fake” and graded lower. How should I be marked for “friendliness”? Should those marks be used to gauge my “quality”? Do old magazines in a waiting room equate to “cleanliness”? Some people may believe so and mark a sterile office lower in “cleanliness” because that is the closest measurement to something important to the rater – up-to-date magazines. If some people thought a “yard” was 1 foot, some people thought it was 2 feet, and some thought it was 10 feet, a yardstick would be a useless tool. The same logic applies to satisfaction scores.
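        The yardstick point can be made concrete with a toy simulation (my own illustration, with made-up numbers, not anything from a real ratings site): two pools of raters observe the exact same objective behavior, but each rater maps it to stars through a personal bias and scale. The resulting averages diverge even though nothing about the doctor differs.

```python
# Toy illustration: the same objective behavior, rated through
# different personal "yardsticks," yields different average scores.

def rate(true_behavior, rater_bias, rater_scale):
    """Map an objective behavior score (0-10) to a 1-5 star rating,
    filtered through one rater's personal bias and scale."""
    raw = (true_behavior + rater_bias) * rater_scale
    return max(1, min(5, round(raw)))

# One doctor, one fixed objective level of friendliness.
doctor_behavior = 7.0

# Two pools of raters with different internal yardsticks:
# (bias, scale) pairs -- harsh/compressed vs. generous/expansive.
pool_a = [(-2.0, 0.5), (0.0, 0.6), (1.0, 0.4)]
pool_b = [(2.0, 0.7), (3.0, 0.5), (1.0, 0.8)]

avg_a = sum(rate(doctor_behavior, b, s) for b, s in pool_a) / len(pool_a)
avg_b = sum(rate(doctor_behavior, b, s) for b, s in pool_b) / len(pool_b)

print(avg_a, avg_b)  # identical behavior, very different averages
```

        With these made-up rater parameters, the identical behavior averages out to 3.0 stars from one pool and 5.0 stars from the other: the “measurement” reflects the raters’ yardsticks at least as much as the doctor.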
        Ulterior motives that lead upset patients to grade providers low in every category are yet another way in which the ratings themselves are unreliable on their face — there’s no quality control.
        3. I challenge you to show me how “friendliness” and office cleanliness affect patient outcomes. No reliable data on that. Never has been, never will be. So if there is no relationship between the metrics and the outcome, remind me exactly why we are going to great lengths to collect data on the metrics.
        I’ll tell you why: Money. That’s right. Those paying the bills can withhold payments to providers who are “low performing” … on unreliable metrics … that have no standardized definition … that have little relationship to the job they are supposed to be performing. HCAHPS participation was mandated by the Deficit Reduction Act, not the Healthcare Quality Improvement Act.
        4. Finally, we get to the crux of the issue. When potential patients see a “five star” rating next to a physician’s name, what does that rating mean to the potential patients? I assert that the patient believes that a “five star” rating means that the physician is a “quality” physician based on that rating. I can guarantee that the potential patient does NOT think “Oh, OK. That physician has the friendliest office staff and the cleanest office. He may very well be a [poor clinician/untrained imposter/serial pedophile] because I’m fully aware that those type of qualities aren’t included in this five star rating. I’m going to make an appointment with HIM.”
        Office cleanliness does not equal quality. If you want to create a summary star rating for “office cleanliness”, I’m perfectly OK with that. Lumping unrelated metrics into a single star rating of “quality” and misleading the public is absolutely not OK. I’m sure that is precisely the reason that Healthgrades.com removed Dr. Duntsch’s profile after news of Dr. Duntsch’s malpractice was made public. Healthgrades.com knew that the ratings weren’t accurate and was trying to hide the evidence of its own culpability for misleading the public. If I were a consumer who used Healthgrades’ ratings to choose Dr. Duntsch and was later injured by one of his botched surgeries, I would absolutely name Healthgrades.com in the lawsuit for misleading me. Strong state consumer fraud case – plus an attorney’s fee provision if you win.

        I have a lot of other work to finish today, but wanted to respond to a couple of your other statements.

        My question: “By your logic, should Healthgrades therefore start publishing the average temperatures of the states in which doctors practice as a measure of quality as well?”
        Your response: If average temperatures were found to have some bearing on outcomes and patients could choose treatment across state lines, sure. I haven’t seen any indication that there are higher quality physicians in warmer states — have you?
        You’re making my point for me. Substitute the term “office cleanliness” for “average temperatures” in your sentence above. Show me data where office friendliness or office cleanliness have any bearing on *outcomes*. Why should some unrelated data be counted towards “quality” but not other unrelated data?

        My question: “Then will it be better to see the doctor who has no experience with high-risk patients and who therefore has no ‘complications’ or better to see the doctor who operates daily on complicated patients but has several ‘complications’?”
        Your response: Why not let consumers see this data and let them decide?
        Agree to a point. If consumers want to see how many surgeries a surgeon has performed or how many patients a physician has evaluated, that’s fine. But equating those numbers to “quality” is inappropriate and potentially dangerous. Your comparison of physician ratings to automobile ratings is misleading. Mileage, price, and engine size are all easily ascertainable, have a common definition, are generally reliable, and are able to be replicated. None of them by itself is a sufficient substitute for “quality,” but consumers can choose how much weight to give each metric when they decide on the vehicle that is best for them. Contrast those metrics with vague concepts such as “cleanliness, ability to listen, and office staff friendliness.” Using these undefinable and unreliable metrics in your example would be like gauging a car’s quality based on the car’s cleanliness, how the car handles in the snow (without any reference to tires, how much snow, or ambient temperatures), and the quality of the roads upon which the car drives. Now I wonder why these types of metrics aren’t made available to the public so that they can use this type of data to decide on which car to purchase …

        “website design is far more subjective than the practice of medicine”
        I just wanted to highlight this statement and say that I could not disagree more with you on this point.

        I’m enjoying the debate back and forth. I bet that EP Monthly would be interested in publishing a point/counterpoint on this topic. Would you be interested?

      • Jerry Lin says:

        “I’m enjoying the debate back and forth. I bet that EP Monthly would be interested in publishing a point/counterpoint on this topic. Would you be interested?”

        Whew! I was afraid that you might be getting annoyed :) A point/counterpoint would be great — I think we’ve outgrown the comments section. I’ll e-mail your Gmail account to get details on what format would be preferred. Thanks!

  7. MedicalQuack says:

    You folks want some more history on Healthgrades? I have tons of it; just search for it on my blog. I’ve been covering it for about four years now, ever since I found my former MD, who had been dead for 8 years, listed there as still taking in new patients. :) A couple of years ago it even got the attention of the AMA, and we had a great talk about it, as published in AMA News. :)

    For one, they are now owned by a marketing firm…

    http://ducknetweb.blogspot.com/2011/11/healthgrades-to-merge-with-cpm.html

    My original “Dead Doctor” post…

    http://ducknetweb.blogspot.com/2010/09/healthgrades-and-other-md-rating-and.html

    And as of a few months ago, you could still go see Michael Jackson’s doctor, too, who has been gone and filed for bankruptcy a while back…

    http://ducknetweb.blogspot.com/2013/02/flawed-data-with-physician-and-hospital.html

    These sites don’t update, and they blame the medical boards for the errors. But if they are going to put out information like this, they should expand their data mining to keep it current, not rely only on the medical boards as a source. Don’t get ripped off.

    I said back in June of 2012 that it’s time to get rid of these folks; they want money from the ads and services they run on their sites. Think that’s a good idea?

    http://ducknetweb.blogspot.com/2012/06/healthgrades-puts-out-top-hospital.html

  8. The Hamburglar says:

    I can’t speak for Texas, but here in Ohio any licensed clinician who has complaints filed against them can be located on the State Medical Board’s website. However, it won’t give you specifics on disposition, bedside manner, or quality of care. I still believe that must be evaluated on an individual basis. Medicine is not an exact science, so good results can never be guaranteed. If we could guarantee them, you’d never see or hear the word ‘disclaimer’.

  9. Mary Martin says:

    I first looked at the Healthgrades website while looking up my daughter’s oral surgeon. Being very surprised at the comments, I did more research and found this site. My research consisted of asking Google: Can doctors pay to have good grades on Healthgrades? I don’t know if there is anything kinky going on there, but I would not be surprised. It appeared that they were trying to steer people away from her doctor and towards other doctors. He got “below average” on everything. OUR experience with him (removing squamous cell cancer and treatment) was wonderful.

    • Jerry Lin says:

      I doubt that your daughter’s surgeon is getting penalized because he didn’t pay Healthgrades. Nevertheless, you could do other potential patients a favor by leaving a review of that doctor to let others know about your wonderful experience. There are many sites for you to do so (including the one that I work on at DocSpot).

