Six-sigma medicine: Pitfalls and promises in the quest for mistake-free healthcare

No mistakes! That’s the mantra these days for medical care in the United States. Another term for reducing error rates to near zero is “Six Sigma,” a business strategy developed by Motorola in the 1980s and popularized by Jack Welch at General Electric in the mid-1990s. Six Sigma is about removing the causes of errors by minimizing variation in how specific tasks get done. In business-speak, “sigma” is a statistical measure of how much a process varies; the more sigmas of margin a process has, the fewer defects it produces. In a six sigma process, 99.99966% of products are free of defects.
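For the curious, the 99.99966% figure can be derived from the normal distribution. A sketch of the arithmetic, assuming the standard Six Sigma convention that a process centered six standard deviations from its specification limit is allowed a long-term 1.5-sigma drift, so defects are the normal tail beyond 4.5 sigma:

```python
# Sketch of the arithmetic behind "six sigma" quality levels.
# Assumes the conventional 1.5-sigma long-term shift used in
# Six Sigma practice; the defect rate is the one-sided normal
# tail probability beyond (sigma_level - shift).
from math import erfc, sqrt

def defect_rate(sigma_level, shift=1.5):
    """One-sided standard normal tail probability beyond (sigma_level - shift)."""
    z = sigma_level - shift
    return 0.5 * erfc(z / sqrt(2))

rate = defect_rate(6)
print(f"defects per million: {rate * 1e6:.1f}")        # ~3.4
print(f"percent defect-free: {(1 - rate) * 100:.5f}%")  # ~99.99966%
```

Six sigma quality thus corresponds to about 3.4 defects per million opportunities, which is why the phrase has become shorthand for near-perfection.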

“Six sigma medicine” is about reducing error rates by improving patient safety – such as eliminating error-prone systems problems within hospitals – and avoiding really egregious errors, or “never events.” Never events include performing the wrong surgery, operating on the wrong side of a patient, or leaving a surgical instrument inside someone’s body. Improving safety is also about ensuring people don’t get the wrong medicine, an erroneous diagnosis, or a preventable complication. The focus on errors started with the release of the 1999 Institute of Medicine report, To Err is Human1, which proposed the oft-quoted estimate of “44,000 to 98,000 deaths per year” attributable to medical mistakes.

Since then, heaps of money have been spent on improving patient safety and reducing errors. But has it been worth it?

While some progress has been made, it has been hard to show that patients are really any safer. According to a 2010 New England Journal of Medicine article2, patient harms are still very common in U.S. hospitals, and there are no trends toward improvement.

Why can’t healthcare fix itself, despite the huge investment?

Most important is the inherently complex nature of human disease. As emergency physicians, we are trained to recognize common signs and symptoms of a disease, so when an oddball case happens – as it not infrequently does – it is sometimes misdiagnosed. Oddball cases are rare but often have bad outcomes.

The second issue is that the expectation that healthcare providers will never make mistakes is unrealistic. People expect their care to be error-free. A system that strives for perfection can be good. However, when a mistake happens, the system can respond in ways that make care better for some, but worse for others.

Take the recent tragic case of Rory Staunton, a 12-year-old boy treated at the NYU emergency department. Based on news reports, his symptoms seemed like run-of-the-mill gastroenteritis, but tragically he died three days later from sepsis. Many of the details of the case are still under dispute, but his was almost certainly an “oddball” case: to his doctors, his presentation looked entirely routine. One key issue was that his “bandemia” was reported by the hospital lab after he’d left the hospital, and his doctors never knew about it.

The problem comes in how our system responds to cases like this. Instead of searching for and identifying the weak links in the system that, if altered, could have possibly caught and prevented the tragic outcome, the public goes on a witch-hunt.

In the Staunton case, a New York Times article written by a friend of the Staunton family3 was followed by about a week of continuous buzz in the lay media and medical communities. Here, the court of public opinion spoke. Was it that the emergency physicians themselves made an inherently preventable thinking mistake? Or was it, as others argued, just an unpreventable oddball case?

When a media storm hits like this, it can be a game changer. The problem is that the system sometimes responds in draconian, untested ways. Hospital administrators feel compelled to do something, because of the fear that inaction may be viewed as complacency.

To NYU’s credit, according to the Times, their response focused on changing the system rather than rooting out individuals. The hospital changed a policy: now the emergency physicians have to complete a discharge checklist ensuring that all laboratory results and vital signs are considered before the patient leaves the hospital. Also, when abnormal laboratory results occur, the physicians get notified directly, and if the patient has already been discharged, the patient gets a call.

The goal of the policy is to reduce the likelihood of important information being missed. On its face this makes sense, and it may actually prevent someone from being discharged in the future with a concerning lab value or vital sign abnormalities.

But there are a few issues to consider. First is the assumption that Staunton’s physicians would have changed their treatment plan had they known the bandemia result, which is not certain. Second, the new policy could create even more trouble for NYU’s already crowded ED: requiring a discharge checklist for each of the hundreds of patients discharged every day adds a time-consuming step that may worsen crowding.

To be clear, we think that NYU’s fix is a reasonable response to address a systems issue, but the overall effect of fixes like this should be carefully considered. In this case, the fix focuses on making absolutely sure that we find everyone who is ill – six sigma. But it also creates extra work that can take emergency physicians off their game, giving them less time to focus on the sickest patients or manage ED crowding.

A similar tradeoff happens these days for trauma patients. We tend to overdo it on the CT scans to ensure that not one single internal injury is missed. This is good if patients have a serious injury that may have been overlooked, but on the flip side, a lot of people without serious injuries get exposed to tons of CT radiation and costs spiral up for everyone.

So how can we make medical care safer for all, not just some?

First, healthcare needs patient safety solutions that are proven to work. A good example is surgical checklists, which are cheap, easy, and have been touted by well-known doctors Peter Pronovost and Atul Gawande. In a study across 26 countries, checklists were associated with dramatic reductions in mortality and surgical complications4. (NYU’s intervention was a checklist.) Another example is creating a culture of patient safety, which emphasizes reporting near misses to find underlying system problems.

There are also difficult questions our society needs to ask, like “do we really want six sigma medicine?” Six sigma medicine makes our system very expensive. As a simple example, take chest pain symptoms. Even after tests are negative, we admit patients when there’s only a small (1-2%) chance they’re having a heart attack. This approach is driven by a fear of lawyers and all the other consequences to doctors and hospitals for missing subtle cases. It comes at a tremendous cost to everyone, and while it does catch the occasional patient with more serious causes for chest pain, it is not even clear that fewer heart attacks are missed.
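A toy back-of-the-envelope calculation makes the tradeoff concrete. The admission cost and patient count below are illustrative assumptions, not figures from the article; only the 1-2% residual risk comes from the text:

```python
# Toy calculation of the cost of admitting all low-risk chest pain
# patients. All figures except the residual risk are illustrative
# assumptions for the sake of the arithmetic.

patients_admitted = 1000      # hypothetical low-risk chest pain admissions
residual_risk = 0.015         # midpoint of the 1-2% risk cited in the text
cost_per_admission = 4000     # assumed average observation-admission cost, USD

total_cost = patients_admitted * cost_per_admission
expected_mis_caught = patients_admitted * residual_risk
cost_per_mi_caught = total_cost / expected_mis_caught

print(f"Total spent:        ${total_cost:,}")            # $4,000,000
print(f"Expected MIs found: {expected_mis_caught:.0f}")  # 15
print(f"Cost per MI found:  ${cost_per_mi_caught:,.0f}") # $266,667
```

Even with charitable assumptions, each additional heart attack caught costs hundreds of thousands of dollars, which is the kind of tradeoff the six sigma question forces us to confront.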

Maybe a better approach is to encourage the creation of guidelines for dealing with these six sigma medical decisions and not fault physicians when they follow them. Providing doctors safe harbor from malpractice suits for delivering evidence-based care may be very effective in changing medical practice and reducing costs.

Another potential solution is that fact-finding and penalties in cases of medical error could be handled by the medical system, not the legal system. Maybe there should be a fund to compensate victims of medical errors. A U.S. model for this is vaccines: if someone is injured by a vaccine that helps millions of others, there is no lawsuit, but the victim is compensated for their injuries.

When these systems have been piloted in the U.S. for other conditions, like birth injuries, the results were mixed in terms of cost savings. In New Zealand, there is an injury-based compensation system rather than an error-based one, which appears to save money but hasn’t necessarily improved patient safety.

We think six sigma medicine is a laudable goal for the U.S. medical system. It is just important to understand the costs and trade-offs of our quest for perfect medicine.

Jesse M. Pines, MD, MBA is Director of the Center for Health Care Quality; Associate Professor, George Washington University
Zachary Meisel, MD, MPH, MS is an Assistant Professor of Emergency Medicine and the Director of Research Translation and Dissemination Science
Center for Emergency Care Policy Research, Perelman School of Medicine, University of Pennsylvania

Comments   

# Zero-Mistake Medicine -- Unintended Consequences and Unrealistic Expectations
Chris Carpenter 2012-11-19 04:05
Interesting essay by Dr. Pines & Dr. Meisel reviewing the often unspoken unintended consequences of a Six Sigma approach to medicine. I agree that the goal of mistake-free medicine is commendable and worth striving towards, but not without considering the side effects of this approach.

1) Can emergency physicians with no prior knowledge of or relationship with a patient provide 100% accuracy all of the time after just minutes to evaluate a constellation of signs and symptoms? If so, can they do so in an era when policy makers are concurrently calling for an end to "overdiagnosis" (see http://www.preventingoverdiagnosis.net/ and http://pmid.us/22645185)?

2) If truly achievable, who will pay for the exorbitant testing required to reach diagnostic perfection? Already, third-party payers are deeming many ED visits unnecessary and non-reimbursable (http://tinyurl.com/3qweq7r).

3) Is error-free medicine at any cost what the majority of patients desire? Is there a role for truly shared decision making with patients in a manner that shares acceptable uncertainty? A recent review in BMJ on "preference misdiagnosis" suggested that doctors may not truly understand patients' viewpoints regarding diagnostic accuracy (see http://tinyurl.com/c92kwld)!

4) What resources are available to ED physicians to move towards diagnostic perfection? A substantial body of literature suggests that most physicians are not trained to be discriminating or rational about diagnostic test decisions (see http://pmid.us/6734107 or http://pmid.us/22733387). Pines et al. have provided one tool to facilitate "disruptive innovation" and help EPs simultaneously improve efficiency and accuracy with a recent textbook (see http://tinyurl.com/c72uj25). What else is out there?

Stimulating essay!
Reply
# "Disruptive" Self-assessments
Tom Scaletta 2012-11-28 10:35
As influences shift from fee-for-service to capitation, EPs will order more and discharge less. As a result, next-day wellbeing checks are necessary. My "old" model for this (http://tinyurl.com/c7td5o3) is no longer viable due to the inherent HR costs. Thus, I am now perfecting a "disruptive" model for electronic self-assessments.
Reply

