Archive for the ‘Medical Studies’ Category

Stay Away From That Oxygen Stuff – It’ll Kill Ya

Monday, August 2nd, 2010

A recent publication in the Journal of the American Medical Association is right up there with the study on how thrombolytics improve outcomes in patients with hemorrhagic strokes.

Researchers found that patients who were admitted to the intensive care unit after suffering a cardiac arrest were almost twice as likely to die if they had “hyperoxia” – which was defined as a PaO2 of 300 mmHg or more.

Hyperoxia patients died 63% of the time, hypoxia patients (PaO2 < 60 mmHg) died 57% of the time, and normoxia patients (PaO2 between 60 and 300) died 45% of the time.

The common thinking among the docs I know is that more oxygen is better – except in COPD patients.

I don’t have full access to the JAMA article, so I’m not sure what percentage of each group actually walked out of the hospital. It is entirely possible that the patients who survived ended up in persistent vegetative states.

Nevertheless, this study plus the work of Gordon Ewy in advocating “chest compression only” CPR (no mouth-to-mouth) really bring the current “standard of care” for resuscitation of cardiac arrest into question.

Sending Home the LOL who DFO

Thursday, July 29th, 2010

The Journal of the American College of Cardiology presented the ROSE study for triaging patients with syncope in the emergency department. No, ROSE isn’t some LOL that the study was named after. ROSE is an acronym standing for “Risk Stratification of Syncope in the Emergency Department.” They just left out a few letters because an acronym of “RSOSITED” just isn’t quite as catchy. Maybe SOS-ED would have been cooler, but ROSE it is.

Anyway, the study looked at what factors were likely to be present in patients who passed out and who had a “serious outcome” or death in the following month. Serious outcomes or death occurred in 7% of all patients who passed out in this study. They found that positive fecal occult blood, low hemoglobin levels, low oxygen saturation, and Q waves on the EKG were all predictive of worse prognosis for patients with syncope.

In addition, a BNP (brain natriuretic peptide) level greater than 300 was present in 36% of syncopal patients who later suffered serious cardiovascular events and in 89% of syncopal patients who later died.

More than 98% of patients who had none of these risk factors had no serious outcome or death in the month following their syncopal event.
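The risk factors above can be sketched as a simple screening function. This is a hypothetical illustration using only the factors named in this post – the published ROSE rule contains additional criteria, and the hemoglobin and oxygen-saturation cutoffs below are assumptions, not the study’s actual thresholds.

```python
# Hypothetical sketch of the risk factors named in the post -- NOT the
# full published ROSE rule. The hemoglobin and O2 saturation cutoffs
# below are assumed for illustration only.
def rose_flags(bnp_pg_ml, hemoglobin_g_dl, o2_sat_pct,
               fecal_occult_blood_positive, q_waves_on_ekg):
    """Return the list of high-risk features present in a syncope patient."""
    flags = []
    if bnp_pg_ml > 300:
        flags.append("BNP > 300")
    if fecal_occult_blood_positive:
        flags.append("positive fecal occult blood")
    if hemoglobin_g_dl < 9.0:      # assumed cutoff for "low hemoglobin"
        flags.append("low hemoglobin")
    if o2_sat_pct < 94:            # assumed cutoff for "low oxygen saturation"
        flags.append("low oxygen saturation")
    if q_waves_on_ekg:
        flags.append("Q waves on EKG")
    return flags

# A patient with no flags falls into the group in which more than 98%
# had no serious outcome or death in the following month.
print(rose_flags(120, 14.2, 98, False, False))  # prints []
```

A real decision rule would of course use the study’s validated cutoffs and weights; this just shows the shape of the triage logic.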

So check the BNP on syncope patients and get out those rubber gloves, ladies and gents. Add syncope to the list of patient complaints for which rectal exams may be indicated.

After all this, if you’re still wondering what “LOL who DFO” means, then you have to read this religious post.

Regulating Radiation

Monday, June 28th, 2010

A recent article in the New England Journal of Medicine touches off another salvo about how nonclinicians have no problems judging the abilities of clinicians in the world of medicine.

The article begins by presenting the case of a woman who awoke with facial paralysis and went to the emergency department. On arrival, she received CT and MRI scans of her brain. When those were normal, she was diagnosed with Bell’s palsy. Two weeks later, she developed hair loss and other symptoms, and it was found that during her first ED visit, the radiology department had mistakenly exposed her to 100 times the normal dose of radiation for a brain CT scan. She now has a federal class action suit pending against the CT scanner manufacturer and a medical malpractice lawsuit pending against the treating physicians.

It appears that the case cited may be this one. More information here.

The author of this article then uses her own calculations to conclude that the “risk of cancer from a single CT scan could be as high as 1 in 80 — unacceptably high.” The study she cites shows that radiation doses for the same tests vary – as they should. Giving the same dose of radiation to a 90-pound grandma and a 500-pound grandson would result in at least one uninterpretable study. Based on the four hospitals studied, the authors disputed the commonly cited 1-in-2,000 risk of cancer from a CT scan and stated that the risk to a 20-year-old woman from a single chest CT or a single multiphase abdomen and pelvis CT could be as high as 1 in 80.

The paper advocates lessening and standardizing radiation doses for examinations, noting that the improved image quality obtained with higher radiation doses often yields no change in clinical outcomes. The paper claims that diagnostic accuracy would not be affected if the radiation dose were reduced by 50%.

The paper also suggests tracking a patient’s radiation dose over time and including that measurement in the medical record. Great idea, but how is the radiation-dose record of someone living in California going to help me when she shows up in my town on vacation with a potential cervical spine injury?

Finally, the author suggests that we need to reduce the number of CT scans being performed. Each year, 10% of the population receives a CT scan; 75 million scans are performed annually, with the rate growing more than 10% per year. At the heart of the increased number of scans is “increasing ownership of machines by nonradiologists” and the resulting “self-referral,” which increases the use of the scanners. Those bastard non-radiologists. Only radiologists should be able to self-refer and get away with it.

In general, I think that Dr. Smith-Bindman is on point with her suggestions. It would be great if patients’ total radiation doses could be tracked throughout their lives. However, assuming that could happen, would a high cumulative dose make any difference in determining whether an 80-year-old lady with abdominal pain got a CT scan? How about in determining whether a hypotensive, unconscious 50-year-old trauma victim should undergo CT scanning? What about deciding whether an obese 30-year-old complaining of severe difficulty breathing should get a chest CT to rule out a pulmonary embolism?

Can we reduce radiation doses at the cost of less clear scans? That’s a radiologist’s call. Is Dr. Smith-Bindman following her own suggestions? If we missed a small nodule that later became metastatic cancer, would “at least the patient didn’t get as much radiation” be a sufficient defense in a medical malpractice trial?

The suggestions are good, but they don’t apply to clinical practice.

In addition, while the FDA does regulate “radiation-emitting electronic products” including diagnostic x-ray equipment, telling patients how many diagnostic radiographic studies they may “safely” obtain is likely an area of mission creep for the agency – akin to regulating how many hours of television people may watch in a day (yes, television receivers emit radiation) or how many hours George Hamilton may spend under his radiation-emitting sunlamp. I’m not so sure that having the FDA limit the number of scans a patient can receive is a good thing.

I also take issue with Dr. Smith-Bindman’s statistics that demonstrate greater than a 1% incidence of developing cancer from a single CT scan. If 1 in 80 patients can get cancer from a single CT scan and almost 80 million CT scans are performed every year, each year we are causing close to 1 million cases of cancer in US citizens. According to the American Cancer Society, it is estimated that 1.5 million total cases of cancer will be diagnosed in the US in 2010. Is our use of CT scans really causing more than half the cases of cancer in the US each year? Even if we cut the incidence in half, causing 500,000 new cases of cancer each year is a hard allegation to substantiate.
While the number of CT scans is allegedly increasing at 10% per year, the number of new cancer cases in the US was 1.22 million in 2000 and an estimated 1.53 million in 2010 – hardly a 10% annual increase over 10 years. Those increases in new cancer cases also paralleled an increase in population – from 281 million in 2000 to 305 million in 2009. On a per capita basis, the incidence of newly diagnosed cancer went from 4.3 per thousand to 5 per thousand over those nine years.
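A quick back-of-the-envelope check of the arithmetic in the two paragraphs above, using only the figures quoted in this post (the scan count is the article’s, the case counts are the American Cancer Society’s):

```python
# Sanity-check the cancer-incidence arithmetic using the figures quoted above.
scans_per_year = 75e6            # CT scans performed annually (per the article)
risk_per_scan = 1 / 80           # the author's worst-case cancer risk estimate

implied_cancers = scans_per_year * risk_per_scan
print(f"Implied CT-caused cancers per year: {implied_cancers:,.0f}")  # 937,500

# Compare to total new US cancer cases estimated by the ACS for 2010.
total_new_cases_2010 = 1.53e6
print(f"Share of all new cancers: {implied_cancers / total_new_cases_2010:.0%}")  # 61%

# Per-capita incidence of newly diagnosed cancer, 2000 vs. 2009.
for year, cases, population in [(2000, 1.22e6, 281e6), (2009, 1.53e6, 305e6)]:
    print(f"{year}: {1000 * cases / population:.1f} new cases per 1,000 people")
```

The 1-in-80 estimate really does imply that CT scans cause more than half of all new US cancer cases each year, which is the implausibility the post is pointing at.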

Dr. Bruce Hillman wrote an accompanying article citing how “an unknown but substantial fraction of imaging examinations are unnecessary and do not positively contribute to patient care.” Among the causes of unnecessary diagnostic imaging, he cites patients who “pressure their physicians to refer them for imaging studies even when imaging is unlikely to provide any value,” defensive medicine, self-referral, and medical training programs that ingrain “shotgun” diagnostic testing to confirm diagnoses with the “greatest possible certainty.” He also acknowledges that radiologists share in the blame for fueling the explosion in diagnostic imaging.
His ideas for changing the system are much more realistic and include tort reform, better physician and medical student education, engagement of radiologists as consultants, and “a change in mindset among physicians.”
I agree with Dr. Hillman on every point except the last one. Physicians have a “mindset” that is created by attorneys and by the public. In most cases, physicians are expected to be perfect or to exhaust all possible testing in finding a diagnosis (I still haven’t had one person who disagrees with me on this point present me with a diagnosis that it is OK to miss). If a diagnosis is missed, the lack of “appropriate” testing that would surely have made the diagnosis is a central theme in the physician’s malpractice trial.

We don’t need to change the physician’s mindset. We need to change the public’s mindset. If less testing is performed, more people will have diseases that won’t be diagnosed … or that won’t be diagnosed early enough. That is an inevitable result of reducing the use of diagnostic radiologic testing. Will the public and the juries sitting in medical malpractice trials accept this fact? Can we say that a 95% possibility you don’t have a deadly disease or severe injury is “good enough” and non-actionable? Until society makes the commitment to lower the bars over which clinical physicians must jump, the incidence of diagnostic imaging – and all the radiation that accompanies it – will go up and not down.

There were many news articles published about this study, including USA Today and Forbes.com.

I didn’t see any clinical physicians interviewed in these articles – only radiologists. That should be the first clue that something is amiss: news articles with nonclinicians commenting on how clinicians should do their jobs. How confident would we be if USA Today encouraged readers to pick up next week’s paper, in which chemical engineers will comment on how mechanical engineers should stress-test products more thoroughly? Hey – they’re both engineers, right?

The money quote from Dr. Hillman in a Reuters article really irked me, though:
“We need to convince physicians that a quest for certainty is impossible, costly and is harmful because of indirect diagnoses.”

If radiologists are so certain that diagnostic imaging doesn’t need to be done, then cancel the test. That’s right. You think a test is a prospective waste of radiation? Refuse to perform it. Right now, you’re talking the talk, but you’re doing a face plant on the concrete when you try to walk the walk.

How about this: When the dumb ER doc orders the next total body scan, walk over to the emergency department, examine the patient, and come up with your own diagnosis without using your deadly CT scanner. Get rid of your hindsight bias and make a prospective diagnosis without having the benefit of a “normal” diagnostic test sitting on the computer screen in front of you.

Isn’t as easy as your little news sound bites make it seem, is it?

Want to regulate something to really stop the flow of radiation into patients? How about having the American College of Radiology regulate the number of diagnostic radiology reports from its members that contain phrases such as “cannot rule out underlying lesion, recommend CT scan for comparison,” followed by “CT scan non-diagnostic, recommend bone scan for further clarification.” We could also cut costs if radiologists stopped recommending “MRI for clinical correlation.”

I’m betting that we won’t see too many sound bites about the implications from this radiology report lingo hitting the headlines any time soon.

Will Insurance Deny Payment if You Leave AMA?

Tuesday, May 11th, 2010

Fifty-seven percent of all health care providers (and probably just as many patients) believe that if you leave the hospital or the emergency department against medical advice, insurance companies will not pay for the visit. Half of the doctors surveyed have told or would tell patients that insurance would not pay the bill if they left AMA.

With 1 in 70 of all discharges in the US being against medical advice, such a policy would have a significant effect on finances for both patients and hospitals (if patients are unable to pay for denied coverage).

Enter a study in last month’s Annals of Emergency Medicine titled “Insurance Companies Refusing Payment for Patients Who Leave the Emergency Department Against Medical Advice Is a Myth.”

Several researchers reviewed 104 AMA discharges in a suburban hospital emergency department and queried 19 insurance companies including HMOs, PPOs, Medicare, Medicaid, and worker’s compensation.

Out of 104 AMA discharges, each and every visit was fully reimbursed by the insurance companies.

Now that the cat is out of the bag, will insurers change their tunes?

May not be a bad idea to find out what your policy covers before you have to make a decision to leave AMA.

Add Another Thing to the List

Tuesday, March 16th, 2010

In addition to calling it the “ER,” using cell phones in said “ER,” and engaging in baby talk, we can now add “scientific studies” like this to the list of things that drive me friggin’ batty.

The American College of Radiology published this study that purported to analyze the “appropriateness” of outpatient CT and MRI scans ordered from primary care clinics at an academic medical center.

In the study, researchers at the University of Washington used “appropriateness criteria from a radiology benefit management company” to determine whether CT scans and MRIs ordered by the lowly primary care physicians met “criteria for approval.”

Then researchers compared studies that did meet “criteria for approval” with those that did not meet “criteria for approval” and found that 26% of the studies ordered were considered “inappropriate.” The authors listed several examples of “inappropriate” studies such as obtaining a brain CT for chronic headache, obtaining a lumbar spine MR for acute back pain, ordering knee or shoulder MRI in patients with osteoarthritis, and ordering a CT for hematuria during a urinary tract infection.

Here’s the thing, though. The study states that “only” 24% of the “inappropriate studies” had positive results and affected patient management. In other words, had the researchers not performed the “inappropriate studies,” they would have missed clinically significant findings in roughly a quarter of those patients. The conclusion of the “study” is that because the sensitivity of appropriate studies is higher than that of inappropriate studies, primary care physicians need help to “improve the quality of their imaging decision requests.”

Want some help? Here’s some help for you: Stop the Monday morning quarterbacking and create a policy at your academic institutions so that none of the lowly primary care physicians can obtain a diagnostic radiology test without the esteemed radiologist’s approval. Lowly family practitioners can order the tests and you researchers just veto them when they cross your desk. Think of all the money and wasted testing you’ll save. Oh yeah … then you can be legally liable for the bad patient outcomes when you don’t allow the test.

Why doesn’t one of you suggest that as an official ACR policy at your annual meeting in April?

Those tests don’t look quite so “inappropriate” when you don’t have the benefit of a retrospectoscope, do they?

P.S. Have family practitioners ever done a study to determine how many of the additional radiographic tests recommended in a radiologist report (i.e. “hip fracture present, cannot rule out pathologic fracture, recommend MRI and bone scan”) were retrospectively “appropriate”?

Reducing Bloodstream Infections

Monday, February 22nd, 2010

There’s this light on my way to work that is just a royal pain. It’s set up so that you have to wait for the arrow to make a left hand turn. The intersection is busy, especially in the mornings, and the arrow only stays lit for about 13 seconds. So you end up waiting five minutes or more – through several light cycles – to make the turn.
OR … you can go straight through the intersection, turn left into McDonald’s parking lot, pull out of the parking lot, come back to the intersection from the other direction, and make a right turn, saving yourself 4 minutes and 30 seconds.
Now mind you that drivers who choose the latter route are, in effect, going through a red turn arrow – they’re just taking a bunch of extra steps to make sure that they are complying with all of the traffic laws in the process.

You’re probably wondering what a traffic light has to do with bloodstream infections. I’ll get to that later.

This month, Consumer Reports published a well-written article about reducing hospital infections, and a lot of the take-home messages are good ones. The Consumer Reports article focuses on bloodstream infections – also known as “septicemia.” Consumer Reports compared central line infection data for intensive care units at 926 hospitals in 43 states. Hospitals voluntarily submit such information to the Leapfrog Group, a nonprofit organization based in Washington, D.C., and Consumer Reports obtained the data from Leapfrog.

As many people realize, septicemia and sepsis can lead to significant mortality in patients. Approximately 20–35% of patients with severe sepsis and 40–60% of patients with septic shock die within 30 days. Anything that we can do to prevent bloodstream infections will be a net positive for patient care.

So it was interesting to read the data Consumer Reports collected regarding central line-related bloodstream infections. In every state, hospitals significantly decreased the number of central line infections that occurred. In fact, many hospitals – several with more than 6,000 central line days – reported ZERO central line-related blood infections. You read that right. ZERO. Zilch. Nada. Absolutely no incidents of central line-related bloodstream infections.

The reduction in central line-related infections is credited to a simple five-step checklist developed by Peter Pronovost, a Johns Hopkins critical care specialist. He felt that public disclosure of infection rates was a powerful motivator for hospitals to reduce the incidence of infections.

I agree, to a point, but there is a bigger motivator out there. Cold hard cash.

Under Section 5001(c) of the Deficit Reduction Act, the Centers for Medicare and Medicaid Services was required to select diagnosis codes that “have a high cost or high volume,” result in higher payment, and “could reasonably be prevented using evidence-based guidelines.” Bloodstream infections related to catheters were chosen as one of these codes, eventually becoming known as a “never event” – at least alluding to the notion that such infections should “never” happen and making a firm statement that the government would “never” pay for care related to such infections. In law, the concept of incurring liability for the occurrence of an event, regardless of whether that event is within one’s control, is called strict liability. Here are some comments I previously made about strict liability in medicine.
Faced with public scrutiny and the possibility of being held liable for providing significant amounts of uncompensated care to sepsis patients, hospitals needed to make changes … and they did.

So first I’d like to start by congratulating the hospitals in Pennsylvania that made the Consumer Reports list for ZERO central line-related bloodstream infections.
At the top of the list was UPMC Presbyterian – Shadyside. Shadyside was not only tops in the state, it was tops in the NATION. Shadyside had 13,596 patient “central line days” without a single central line-related infection. Amazing.
Also included in Pennsylvania’s list were UPMC St. Margaret in Pittsburgh with 2,902 infection-free central line days, UPMC Magee Women’s Hospital in Pittsburgh with 1,600 infection-free central line days, and Southwest Regional Medical Center in Waynesburg with 1,040 infection-free central line days.

Congratulations to these hospitals on jobs well done.

You’re probably wondering why I chose to look at the hospitals in Pennsylvania, aren’t you?

As part of the public shame er, um, disclosure efforts required under Pennsylvania law, Pennsylvania created a web site to compare various costs of treatment and efficiency of health care for multiple different medical problems. Pennsylvania collects information on more than 4.5 million patient visits each year and then summarizes that information on its Health Care Cost Containment Council web site (which it calls “PHC4”).
It just so happens that one of the metrics on the PHC4 web site is “septicemia” – those same “blood infections” that Consumer Reports wrote about.

Now if all four hospitals dropped their catheter-related blood infections to ZERO, then the incidence of blood infections should also decrease at least a little, right?

Let’s look at UPMC Shadyside. Even though the number of catheter-related blood infections was ZERO, the cases of septicemia increased each year between 2002 and 2008, and they increased a lot. As in 145 cases in 2002 up to 881 cases in 2008. The costs to treat those cases also increased – from $30,000 to more than $69,000 per event. AND their “outlier” numbers for prolonged length of hospital stay in patients with sepsis were worse than expected between 2006 and 2008.

UPMC St. Margaret’s data also showed an upward trend, from 152 cases of septicemia in 2002 to 250 cases of septicemia in 2006 and then down to 209 cases by 2008. Costs also more than doubled during that time period, reaching $37,228 per case by 2008.

Southwest Regional was the only hospital that had a downward trend of septicemia cases, but even that data was haphazard. 32 cases of septicemia in 2002, 40 cases in 2004, 14 cases in 2006, and 23 cases in 2008. The costs for treating septicemia at Southwest Regional also doubled, but in 2008, its charges were $16,253 – less than one quarter of UPMC Shadyside charges for treatment of the same medical problem.

Magee-Women’s Hospital also had strange data. The number of septicemia cases it reported remained between 5 and 9 per year from 2002 to 2005. Suddenly, in 2006, the number of cases at Magee-Women’s jumped to 28 and remained between 23 and 28 per year from 2006 to 2008. Its costs nearly doubled from 2004 to 2008, reaching $41,288 per case.

You’re probably thinking that other variables can affect this data, and I’d agree with you. Perhaps more people in Pennsylvania just happened to develop non-catheter-related bloodstream infections during those years. Maybe all the other hospitals except for those above are getting contaminated central line kits delivered to them. Maybe some hospitals focus so much on preventing catheter-associated bloodstream infections that they drop the ball in other areas. Who knows what other factors may explain the precipitous fall in catheter-related bloodstream infections despite a significant increase in bloodstream infections as a whole. It just puts a question in my mind: are things really getting better, or are hospitals all over the country just telling us … and CMS … what we want to hear?

Think about it. For the sake of example, I’m going to use UPMC Shadyside because of their high volume of patients. Assume that in 2008, 10% of the patients with septicemia at UPMC Shadyside were Medicare patients with catheter-related bloodstream infections (this article from Great Britain cites catheter-related bloodstream infections as 10%–20% of all hospital-acquired infections in the UK, so I’m staying on the low side of the cited statistics). If all those infections were considered “never events,” Shadyside would have lost more than $6 million in 2008 on the care of those patients. Every patient with a catheter-related bloodstream infection at Shadyside can translate into more than $69,000 in lost revenue for the hospital.
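The revenue estimate above works out as follows. This is a rough sketch: the 10% catheter-related share is the assumption stated in the post, and the case count and per-case cost are the PHC4 figures cited earlier.

```python
# Rough reconstruction of the lost-revenue estimate, using the post's figures.
septicemia_cases_2008 = 881      # UPMC Shadyside septicemia cases (PHC4 data)
catheter_related_share = 0.10    # assumed share, low end of the UK-cited range
cost_per_case = 69_000           # Shadyside's reported cost per septicemia case

never_event_cases = septicemia_cases_2008 * catheter_related_share
lost_revenue = never_event_cases * cost_per_case
print(f"~{never_event_cases:.0f} cases -> ${lost_revenue:,.0f} in unreimbursed care")
```

That is roughly $6.1 million, matching the “more than $6 million” figure above.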

With reimbursements being cut and many hospitals bleeding red ink, you think that every hospital out there doesn’t have an incentive to selectively interpret bloodstream infection data?

Here are some examples of how that selective interpretation might occur.

All of the data that I could find relates to catheter associated bloodstream infections in the Intensive Care Unit. If a patient develops signs of an infection and is then moved out of the ICU before official culture results come back, does that patient get dropped as a data source? Don’t know. I couldn’t find any guidelines on what to do in that situation.

How is a “catheter associated bloodstream infection” even defined? There’s no universal definition. Even the CDC admits that “the rate of all catheter-related infections (including local infections and systemic infections) is difficult to determine. Although CRBSI [“catheter related blood stream infections”] is an ideal parameter because it represents the most serious form of catheter-related infection, the rate of such infection depends on how CRBSI is defined.”

We can use the definition from the National Nosocomial Infections Surveillance System, which requires the “presence of recognized pathogen” in blood cultures not “related to” infection at another site. What if the pathogen was not specified – perhaps only gram-positive cocci were identified and subtyping was not performed? Does that data get thrown out? What if the patient has a pimple at another site? Is that “related to” the bloodstream infection? Does that data get thrown out? What if there is a bedsore anywhere on the patient’s body? No longer a catheter-related bloodstream infection?

Appendix A of this MMWR report (.pdf download) has other definitions. One definition requires that the same organism be cultured from the blood and the tip of the catheter that has been removed. What if the catheter tip wasn’t cultured? Another definition requires that two blood cultures at different times show the same organism. What if only one blood culture was done?

The definitions don’t say anything about antibiotics, either. If a patient receives antibiotics prior to blood cultures being drawn, it is likely that the antibiotics in the bloodstream will inhibit bacterial growth and will falsely decrease the numbers of positive blood cultures. If the patients get antibiotics through their central lines, how do you think that will affect the results of the cultures of the tips of the central lines? Is that reportable?

Leapfrog Group and the federal government make a big deal about paying for performance. “Tie payment to outcomes,” the Leapfrog Group advocates. When you start tying payments to outcomes without a well-thought-out plan for reliably measuring those outcomes, you’re going to get exactly what you pay for. Garbage in, garbage out. Just like drivers trying to avoid waiting five minutes to turn a corner when they’re late for work, hospitals have an incentive to avoid undesirable situations by taking advantage of loopholes in the rules and definitions.

The thing that bothers me most about data like this is that it tends to make people both complacent and angry.
People become complacent when they go to hospitals with “zero” catheter related bloodstream infections. What a great place this must be! I’m safe here! Maybe that’s true, but maybe it isn’t true. How is their data interpreted?
People become angry when they’re affected by one of these highly publicized negative outcomes. Hospitals that still “allow” patients to develop such infections are viewed as negligent and get a bad reputation.

Does this mean that hospitals shouldn’t follow Dr. Pronovost’s five-step checklist? Absolutely not. But if those checklists work sooooo well, then why doesn’t the government just say “we’re not going to pay you if you don’t use the checklist”? Focus on the process, not the outcome. You’ll get everyone following the checklist overnight. Then you’ll see how effective it really is.

Nah. There’s more political capital in making the agencies look good and making the hospitals look bad.

What’s the point of this protracted post? There are a few.
1. You get what you pay for. If you pay for statistics showing a decrease in some measured outcome, you’ll get statistics showing a decrease in some measured outcome.
2. You don’t get what you don’t pay for. When you stop paying for an outcome, those providing the services might find a way to avoid the outcome, they might find a way to make it look like the outcome never happened, they might find a way to make someone else pay for the outcome, or they just might stop providing the services altogether.
3. The devil is in the details.

Now, what’s all this about CMS representatives marching in some parade … with an Emperor?

P.S. Did anyone see any government run hospitals in Consumer Reports’ list? I didn’t.

Contrast Allergy and Shellfish

Wednesday, January 27th, 2010

A recent EMedHome Clinical Pearl sheds some light on the alleged relationship between “allergies” to radiocontrast/iodine and seafood allergies.

The pearl noted that iodine is found throughout our bodies and is added to most kinds of table salt used in the United States. Our thyroid glands need iodine to function properly. While seafood contains iodine, the allergies to seafood are due to muscle proteins, not to the iodine.
Because reactions to IV contrast are not IgE-mediated, they are not considered “anaphylactic” or “allergic.” Sensitization does not occur since the reactions are not immune-mediated. In other words, your immune system won’t “remember” a prior reaction to contrast material.
Administration of steroids has no effect on whether a severe reaction will occur. Since the reaction is not “allergic”, Benadryl probably won’t have any effect, either – although this was not specifically stated in the study.
Severe reactions to contrast media occur in 0.02-0.5% of cases and deaths occur in 0.0006-0.006% of patients (something else to consider when deciding whether to undergo repeated CT scans), but serious reactions and death are not related to allergies to iodine/seafood or to prior reactions to contrast media.

One recently-published study used to create the pearl dispels this “medical myth” quite nicely.

Want to Avoid Appendicitis? Get Your Flu Shot.

Tuesday, January 26th, 2010
Could appendicitis be a viral illness … or be related to a viral illness? A recent Archives of Surgery article raises some interesting questions.

Researchers performed a retrospective analysis of appendicitis cases and compared them to the incidence of influenza, rotavirus, and gastrointestinal infections. Using 40 years of data, they noted that general trends for appendicitis and influenza tended to parallel each other through the years, although influenza obviously had more predominance in winter months while appendicitis rates remained fairly constant throughout the year. No such correlation was found between rates of rotavirus and appendicitis.

Researchers also noted that appendicitis tended to occur in “clusters” – with several citations to appendicitis outbreaks.

Most interesting to me was that “perforating appendicitis” – where the appendix ruptures – and “nonperforating” appendicitis – where the appendix becomes inflamed but does not rupture – had no correlation to each other or to any of the infectious diseases studied. The researchers stated that “our epidemiologic findings suggest that patients who have perforated appendicitis have a different disease entity than those with nonperforating disease.”  The problem now is figuring out which ones will rupture and which ones won’t.

This study makes me wonder whether the lack of elevated WBC count in so many appendicitis cases may be due to the viral effects of the disease. It also makes me wonder whether there is a correlation between elevated WBC counts and “perforating disease.”

I wonder how many physicians have been successfully sued for being negligent in "delaying surgery" and "allowing a patient's appendix to rupture" when the ruptured appendix may have been due to factors beyond the physician's control. Cerebral palsy litigation comes to mind. Until we can distinguish between the two types of appendicitis – if two types of appendicitis really do exist – the emphasis will be on removing a patient's inflamed appendix regardless of the cause of inflammation. If there really are two types of appendicitis, how many unnecessary surgeries are being performed to avoid liability for missing an appendix that perforates? Very interesting starting point for more studies.

Also of interest is that a USA Today article which cited the study mentioned a USC surgeon who reported that 70 cases of CT scan-confirmed appendicitis went away when treated with antibiotics – which screws up the whole notion of the “viral illness” theory but certainly adds to the “everyone is going to die from MRSA” theory.

Could Satisfaction Surveys Be Harming Patient Care?

Monday, December 14th, 2009

A couple of weeks ago, I posted a survey about patient satisfaction surveys. So far, 642 people have responded, which is outstanding.

Some of the responses were surprising. I'm getting the impression that the surveys really are more about satisFICTION than satisfaction, but you can judge for yourselves.

Health care providers
Of the health care providers who responded to the survey, 82% worked for hospitals/employers/practices that collected patient satisfaction data.
57% of those collecting data used a paid service such as Press Ganey or Rand. 23% used in-house surveys.
More than two thirds of respondents did not know their survey response rate. Of those that did know, most had a response rate between 2% and 10%.
65% of respondents said that their satisfaction scores correlated below average or poorly with the opinions of the patients they treat.
Regarding treatment, more than 40% of respondents had altered treatment due to the potential for a negative patient satisfaction survey. Of those that altered treatment, 67% gave treatment that was probably not medically necessary more than half of the time. Eleven percent of respondents described adverse outcomes from performing such treatment, including kidney damage from IV dye, allergic reactions to medications, hospital admits for “oversedation” with pain medications, and Clostridium difficile diarrhea.
Because of the effects of patient satisfaction surveys, more than 25% of respondents performed testing and gave medications that were probably not indicated, 18% admitted patients who probably did not require admission, and 20% wrote work notes for patients that were probably not warranted. Others mentioned that they did not perform patient education that they feared would anger patients and that they spent “prolonged” amounts of time in rooms selling a treatment plan.
More than 75% of respondents felt that patient satisfaction scores decreased the quality of care that they provided and nearly 90% of respondents believed that patient satisfaction scores decreased the efficiency with which they were able to evaluate and treat patients. More than half stated that patient satisfaction scores increased the amount of testing they performed.
Eighty-one percent of medical providers were aware of instances in which patients intentionally provided inaccurate derogatory information on a satisfaction survey, and 84% felt that patients used the threat of negative satisfaction surveys to obtain inappropriate medical care.
Nearly one in eight respondents had their employment threatened due to low patient satisfaction scores.

Administrators
Administrators seemed to agree that patient satisfaction scores do not correlate well with general opinions of patients treated in their facilities. All administrators answering the question rated the reliability of patient satisfaction scores from average to below average. The importance of satisfaction scores varied: 25% felt that scores were very important while 75% felt that scores were mildly to moderately unimportant. Administrators seemed to feel that satisfaction scores had little effect on efficiency or the amount of testing performed. However, in contrast to answers given by the providers, a vast majority of administrators felt that patient satisfaction scores made it significantly less likely that providers would render inappropriate medical treatment.
All administrators wanted their percentage/percentile of “excellent” scores on satisfaction surveys to be 90% or greater.
Only one administrator would discount or ignore low survey scores from specific patients and only one administrator reviewed the medical records of patients who provided low satisfaction scores.

Patients
More than half of patients responding to this survey did not fill out satisfaction surveys after visiting a hospital or medical practice.
Of those who did complete surveys, 70% did so to provide complimentary information, 23% did so to complain about care received or a specific provider, and about 7% did so to provide suggestions for improvement. 73% never received follow-up after completing a satisfaction survey, despite the fact that nearly 60% expressed a desire for feedback.
Of the patients who did not complete satisfaction surveys, 40% stated that the facilities they visit do not offer them. 18% felt that surveys were a waste of time. 21% did not believe that anyone would act upon their responses. Many other respondents noted that they felt the surveys were irrelevant to the care they received and that they found it "insulting" for medical professionals to be graded on their medical care by laymen who know little about medicine.

The responses about what question respondents would add to a satisfaction survey were quite insightful. Many advocated for shorter surveys. Several suggested asking about the one best experience and the one worst experience they had at the facility. One suggested asking whether the amount of money charged was worth the care received. Several suggested asking about the one thing a facility could do to improve. Many asked about the effectiveness of communication.
There always have to be some smart asses in the crowd. One suggested a question asking why “the porridge-bird lays his egg in the air.” I’ll leave that one to all of you to figure out.

Probably the biggest surprise to me was the number of medical providers who stated that patient satisfaction surveys caused them to provide inappropriate medical care while administrators seemed to believe that just the opposite would occur. I also found it statistically interesting that all administrators wanted their facilities to be in the 90% or above club when only 10% of a given survey population can ever be in the 90th percentile.
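That percentile point is worth illustrating. Here is a toy simulation with made-up scores (my own hypothetical numbers, not survey data): even if every facility drives its raw "excellent" rate sky-high, the 90th-percentile cutoff simply rises with them, and only about one facility in ten can ever sit at or above it.

```python
import random

random.seed(42)

# Hypothetical raw "% excellent" scores for 1,000 facilities, all quite high.
scores = [random.uniform(85, 99) for _ in range(1000)]

# 90th-percentile cutoff: the score that 90% of facilities fall below.
cutoff = sorted(scores)[int(0.9 * len(scores))]

# No matter how high the raw scores climb, only the top 10% clear the cutoff.
top_decile = [s for s in scores if s >= cutoff]
print(f"Facilities at or above the 90th percentile: {len(top_decile)} of {len(scores)}")
```

Every facility in this sketch scores at least 85% "excellent," yet exactly 100 of the 1,000 land in the 90th percentile – which is the author's point about the "90% or above club."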

I plan to leave the survey open for another week or so to see whether any extra responses are generated by this post. If you haven’t completed the questions, please give them a look. It shouldn’t take more than 5 minutes.

The survey is at this link on www.esurveyspro.com

Once the survey is closed, I’ll analyze the data a little further and see whether EP Monthly will publish the results. If not, I’ll post a .pdf of the results for everyone to download.

Thanks again for participating!

New Medical Inventions

Wednesday, November 25th, 2009

The Lung Flute.

Interesting concept. A small reed within the contraption vibrates when a patient blows into the mouthpiece and the vibrations are transmitted into the lower lungs, changing the viscosity of sputum in the lower airways. Video of the device in action is here.

Seems odd that such a small device would have such a significant effect.

Call me crazy, but I’d try to come up with a better name than the “lung flute”. Maybe something cool like the “mucinator” or something scientific like a “mucociliary clearance device.”

I just couldn’t see writing an order for a stat “lung flute” to a patient’s bedside.



Also check out the Littmann 3200 stethoscope. For a mere $700+, you can upload patient heart sounds via Bluetooth to a computer and use the included computer program to analyze the tones for arrhythmias and for murmur analysis. Video here. The device is reportedly much more sensitive than a physician's ears at picking out abnormal heart sounds.

Which leads me to the question … if this device is so much better than physicians at hearing murmurs, then why do they still put earpieces on it? They just ought to sell the handle portion with its computer screen readout.

Maybe they’re planning to turn it into a hybrid device – like a telephone. Put the earpieces in your ears and talk into the bell to answer pages when you’re not listening to patients’ hearts.

Or maybe it will sync up with your iTunes account so you can pretend like you’re listening intently to a murmur when you’re really jamming to Linkin Park.

Wonder if they make a hack for it to check e-mail.
