Confounding Variables: Part 3 in a series

When reading the literature, do you ever wonder who really has the right answers? How can readers come to different conclusions from the same data? While it is tempting simply to agree with an author’s conclusions, authors are often invested in their own work, which introduces the potential for bias. Confounding variables are one source of bias that can easily alter conclusions and undermine a study’s internal validity. Such variables may appear to have a causal relationship, positive or negative, to the outcome; but if they are truly confounding, no such relationship exists, and they serve only to lead the reader to erroneous inferences about the true cause and effect. This kind of bias arises when there are important differences between the groups being compared in some parameter other than the one of interest. In some trials confounding occurs because of poor study design, while even in elegantly randomized trials it can happen simply through bad luck.

Suppose an investigator offers free medical care as an inducement for hypertensive patients to participate in testing of a new treatment, and then compares their outcomes to those of historical controls on other medicines. A conclusion that the study drug improves outcome would be unfounded, because the real reason for the benefit might be improved compliance, a factor known to be critically important and problematic in hypertension, and one likely to be much better in a group of patients interested enough to volunteer for a study and given free care. Unless the authors could measure compliance and either show that it was the same in both groups or somehow control their results for it, they could not exclude the strong possibility that this confounder, rather than the drug being studied, was the real cause of the observed benefit.

In a famous study of cardiac arrest, investigators concluded that bretylium was superior to lidocaine because it was associated with greater survival when patients were randomized to receive one or the other as the first ACLS drug. The authors reported, but did not recognize the critical importance of, the fact that most of the bretylium patients were in V-fib, while most of those getting lidocaine were in asystole. When the results were controlled for the confounder of initial rhythm, the two drugs were no different.
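To make the arithmetic of this kind of confounding concrete, here is a minimal Python sketch using purely hypothetical counts (these are illustrative numbers, not the actual trial data): within each initial rhythm the two drugs perform identically, yet the crude, unstratified comparison makes the drug that happened to be given to more V-fib patients look far better.

```python
# Hypothetical counts (NOT the trial's data) illustrating confounding by
# initial rhythm: survival is identical for both drugs within each rhythm,
# but the drugs were given to very different mixes of patients.
strata = {
    # rhythm: {drug: (survivors, total treated)}
    "V-fib":    {"bretylium": (30, 100), "lidocaine": (6, 20)},
    "asystole": {"bretylium": (1, 20),   "lidocaine": (5, 100)},
}

def crude_rate(drug):
    """Survival rate ignoring rhythm -- the unadjusted comparison."""
    survived = sum(strata[r][drug][0] for r in strata)
    treated = sum(strata[r][drug][1] for r in strata)
    return survived / treated

for drug in ("bretylium", "lidocaine"):
    print(f"{drug}: crude survival {crude_rate(drug):.0%}")
    for rhythm, arms in strata.items():
        s, n = arms[drug]
        print(f"  {rhythm}: {s}/{n} = {s / n:.0%}")
```

With these made-up numbers the crude rates are roughly 26% versus 9%, even though the stratum-specific rates are identical for the two drugs (30% in V-fib, 5% in asystole); stratifying by the confounder makes the apparent difference disappear.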

There are always potential confounders in any study. If it turned out that the most important variable in suicide prevention, treatment of AMI, or anything else was the third letter of one’s mother’s maiden name, not only would all reported results be misleading, but we would probably never figure it out, nor be able to calculate its significance when analyzing related research. Until we know that some characteristic could potentially confound our results, we cannot possibly do anything about it. We are therefore obligated, as investigators and as readers, to think about variables that are not the subject of a given study but are likely to be important, and to assure ourselves that they are not confounding the results.

Thus, if dog bite victims get fewer infections when they are sutured than when they are not, and suturing is assigned not by randomization but by clinician judgment, we have to ask whether any confounders might affect the results before we conclude that suturing decreases the rate of infection. On the one hand, since clinicians are more likely to suture large wounds than small ones, and since we would expect larger wounds to have more maceration and be more infection-prone, this bias should favor the non-sutured group. So even without controlling for wound size, we can conclude that results favoring suturing are not weakened by this confounder; if anything, they may be strengthened.

On the other hand, there are other possible major confounders that could systematically bias the comparison in favor of sutured wounds: we cannot suture puncture wounds, which become infected far more often, while we are most likely to suture faces and scalps, which almost never become infected. Finally, when we suture a wound we also debride and irrigate it. Unless there is evidence that these steps were performed as diligently in the non-sutured group, it may be that debridement and irrigation are what actually decrease infection, and that the observed results occurred despite an opposite, smaller effect of the suturing itself increasing the infection rate.
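This “which way does the bias point?” reasoning can also be written out explicitly. The sketch below is purely illustrative (the list of confounders and their labels are assumptions drawn from the discussion above, not from any dataset): a candidate confounder can explain the observed benefit of suturing only if it biases the unadjusted comparison in the same direction as that result.

```python
# Illustrative sketch of direction-of-bias reasoning for the dog-bite example.
# The observed (unadjusted) result favors sutured wounds; a confounder can
# explain that result only if it also makes sutured wounds look better.
observed_winner = "sutured"

candidate_confounders = {
    # confounder: group it makes look better in an unadjusted comparison
    "larger, more macerated wounds are the ones sutured": "non-sutured",
    "puncture wounds (most infection-prone) cannot be sutured": "sutured",
    "face and scalp wounds (rarely infected) are usually sutured": "sutured",
    "debridement and irrigation accompany suturing": "sutured",
}

for confounder, favors in candidate_confounders.items():
    if favors == observed_winner:
        verdict = "could explain the result; must be measured or controlled for"
    else:
        verdict = "works against the result; if anything, strengthens it"
    print(f"- {confounder}: {verdict}")
```

Only the first entry biases against suturing; the other three could each account for the apparent benefit, which is why they would have to be measured or controlled for before any causal claim is made.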

Even if there were a higher infection rate in the entire population of people bitten by cats (including the many who never present for care), it might have nothing to do with the bacterial flora of cats’ mouths, as many have proposed. Rather, it may be due to the confounding variable of the type of wound inflicted. Regardless of how a wound occurs (from one animal to another, or even comparing bites to other types of wounds), puncture wounds are more likely than lacerations to become infected, undoubtedly because they are much more difficult to adequately debride or irrigate.

This concludes our discussion on internal validity and confounding variables. Next month, we’ll discuss the importance of study analysis and the misuse of statistics.

Jerome Hoffman, MD is a Professor at the UCLA School of Medicine; Faculty at the UCLA ED; Associate Medical Editor for Emergency Medical Abstracts
