Lessons from Vegas can help physicians question dubious clinical trials
In 2001, Bernard et al. published a study in the New England Journal of Medicine that changed the way physicians treated severe sepsis. The PROWESS trial asked whether, among patients with severe sepsis, activated protein C might improve survival. According to the researchers, the study was so successful that it was halted early: they had reached statistical significance with 1,690 subjects, so why waste any time getting the intervention to market?
Stopping a study early because it is proving harmful to participants makes intuitive sense. But evidence-based medicine experts say it is usually a mistake to stop a study mid-stream because it has shown great early results. Why?
It’s a bit like gambling at a casino. Take any of the big games where you play against the house – I like craps myself. I know that the odds are stacked against me: the house wins something like 51% of the time. As the cliché goes, casinos are not built for me to make money. But I also know that the house advantage is an average over the long run, and there will be fluctuations along the way. It’s like walking a dog. You walk in a straight line from point A to point B (the overall trend); the dog wanders a little this way and a little that way (the variation), but still starts at A and gets to B. When I play craps, I’m hoping to catch a little variation in my favor and quit playing before the game regresses back to the mean. So is the drug company.
If I’m the drug manufacturer, I know that overall I am more likely to lose than win (the drug doesn’t work). If I win some money in the first hour or two (early benefit), I know it’s probably a fluke, not loaded dice (a drug that works). I can take my money and walk away (stop the trial), or I can keep playing. If I stopped with a few extra dollars in my pocket, would I conclude that I will always win at craps (blockbuster drug)?
No. I know that I just caught a little variation in the overall pattern, and that in the long run, craps will cost me money.
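The craps intuition is easy to check numerically. Below is a quick back-of-the-envelope simulation (not from the article; the 244/495 pass-line win probability is the standard craps figure, and the session lengths and gambler counts are arbitrary assumptions for illustration). It shows that a large fraction of short sessions end in the black even though every gambler faces the same long-run house edge:

```python
import random

random.seed(0)
P_WIN = 244 / 495  # standard pass-line win probability in craps (~49.3%)

def net_after(n_bets):
    """Net result of n_bets even-money pass-line bets."""
    return sum(1 if random.random() < P_WIN else -1 for _ in range(n_bets))

GAMBLERS = 2_000
ahead_early = sum(net_after(20) > 0 for _ in range(GAMBLERS)) / GAMBLERS
ahead_late = sum(net_after(2_000) > 0 for _ in range(GAMBLERS)) / GAMBLERS

print(f"ahead after    20 bets: {ahead_early:.0%}")
print(f"ahead after 2,000 bets: {ahead_late:.0%}")
```

Even with the odds against them, many gamblers are up after 20 bets; far fewer are up after 2,000. Quitting while ahead in a short session tells you nothing about whether the dice are loaded – or the drug works.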
What if I lose money (i.e. harm)? Maybe I could win it back, but should I keep playing? The problem is that if I keep losing money (the drug is harmful), eventually large men will come after my family (continuing the study will harm subjects), and I don’t want that.
Of course, in a clinical trial, we don’t actually know whether or not the therapy works. Clinical trials start from a position of equipoise: we don’t think the drug is harmful, but it might benefit patients, and the risk of harm vs. risk of benefit is balanced. But if we show harm early, we lose that equipoise, and we have to stop the trial before we harm the study subjects too much, knowing that we may have given up on a worthwhile drug but that it was just too risky.
Are there times when we should stop a trial early for a huge, obvious benefit? I think so, but only if the study is adequately powered at that point – which is very unlikely, because studies are designed to be powered at the end. Even if a therapy appears very beneficial early in a trial, as with PROWESS, without adequate power that appearance of benefit can easily be a statistical fluke. This is the essential question in statistics: do the differences the study shows represent a true difference between the groups, or just a fluctuation in the data? In the end, PROWESS confused plenty of intelligent physicians . . . until PROWESS-SHOCK came along and debunked the earlier findings entirely.
Back to the casino: if you walk in, drop a quarter into a slot machine, and win $1 million on the first pull, would you conclude that the machine is a winner? What if you win $10,000 on 3 of your first 5 pulls? At what point do you decide that the machine might be mis-calibrated (the drug works), and that you should tell your parents to cash out their 401(k) and spend it all on this machine before the casino catches on (FDA approval)? It takes a combination of a big enough benefit across a big enough sample to provide the power needed to end a study early, and that is rare.
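Rough numbers make the slot-machine intuition concrete (the 1-in-1,000 per-pull jackpot odds are my invented assumption, not anything from the article). A straight binomial tail shows why one win is noise but three-of-five is something else:

```python
from math import comb

P_JACKPOT = 1 / 1_000  # hypothetical per-pull jackpot odds on a fair machine

def prob_at_least(k, n, p):
    """P(at least k jackpots in n pulls): binomial upper tail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

one_of_one = prob_at_least(1, 1, P_JACKPOT)
three_of_five = prob_at_least(3, 5, P_JACKPOT)
print(f"1 jackpot in 1 pull:   {one_of_one:.4%}")    # 0.1000% – ordinary luck
print(f"3 jackpots in 5 pulls: {three_of_five:.1e}")  # on the order of 1e-8
```

One jackpot happens to somebody every day; three in five pulls is roughly a one-in-a-hundred-million event on a fair machine, which is when "lucky streak" stops being the best explanation. The same logic is what an adequately powered interim analysis has to establish before stopping a trial.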
Of course, the casino analogy isn’t perfect. The major difference is that when I walk into a casino, I already know the odds are stacked against me. When a clinical trial is started, no one knows what the answer actually is, and the study investigators and the safety monitoring board have to balance the complex questions about potential efficacy, harm, and ethics without knowing what the “true” answer actually is. Starting a trial would be like walking into a casino knowing that some of the slot machines actually are calibrated for you to win, but you just don’t know which.
Also, unlike in a casino, therapeutic benefit and harm are not simply two sides of the same coin; they are two separate properties of the treatment. Side effects may appear later, and may be less common than benefits but carry severe consequences. Stopping early can capture an apparent benefit before the harms have had time to accrue – or even to show up.
Whatever the reason a study is stopped early, all of the results should be published. Otherwise we end up with publication bias – the “file drawer problem.” Positive trials are much more likely to be published than neutral or negative ones, which on the whole makes therapies look more beneficial than they are. Upon review, at least 143 “big” studies have been stopped early for benefit; most were published in the five biggest journals, and most were (surprise) funded by industry.2
Seth Trueger, MD, is a Health Policy Fellow in the Department of Emergency Medicine at George Washington University. He moderates the @epmonthly Twitter feed and is the author of the blog MDAware.org. Special thanks to Jeremy Faust, David Marcus, Tessa Davis, and Minh Le Cong for their input on this topic via Twitter.
1. Briel M, Bassler D, Wang AT, Guyatt GH, Montori VM. The dangers of stopping a trial too early. J Bone Joint Surg Am. 2012 Jul 18;94 Suppl 1:56-60.
2. Montori VM, Devereaux PJ, Adhikari NK, Burns KE, Eggert CH, Briel M, Lacchetti C, Leung TW, Darling E, Bryant DM, Bucher HC, Schünemann HJ, Meade MO, Cook DJ, Erwin PJ, Sood A, Sood R, Lo B, Thompson CA, Zhou Q, Mills E, Guyatt GH. Randomized trials stopped early for benefit: a systematic review. JAMA. 2005 Nov 2;294(17):2203-9.
Stop That Trial!
A quick review of clinical trials halted early for harm or benefit
STOPPED EARLY FOR BENEFIT
Piccart-Gebhart MJ, Procter M, et al.: Herceptin Adjuvant (HERA) Trial Study Team. Trastuzumab after adjuvant chemotherapy in HER2-positive breast cancer. N Engl J Med. 2005 Oct 20;353(16):1659-72.
de Bono JS, Logothetis CJ, et al.: COU-AA-301 Investigators. Abiraterone and increased survival in metastatic prostate cancer. N Engl J Med. 2011 May 26;364(21):1995-2005.
The first study of trastuzumab (Herceptin) was stopped early for large, significant benefits in treating HER2-positive breast cancers. Despite controversy over its cost-effectiveness in the UK and New Zealand, trastuzumab appears to be effective. In the de Bono study, abiraterone was effective enough at interim analysis to suggest that it was unethical to keep half of the study participants on placebo.
STOPPED EARLY FOR NO BENEFIT
Ware RE, Helms RW; SWiTCH Investigators. Stroke With Transfusions Changing to Hydroxyurea (SWiTCH). Blood. 2012 Apr 26;119(17):3925-32.
In this study comparing treatments for sickle cell anemia in children, the interim analysis showed no benefit, so the study was stopped for futility.
NOT STOPPED, THANKFULLY
Abraham E, Reinhart K, et al.: OPTIMIST Trial Study Group. Efficacy and safety of tifacogin (recombinant tissue factor pathway inhibitor) in severe sepsis: a randomized controlled trial. JAMA. 2003 Jul 9;290(2):238-47.
Hoping to find another drotrecogin alfa, study investigators nearly stopped the trial of tifacogin early for promising results. They did not, and when the study was completed, the drug showed no benefit over placebo.