Viewpoint: A Dangerous Game of Bias

Mosley, Mark, MD

doi: 10.1097/01.EEM.0000530456.20818.a8
Viewpoint

Dr. Mosley is the director of quality improvement, the medical director for residency education, and the director of operations management at Wesley Emergency Center in Wichita, KS.

Medical science since the 1960s has been based on the concept of evidence-based medicine, with a hierarchy of evidence whose summit is the placebo-controlled, double-blind, randomized controlled trial. Why? The reason is bias. (“Bias in the ER.” Nautilus, Feb. 9, 2017; http://bit.ly/2CYVDNC.)

Under the best circumstances, with the best physicians, the best knowledge, and the best intent, we use tests and treatments under the illusion that we are doing more good than we actually are. This is called therapeutic illusion, and it affects all of us. This implicit bias overestimates the benefits and underestimates the harms of what we do.

A recent systematic review is the largest study to demonstrate this. Its 48 studies involving 13,011 clinicians showed that clinicians overestimated benefit 32 percent of the time (and underestimated it 9 percent) and underestimated harm 34 percent of the time (and overestimated it 5 percent). (JAMA Intern Med 2017;177[3]:407.) Clinicians rarely had accurate expectations of benefit or harm.

This has been confirmed in many conditions: therapeutic hypothermia for post-ventricular fibrillation arrest, IV tPA for stroke, EGDT for sepsis, and now thrombectomy for stroke up to 24 hours in selected mismatch patients. In each of these conditions and many more, an initial study, often small, underpowered, and burdened with confounders, reaches a marginally positive, statistically significant result. (N Engl J Med 2002;346[8]:557; 1995;333[24]:1581; 2001;345[19]:1368; 2017 Nov. 11; doi: 10.1056/NEJMoa1706442.) The other studies before and after the positive article are almost always negative. Astonishingly, the small positive study is elevated to the standard of care (with billions of dollars spent nationwide in all kinds of emergency settings), and we implement these highly select findings from small studies conducted at unique, large medical research facilities.

Meanwhile, all the negative studies, even those much larger and better conducted, are simply ignored. Therapeutic hypothermia is quietly rebranded as targeted temperature management. NINDS and ECASS III are embraced to the exclusion of IST-3 and 12 other negative studies of thrombolytics for stroke. SIRS and EGDT persist because hospitals are paid better for them, even though all recent studies disavow their accuracy and benefit. The DAWN trial opens the window to 24 hours for every hospital that does interventional radiology, though it took almost three years to find 200 patients from multiple countries at centers doing very high-volume interventional stroke care. Then the local specialist takes the small, unique study and extends its boundaries (indication creep). The experts claim they can “tell who they think can do well.” We tend to confirm what we want to believe. And under the worst of intentions, we only want to confirm what makes our hospital money.

This is exactly why we must not view any disease process on a case-by-case basis, even if it is by the expert we are consulting. None of us is immune from implicit bias. It lacks humility to say that we know who needs treatment and who doesn't based on our own experience. We should heed the words of Leo Tolstoy, whose writing about art is just as applicable to science: “I know that most men, including those at ease with problems of the greatest complexity, can very seldom discern even the simplest and most obvious truth if it be such as to oblige them to admit the falsity of conclusions they have formed, perhaps with much difficulty—conclusions of which they are proud, which they have taught to others and on which they have built their lives.”

Even if we embrace all of the scientific evidence as fairly as we can, these studies were done under the very best circumstances and are unlikely to yield as much benefit in our own practice, to say nothing of the patients' desires and expectations. This is why all therapies, even seemingly obvious ones, should be undertaken with full informed consent and a discussion of the magnitude of benefit and the magnitude of harm.

This is not only true for studies of particular therapies; the same skepticism must be applied to clinical scores, guidelines, and protocols. Whatever benefit they may bring by standardizing an approach, the evidence for that benefit is rarely high quality, and one must weigh the magnitude of harm in using an ABCD2 score, an IDSA community-acquired pneumonia guideline, or a chest pain protocol. Unnecessary costs, false-positives, and downstream harms must be subtracted from the small percentages of benefit. Therapeutic illusion has exploded under the guise of quality, protocols, and order sets.

The bottom line: Truth is complex and difficult to come by. Science is a controlled attempt at approximating truth, and good science attempts to minimize bias. Those who trust themselves, the expert, or the protocol as much as or more than the total weight and balance of the scientific evidence play a dangerous game of bias called therapeutic illusion, in which personal belief overestimates benefit and underestimates harm.

Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.