In the United States alone, an astounding 20,000 practice guidelines have been developed by more than 500 specialty societies and organizations.1 Their proponents would have us believe that unless you have at least 100 perfect, prospective, randomized, double-blind trials to meta-analyze into a lengthy but coherent treatise on any clinical problem — everything from hemorrhoid care to heart attacks — your decision-making is so seriously flawed that a toss of the coin would prove more judicious.
They would have us believe that the art of medicine is dead, and that true medical practice can be achieved only by consulting a statistician every time you want to prescribe an aspirin or apply a Band-Aid to ensure the best cost-benefit ratio. Such peering-down-the-nose snobbery ignores the fact that we as emergency physicians should be the first to realize that our patients just don't “read the book” on many presentations, and that the Neanderthal concept of — do I dare even mention the word for fear of being exiled to the frozen tundra of medicine? — clinical experience is still alive and well for many of us practicing in the trenches.
Don't get me wrong. Ten years ago, during my impressionable years of residency, when I was proud to admit that I had read every study ever done on superior vena cava syndrome (but had yet to actually run into one in the flesh), I held in highest regard the ivory tower denizens who were my professors and mentors. Even though they rarely touched a patient in full view of witnesses, I felt that their ability to rattle off endless studies, including footnotes and references, proved they were the best clinicians around, and in fact, many were.
In turn, however, I regarded any physician, usually one coming from lowly private practice, who even uttered the slightest reference to his personal experience with a clinical entity with about the same respect as the Road Runner gave Wile E. Coyote. Beep, beep, and I was gone in a trail of dust, even though that same private physician had probably seen in his 20 years of experience about a thousand cases of any clinical problem along with all their myriad presentations.
When I would present what appeared to be a classic case of anything, I would back up my stuttering clinical acumen with pronouncements of studies done at top medical centers, with authors' names and dates of publication, expecting the private attending to grin with approval at my wisdom, far in excess of my years. After all, I figured he hadn't read a single paper since about two years after he was board certified, and had probably let his subscription to The New England Journal of Medicine lapse a few years later, only to be replaced by Forbes, which is what I thought all private practice physicians read.
Needless to say, I found myself disappointed more than once when, instead of approval, my presentation and diagnosis were met with skepticism as the private attending argued my points — and not with competing literature on the subject culled from respected journals even more obscure than mine. (A basic rule of academic medical one-upsmanship is that the more obscure the journal the better, both because it implies a more extensive reading of the literature, and because it's more difficult to counter an argument that originates from a journal written in a language only one person in the county understands.)
Rather, he would counter with a quiet, introspective, and concise review of similar cases he had seen in his own practice. Once those words were spoken, “in my experience,” my academically trained brain, no longer being fed the wholesome, intellectual diet of randomized clinical trials, quickly proceeded to shut down to save valuable resources until the seemingly worthless babble coming out of the mouth of the private attending stopped, at which point I would nod respectfully, and do whatever I was going to do anyway with the patient.
Due to what can only be accounted for by an as-yet-undiagnosed inborn error of metabolism, I chose a career in — you guessed it — private practice. Perhaps I thought I could bring some academic light into the dark caverns of emergency medicine, convert the unconverted, save the unsaved, and while I was at it impress a whole lot of people with knowledge of diseases so arcane and articles so abstruse that the lowly, knuckle-dragging inhabitants of private practice would embrace academic emergency medicine, learn to stand upright, and wonder how they ever managed a case of congestive heart failure without it. And for a while it felt pretty good.
I was able to increase my clinical acumen by the novel approach of actually seeing patients, while at the same time utilizing those clinical trials I had learned in residency to augment my skills. That was until I started working in an ED that sees 75,000 patients a year. When you personally see about 7,000 of those patients, it's funny how many unique and atypical presentations of disease you encounter. I remember the 30-year-old man with no cardiac risk factors, no abnormal physical findings, no abnormalities on EKG or laboratory studies, and unfortunately, no pulse when he went into Torsades after his circumflex artery totally clogged while he was waiting to be discharged from the ED. (He survived after I first shocked myself, and then put the paddles on him.)
I remember the 36-year-old female with sudden onset of severe right lower quadrant pain that came on just after her Thanksgiving meal. Classic kidney stone. Had to be. Classic ruptured ovarian cyst. No doubt about it. Not so classic appendicitis, which is what was eventually removed from her body in the operating room. Then there was the 70-year-old woman who looked healthier than I did, who complained of this gradual-onset, nagging headache for three weeks and had a normal CT scan. I called her private physician, and although he didn't say it, he implied that in his experience she should probably have an LP. Although I didn't say it back, in my academic experience, she was an old lady with a muscle tension headache who needed to go home. Reluctantly, I put a spinal needle in her back, and I'll be darned, a continuous drip of pink fluid emerged. She had her anterior communicating artery aneurysm fixed the next day.
Then there was the 10-year-old boy with a new-onset non-febrile seizure, who was now alert and awake with a huge grin on his face, giving high fives to anyone who walked by his bed. Before the academic side of my brain could lecture my PA on why not to order that CT she had just gotten (all the studies say it is useless in such cases), she came to me with the results: a posterior fossa astrocytoma the size of a small Third World country. I went back to the patient for one more high five.
I could go on forever with both my and my colleagues' stories of “in my experience….” What all this has convinced me of is that no wealth of clinical trials can supplant the knowledge base of one experienced clinician. Although the body of knowledge of medicine is based on research, the practice of medicine is made at the bedside, one patient at a time. Statistics can assist the clinician in his decision-making but cannot replace experience and a deep concern for the patient's well-being. Even good baseball coaches understand the limits of statistics. Despite years of stats saying the other team's clean-up hitter usually pulls to left field with men on base, at any one time he may just decide to hit it up the middle. In that case, the worst outcome is a run scored. The outcome could be much graver if one relies only on the most recent clinical trial to deny a patient a CT scan or a spinal tap or an EKG.
Indeed, a recent article in The Lancet has cast doubt on the reliability and independence of practice guidelines.2 Of 432 guidelines studied, fully 88 percent gave no information on searches for published studies, 82 percent gave no explicit grading on the strength of the recommendations, and 67 percent did not report any description of the type of stakeholders involved in the creation of the guidelines. All three criteria were met in only five percent of the guidelines, prompting a call for more common standards for reporting such guidelines.
Two articles by different teams of investigators in the September issue of Annals of Emergency Medicine focused on meta-analyses of the use of magnesium in acute asthma.3,4 Although in general their results were similar, the two meta-analyses differed in many ways, well elucidated in an accompanying editorial by Robert Wears, MD, of the University of Florida.5 First, the two meta-analyses, although posing the same questions and criteria, utilized slightly different sets of studies to make their points. And even though the bulk of the studies were similar, they were often interpreted differently by the investigators. Finally, as with any review of literature, only published studies were included, and unpublished studies of equal significance but with negative results were not. Thus, not all meta-analyses and practice guidelines are created equal, and too much reliance on them can be hazardous to one's own health and one's patients'.
From all of this I have concluded that, similar to Piaget's theory of child development, physicians go through developmental stages as well. The first is the Academic Stage (1 to 5 years' experience, 0 to 500 patients seen): He doesn't know a rectal from a reflex hammer, but has memorized every JAMA article on kuru since 1929. Then there is the Budding Clinician Stage (6 to 10 years' experience, 501 to 10,000 patients seen): He has actually used a reflex hammer, and knows that the glove goes on first when doing a rectal. He has also forgotten most of those kuru articles prior to 1970.
Finally, after getting burned by atypical presentations and misguided meta-analyses, and after about 10,000 patients and 10 years' experience, there is the Mature Clinician Stage: He still subscribes to JAMA but commits to memory only the three most important articles published in the past five years, tends to look at Forbes at the grocery store check-out stand but puts it back before paying, and has found an additional use for the reflex hammer: cracking walnuts and pecans.
One corollary I have found to this theory that seems to run counter to popular opinion concerns the use of ancillary tests. It is assumed that as one's clinical skills increase, the need for testing decreases because most diagnoses can be made with a good history and physical. This certainly would be true as one progresses from the Academic Stage to the Budding Clinician Stage, during which time one can never be quite sure what it is he is palpating in the abdomen because he has only palpated two of them, and so, in order not to miss anything, one needs the comforting results of numerous laboratory tests and radiographs.
During the Budding Clinician Stage, however, there is a fine balance between clinical acumen (“Yes, that is a negative straight leg raising test”) and academic knowledge (“Ergo, I do not have to order lumbar radiographs”). In the Mature Clinician Stage, though, one has seen so many atypical presentations that, despite the best published data and the most experienced laying on of hands, one can never be completely sure how useful the information in the history and physical really is — hence the renewed reliance on testing one thought unnecessary just a few years prior.
Before writing this commentary, I searched the world medical literature to find academic support for this hypothesis, but unfortunately no one has yet examined this important topic. I am hoping that someone in academia will write a grant proposal and try to study it, but until then I fully believe that it is true, as best as I can tell — at least in my experience.