When the harried technician put an EKG in front of me for a patient in the waiting room, I asked for some history that might help me put the abnormal pattern into some context. Were the abnormal findings new or old? What were the age and gender of the patient? What was the chief complaint? I learned that the patient was an 80-year-old woman who had been called at home and told to go to the ER because of an abnormal test result.
“What was the abnormal test result?” I asked.
“I don’t know. I just do EKGs,” said the technician.
I walked back to our triage area to find the woman. She had long gray hair, dark eyes with black eyeliner, and smooth pale cheeks. She was sitting in a chair in our emergency department hallway and smiled at me when I called out her name. When I asked her about the abnormal test, she had no idea what it was. “A blood test,” she said, as if that should be good enough. As I attempted to get an abbreviated history from her, she explained that she had gone to the clinic that day because her legs had become swollen over the past two weeks to the point that she could no longer walk. She also had been having trouble breathing, and I noticed that she was breathing faster than normal, about 28 times per minute. At the clinic, the provider, a relatively new physician assistant (PA), had given her some kind of inhalation treatment, ordered tests, and sent her home. After she arrived home, a nurse from the clinic called and told her that one of her tests had been abnormal and that she should go to the ER right away.
Fortunately, we have electronic medical records, and I was able to access the clinic note where the patient’s visit was recorded. The discharge diagnosis had been chronic lung disease. The PA had ordered tests to rule out congestive heart failure, and it was the result of one of those tests that had been abnormal—a brain natriuretic peptide higher than I had ever seen. I also found a report of the chest X-ray that showed pulmonary edema. So one could say the system worked well. We had diagnosed the problem by following up on an abnormal test. However, that was not what I concluded.
After I spoke with the woman and examined her, it was clear to me that she had all the signs and symptoms of new-onset congestive heart failure; her cardiac exam was abnormal, with a new loud systolic heart murmur; her legs were swollen; her lungs were congested with fluid. I suspected valvular heart disease causing heart failure. This was not a woman who should have been sent home to await the results of tests. Fortunately, she had suffered no consequences of the initially incorrect clinic diagnosis, but several errors had occurred. After getting her admitted to the cardiology service, I wondered what else I should do: Call the PA? Write a patient safety incident report? Contact the clinic director?
I considered how the error of sending her home might have been made. There were a few findings that could have caused confusion. The woman had chronic foot infections. The leg swelling could have been attributed to the infections. The lungs had wheezes rather than rales. That might have led to a diagnosis of bronchospasm rather than pulmonary edema. She had shown up at the end of the day. That may have caused the visit to be rushed. Her regular doctor was not available, and this PA had never seen her before. That might have caused miscommunication. The more I thought about it, the more I began to think that anyone might have made this error. In addition, the PA was relatively inexperienced and may have seldom seen the signs of congestive heart failure. How would I sort all this out? Did I have time to sort it out? What kind of reaction would the PA have if I called her on the telephone? Would she become angry and defensive? Would she be punished if I called the clinic director? How supportive was the institutional culture for medical error reporting? Since most medical errors are due to systems issues, shouldn’t I focus on the systems problems that led to the error rather than on the individual provider? These questions bring me to the topic of this month’s editorial: medical error and patient safety.
The history of patient safety can be traced back to complications with anesthesia and surgery1,2 that resulted in research to improve care3 and ultimately in improvements in anesthesia safety through standardization, improved monitoring, and better equipment.4 Highly publicized adverse patient events associated with unsupervised, sleep-deprived residents ultimately led to national standards to limit duty hours and increase supervision. The Harvard Medical Practice Study,5 published in 1991, categorized the occurrence of adverse events associated with medical care, demonstrating that such events occur with disturbing frequency. The Institute of Medicine published a report in 1999, To Err Is Human: Building a Safer Health System,6 which quantified the deaths associated with medical error on a national scale and led to engagement of the medical community in the prevention of harm due to medical error. Although the progress on prevention of medical error has been slow, there has been an increasing integration of patient safety into the training of residents and accreditation standards for hospitals.7,8
Much of the work in patient safety has borrowed from the fields of public health and airline safety, which emphasize that systems improvements are typically more effective than education of individuals in changing behaviors and preventing injuries. This is not to say that we should abandon curriculum development or didactic and experiential education of students. We just cannot depend on these as our only approaches. There is too much complexity in health care to expect that health care providers will remember every detail of every procedure or every dosage.
Some of the most effective improvements in patient safety have come from product and system redesign—for example, elimination of certain medication vials that look the same but have different concentrations of drugs, defibrillators that have their critical dials in different places, or development of monitors that warn of a complication of a procedure before injury occurs. Changes in design of equipment that essentially prevent errors from occurring allow providers to concentrate their thinking on difficult diagnostic choices or vital patient care activities. Checklists supplement the human mind by ensuring that rote elements of routine procedures occur the same way every time and that distractions or interruptions that can affect memory will not affect the proper conduct of a procedure. We have learned some of these approaches from the airline industry, which has used error identification, checklists, and equipment standardization to reduce air travel crashes.
Many of you will likely respond that medicine is not like airline travel. Those of us who are physicians do not solve the same medical problems every day; we do not take care of the same patients every day. A treatment that worked perfectly on our last patient may not work on the next one. Or you may say that knowing about systems improvement is fine, but no one will listen to you when you make suggestions for improvement. Or you may feel that, because you were never trained in systems redesign, someone else must be more qualified to address the problem.
Here are my thoughts about these concerns:
Yes, medicine is different from the airlines. It probably is more complex. Our patients and their problems change every day. That makes the risk for our patients greater, and the need to build in safeguards to identify and prevent medical errors greater.
Yes, the culture of medicine has often ignored those on the front lines who had ideas that would make our systems safer. We need to change the culture so that communications about safety and error prevention are encouraged and rewarded. In this issue of Academic Medicine we present four articles that describe different ways of improving the culture of medicine to make it one that encourages communications between providers, both during transitions in care and also when concerns arise about the safety of care; fosters standardized ways of analyzing and responding to problems in care during mortality and morbidity conferences; and facilitates ways to disclose error to patients.
Bowman et al9 suggest that the majority of students in their study would not speak up when witnessing a possible adverse event and were afraid to ask questions if things did not seem right. This is an example of how the culture of medicine must change if we are to get the full benefit of the observations of our students and residents.
Pincavage et al10 discuss graduating residents’ handoffs of their patients to other residents. The standardization and enhancement of the process improved the information transmitted and the follow-up of tests that had been ordered during previous visits.
Mitchell et al11 highlight the opportunity to improve care through the standardization of the morbidity and mortality conference so that recommendations for improvement are routinely built into the discussion.
Stroud et al12 review the literature on error-disclosure education for trainees and the gaps and opportunities to better address this critical area of training.
All these articles suggest changes that are neither expensive nor complicated. A culture that accepts that errors will occur can begin to address them before they happen and communicate about them when they do happen. Those of us who are leaders of academic health centers need to set an example of encouraging and rewarding communications about errors. Leape et al8 identified five key concepts for patient safety: transparency, care integration, patient/consumer engagement, restoration of joy and meaning in work, and medical education reform. I believe that culture is a common factor for all of these concepts and that we should consider how our own institutional cultures either support or undermine these concepts.
As for the woman I described earlier, her case reminds us of the need to improve care through a focus on our systems rather than solely through a focus on the pathophysiology of disease that we teach in medical school. I do not have answers for all the questions I posed, and I continue to struggle with how best to prevent future errors. But I suspect the woman’s case is not unusual. We have the opportunity to ask questions and learn from her case and cases like it. How can we take those lessons and share them in our medical education system? As the students puzzle over the woman’s heart murmur—Is it aortic stenosis or mitral regurgitation?—and the residents puzzle over her echocardiogram—Is the dark shadow a pericardial effusion?—I puzzle over how to bring our conversations back to how we create a culture in which we discuss our errors, learn from them, and design care systems so that when an error occurs, we catch it before it harms a patient.
David P. Sklar, MD
Editor’s Note: The opinions expressed in this editorial do not necessarily reflect the opinions of the Association of American Medical Colleges or of its members.
1. Beecher HK, Todd DP. A study of the deaths associated with anesthesia and surgery: Based on a study of 599,548 anesthesias in ten institutions 1948–1952, inclusive. Ann Surg. 1954;140:2–35
2. Moses LE, Mosteller F. Institutional differences in postoperative death rates. Commentary on some of the findings of the National Halothane Study. JAMA. 1968;203:492–494
3. Cooper JB, Newbower RS, Kitz RJ. An analysis of major errors and equipment failures in anesthesia management: Considerations for prevention and detection. Anesthesiology. 1984;60:34–42
4. Eichhorn JH, Cooper JB, Cullen DJ, Maier WR, Philip JH, Seeman RG. Standards for patient monitoring during anesthesia at Harvard Medical School. JAMA. 1986;256:1017–1020
5. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324:370–376
6. Kohn LT, Corrigan JM, Donaldson MS, eds. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 1999
7. Leape LL, Berwick DM. Five years after To Err Is Human: What have we learned? JAMA. 2005;293:2384–2390
8. Leape L, Berwick D, Clancy C, et al.; Lucian Leape Institute at the National Patient Safety Foundation. Transforming healthcare: A safety imperative. Qual Saf Health Care. 2009;18:424–428
9. Bowman C, Neeman N, Sehgal NL. Enculturation of unsafe attitudes and behaviors: Student perceptions of safety culture. Acad Med. 2013;88:802–810
10. Pincavage AT, Dahlstrom M, Prochaska M, et al. Results of an enhanced clinic handoff and resident education on resident patient ownership and patient safety. Acad Med. 2013;88:795–801
11. Mitchell EL, Lee DY, Arora S, et al. Improving the quality of the surgical morbidity and mortality conference: A prospective intervention study. Acad Med. 2013;88:824–830
12. Stroud L, Wong BM, Hollenberg E, Levinson W. Teaching medical error disclosure to physicians-in-training: A scoping review. Acad Med. 2013;88:884–892