Purpose: Diagnostic errors in medicine are common and costly. Cognitive biases are increasingly recognized as contributors to diagnostic error but remain difficult targets for medical educators and patient safety experts. The authors explored the cognitive and contextual components of diagnostic errors described by internal medicine resident physicians through an educational intervention.
Method: Forty-one internal medicine residents at the University of Pennsylvania participated in an educational intervention in 2010 that comprised reflective writing and facilitated small-group discussion about experiences with diagnostic error arising from cognitive bias. Narratives and discussions were transcribed and analyzed iteratively to identify the types of cognitive bias and contextual factors present.
Results: All residents described a personal experience with a case of diagnostic error that contained at least one cognitive bias and one contextual factor that may have influenced the outcome. The most common cognitive biases identified by the residents were anchoring bias (36; 88%), availability bias (31; 76%), and framing effect (23; 56%). Prominent contextual factors included caring for patients on a subspecialty service (31; 76%), complex illness (26; 63%), and time pressures (22; 54%). Eighty-five percent of residents described at least one strategy to avoid a similar error in the future.
Conclusions: Residents can easily recall diagnostic errors, analyze the errors for cognitive bias, and richly describe their context. Reflective writing and narrative discussion offer an educational strategy for teaching residents to recognize, analyze, and avoid cognitive bias in diagnostic error during residency education.
Dr. Ogdie is instructor of medicine, Division of Rheumatology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania.
Dr. Reilly is assistant professor of medicine, Division of Nephrology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania.
Ms. Pang was research assistant, Division of Rheumatology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania. She is currently a medical student, Royal College of Surgeons, Dublin, Ireland.
Mr. Keddem is manager, Mixed Methods Research Lab, Department of Family Medicine and Community Health, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania.
Dr. Barg is associate professor of family medicine and community health, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania.
Dr. Von Feldt is professor of medicine, Division of Rheumatology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania.
Dr. Myers is associate professor of medicine, Division of General Internal Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania.
Correspondence should be addressed to Dr. Ogdie, 816 Penn Tower, 1 Convention Ave., Philadelphia, PA 19104; telephone: (215) 615-4375; fax: (215) 662-4500; e-mail: email@example.com.
Patient safety has received enormous attention since the 1999 Institute of Medicine report “To Err Is Human.”1 Although the target of most patient safety initiatives is the identification and improvement of poorly designed systems of care, the “human” or cognitive side of patient safety has received much less attention.2 Diagnostic errors are common and costly, constituting 17% of preventable adverse events analyzed in the landmark Harvard Medical Practice Study3 and frequently ranking as the most common type of error in medical malpractice claims.4 Diagnostic errors are sometimes caused by systems errors but are equally likely to be caused by individual cognitive errors.5 Cognitive errors have been described in medicine within the context of clinical reasoning,6 diagnostic error,7,8 and the patient safety movement.9 Whereas systems errors are easily identifiable and actionable, how doctors think remains a more difficult target to understand and influence.
It is difficult to solve the problem of diagnostic error in medicine. Systems improvements, such as the use of information technology to facilitate follow-up and tracking of test results by physicians and patients, have been suggested.10–12 However, education on the cognitive underpinnings of diagnostic error appears equally important.13,14 Although strategies to teach clinical reasoning in the diagnostic process have been described,15 less is known about the inherent thinking patterns that humans use when processing information and optimal methods to teach the recognition and avoidance of cognitive bias. Because clinical reasoning habits are developed during medical school and residency, introduction of these concepts into medical training is critical.
In this study, we sought to capture internal medicine residents’ experiences with diagnostic error caused by cognitive bias, and to identify related contextual factors using reflective writing and narrative discussion as an educational strategy.
Study design and participants
Within a three-part longitudinal curriculum in diagnostic error and cognitive bias, we conducted an educational intervention among all second-year internal medicine residents at the Hospital of the University of Pennsylvania during the 2010–2011 academic year. Part 1 was a one-hour introductory curriculum on cognitive bias and diagnostic error. Part 2 occurred three months later and consisted of a reflective writing and narrative discussion session. Part 3, which occurred six months after Part 2, employed a Web-based curriculum including matching exercises, videos with short-answer questions, and a final, multiple-choice test. In this article, we discuss the results of Part 2 of the longitudinal curriculum. At the beginning of the reflective writing and narrative discussion session, residents were given a 5-minute summary of the prior curriculum and then randomly allocated into groups of four to six, each with a faculty facilitator. During the first 10 minutes, we instructed them to write about a specific case example of diagnostic error with cognitive bias that they had either witnessed or participated in during their training. They were asked to include all circumstances that contributed to the team’s diagnostic decisions and describe what they learned and how this might change their future practice. Narratives were read aloud sequentially by each resident in the group. After each narrative, discussion was facilitated by the group leader. We audiotaped the entire session and transcribed it verbatim. We then used NVivo 9.0 software (QSR International, Cambridge, Massachusetts) for coding and analysis.
Four study investigators (A.O., J.R., J.V., J.M.), two additional internal medicine residency program faculty, and three chief residents served as faculty facilitators. All facilitators participated in a one-hour training session led by an investigator (J.R.) in which the goals of the session and facilitation strategies were described. A guide that contained specific examples of how to mediate and guide conversation around sensitive diagnostic and cognitive errors was distributed during the training session.
The study was approved by the institutional review board at the University of Pennsylvania. We obtained residents’ informed consent prior to their participation.
Development of coding scheme
Prior to the curriculum, we piloted the reflective writing exercise with five internal medicine residents who were not members of the participating class in order to develop and refine the coding scheme. To develop the first version of the coding scheme, we generated a list of known cognitive biases and debiasing strategies (many of which were discussed within the curriculum) based on review of the literature and group discussion.7 Next, we developed inductive codes that expanded the preliminary coding scheme by approaching the text-based sample with three questions: What cognitive biases were identified? In what context did residents report the occurrence of cognitive bias? What debiasing strategies did they discuss? An operational definition for each of the codes was entered into a coding dictionary. The resulting code list included cognitive biases, debiasing strategies, and contextual factors (sorted into patient, system, and provider/team categories). We reviewed an existing taxonomy tool for diagnostic errors, but we did not choose it as the primary framework for the coding scheme because this classification system is framed around the sequential steps in the diagnostic process rather than specific types of cognitive bias and contextual factors.16,17
We performed content analysis on all textual data.18 Two coders (A.O. and W.G.P.) independently identified and highlighted in the small-group transcripts and written narratives every codable unit of text (a statement that conveyed a singular idea) and assigned the codable unit to a code. Interrater reliability was first established through coding the pilot transcript and then checked at the midpoint of coding. Interrater agreement ranged from 80% to 95% in all categories. We resolved discrepancies through discussion between the two coders and a third investigator (J.R.). Simple matrices were generated to characterize which types of biases and debiasing strategies were most commonly associated with specific contexts. We examined the statistical significance of the associations between biases and contextual factors using the chi-square test.
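The simple percent-agreement check described above can be sketched as follows. The coder assignments below are hypothetical, not data from the study; the function merely counts how many codable units received the same code from both coders.

```python
# Minimal sketch of percent interrater agreement between two coders.
# All code assignments here are hypothetical illustration only.

def percent_agreement(codes_a, codes_b):
    """Share of codable units assigned the same code by both coders."""
    if len(codes_a) != len(codes_b):
        raise ValueError("Both coders must rate the same codable units")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Hypothetical code assignments for ten codable units of text
coder_1 = ["anchoring", "availability", "framing", "anchoring", "visceral",
           "anchoring", "framing", "availability", "anchoring", "blind_obedience"]
coder_2 = ["anchoring", "availability", "framing", "availability", "visceral",
           "anchoring", "framing", "availability", "anchoring", "anchoring"]

print(f"{percent_agreement(coder_1, coder_2):.0%}")  # 8 of 10 units match
```

In this hypothetical example, agreement is 80%, the low end of the 80% to 95% range reported above; discrepant units (here, two of ten) would then go to discussion with the third investigator.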
Forty-one second-year internal medicine residents participated in the study. Of these, 22 (53.7%) were male. All three tracks within the internal medicine program were represented: 29 (70.7%) categorical, 8 (19.5%) primary care, and 4 (9.8%) physician scientist pathway.
Group discussion results
All residents reported a case in which they cared for a patient and experienced a diagnostic error or delay in diagnosis due to cognitive bias. Eighty-five percent (35 of 41) provided strategies to prevent a similar cognitive error in the future (debiasing strategies). All residents described stories that occurred in the hospital except for two primary care residents who described outpatient diagnostic errors.
Specific examples of cognitive biases were described in the narratives, and most residents identified more than one type of bias. The most commonly described bias was anchoring (staying locked on to an initial diagnostic impression despite disconfirming evidence), which was mentioned in 36 (87.8%) of the narratives. Anchoring bias is closely related to premature closure of the diagnostic process, defined as accepting the diagnosis before full verification.7 Premature closure by definition occurs in all cognitive-based misdiagnoses and could be directly caused not only by anchoring but also by many of the other biases. Therefore, premature closure was not coded as a bias in order to allow for more specific mechanisms of closure (e.g., anchoring bias, availability bias, framing effect) to be identified. Other commonly described biases are shown in Table 1. Several other biases were mentioned in relatively few cases and are not explicitly described.
Group discussion concentrated on the contextual factors and emphasized how they contributed to the error. Within the narratives, contextual factors were identified and classified into three broad categories: patient factors (15 factors reported in 68% of cases), environment factors (19 factors reported in 81% of cases), and team or provider factors (17 factors reported in 90% of cases) (Table 2). The most commonly reported contextual factor was caring for a patient on a specialty service (e.g., oncology, cardiology, pulmonary, gastroenterology, the medical intensive care unit, geriatrics service) as opposed to a hospitalist/general medicine service. This was reported in 31 of 41 narratives (76%). Of the nonspecialty narratives, 1 described an event on a hospitalist service, 2 were from primary care settings (these were reported by primary care residents), and 1 was from the emergency room. The identity of the clinical service or location could not be ascertained in the remaining 6 narratives. Other commonly reported contextual factors relating to the care team included a lack of interest in the patient’s case, prominent hierarchy within the team, overreliance on consultants, and the resident’s lack of confidence in his or her diagnostic skills.
Environmental or system factors often contributed to the outcome in many of the narratives. These included time/workload imbalance, transfers between hospitals or from the emergency department, poor handoffs, lack of knowledge of clinical guidelines, and lack of access to past medical records. The most common patient factors that contributed to the outcome included a high degree of illness complexity, a vague history from the patient, chronic illness, a “bad reputation” from previous admissions, and a history of narcotic-seeking behavior.
Thirty-five (85%) of the residents discussed actions that could prevent similar errors in the future. Establishing a broader differential diagnosis (15; 36.6%) and being aware of one’s own tendencies or predisposition for bias (15; 36.6%) were most frequently cited, followed by gathering data in a systematic fashion (13; 31.7%). Additionally, seeking other explanations for problems (11; 26.8%), acknowledging one’s “gut feelings” (10; 24.4%) and emotions (9; 22%), and knowing when to “slow down” (9; 22%) were also mentioned as useful debiasing strategies to employ when making decisions.
Finally, we examined intersections among the biases and contextual factors. The relative frequency of cognitive bias reporting in the setting of individual contextual factors was quantified. Among the 31 narratives taking place on a specialty service, anchoring, availability (propensity for looking to the most cognitively “available” diagnoses—e.g., those seen most frequently or recently), and framing effect (allowing the way the story is framed to influence the diagnosis) were still the most commonly reported biases. Blind obedience (showing undue deference to authority or technology) was associated with caring for a patient on a specialty service. This bias was also reported in the setting of “lack of confidence” on the part of the resident making the diagnosis, cases in which consultants were integral members of the decision-making team, and cases in which team hierarchy or an intimidating attending was present. Framing effect was associated with a vague history from a patient, a report of being “too busy,” having too many patients, working overnight or fulfilling the role of “night float,” and the transfer of a patient from one service to another (or from one hospital to another). The unpacking principle (failure to elicit all relevant information in establishing a diagnosis) was often cited in the setting of a chronic illness and when a handoff occurred. Diagnostic momentum (pushing forward diagnoses from previous encounters without evaluation of their accuracy or goodness of fit with current presentation) was also commonly associated with handoffs. Visceral bias (personal feelings toward the patient which influence diagnostic conclusions) was associated with provider fatigue or provider lack of interest in the case. 
Confirmation bias (the tendency to preferentially trust data that support the initial diagnostic impression over data that refute it) was associated with having too many patients or being “too busy.” None of these associations reached statistical significance (all chi-square statistics [df = 1] were < 3.84; P > .05), likely owing to the small sample size.
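For readers unfamiliar with the test, the chi-square comparison used above can be sketched as follows. The counts are hypothetical, not data from the study; with df = 1, a statistic below the 3.84 critical value corresponds to P > .05.

```python
# Hedged sketch of a Pearson chi-square test of independence for a
# 2x2 bias-by-context table. Counts below are hypothetical, chosen
# only to illustrate a nonsignificant result (statistic < 3.84).

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = (a + b, c + d)
    col_totals = (a + c, b + d)
    observed = ((a, b), (c, d))
    stat = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under independence: row total * column total / n
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = bias reported (yes/no),
# columns = contextual factor present (yes/no), across 41 narratives
stat = chi_square_2x2(((10, 8), (12, 11)))
print(f"chi-square = {stat:.2f}, significant at P < .05: {stat > 3.84}")
```

With df = 1 the critical value is 3.84, so a statistic this small would be reported, as above, as P > .05; with only 41 narratives, even moderately different proportions rarely clear that threshold.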
Discussion and Conclusions
By performing qualitative content analysis on a reflective writing and narrative discussion session within our residency curriculum, we characterized the experiences of internal medicine residents with diagnostic error and cognitive bias. Although this was a relatively small study, it is the first study to document and analyze resident physicians’ experiences with diagnostic error and the context in which the error occurred. Two previous studies have evaluated curricula in cognitive bias,19,20 but neither used a narrative approach. Although the residents’ perceptions do not encompass all of the factors that may contribute to diagnostic error, their narratives provide valuable insight into the types of cognitive bias and associated contextual factors experienced frequently by trainees. Similar to previous research on experience with diagnostic errors among physicians after training,16 we found that residents easily recalled cases of cognitive error. Furthermore, they demonstrated thoughtful reflection on both the personal, cognitive influences as well as the external, contextual factors of the cases as they shared their experiences.
Certain cognitive biases were manifested more frequently than others in the resident narratives. Identifying a smaller core list of biases that internal medicine residents are most likely to encounter will assist medical educators in teaching about cognitive bias. The two most frequent biases elucidated from the narratives were anchoring and availability, which were identified in 88% and 76% of narratives, respectively. This finding is consistent with other studies.5 Framing effect is a bias that contributes to error when diagnosticians are strongly influenced by how a problem or diagnosis is framed. This bias was noted in 56.1% of the narratives and is interesting to consider in light of the current resident duty hours restrictions.21 These restrictions have resulted in more resident handovers and, thus, more opportunities to frame a clinical scenario or diagnosis when handing the patient over to the next physician. Because handover training is now recommended for all residency programs,22 we should seize this training opportunity to also teach about the natural tendency for and pitfalls of the framing effect and how to avoid introducing this bias during handovers, particularly in cases of diagnostic uncertainty.
Contextual factors at the level of the patient, the clinical environment, and the health care team influence the likelihood of cognitive error.23 Most of the environmental factors (e.g., time pressures, transfer, and handoffs) and patient-related factors are difficult, if not impossible, to modify in practice. However, it is possible that with instruction, residents can recognize both high-risk contextual factors as well as biased patterns of thought within themselves, in order to influence their decision making. Indeed, how the resident physician thinks about the patient and the diagnostic process is perhaps the only piece of the clinical encounter that is modifiable in real time. During our facilitated discussions, residents suggested ways to reduce bias in their decision making including reflective techniques (e.g., “knowing one’s tendencies”) and recommitting to the fundamentals of history taking and physical examination to generate appropriate differential diagnoses.
Of interest, most of the errors described occurred on specialty services as opposed to general inpatient medicine services. We believe that this finding could reflect the large amount of inpatient time that residents spend on specialty services at our institution. However, there may be factors specific to specialty services that facilitate cognitive bias. For example, specialty attending physicians may be prone to availability bias and generate a more limited differential diagnosis when presented with problems outside of their specialty in the same way that a generalist attending may have availability bias for less common subspecialty conditions. For many complex reasons, specialty physicians may focus only on the aspects of the case that fit within their discipline. Attending physicians with higher levels of clinical expertise for specific medical conditions could also contribute to a team environment with a greater degree of vertical hierarchy, even unintentionally, and therefore reduce the potential for the resident to serve as a cognitive “double check” by questioning the attending. This is important to acknowledge when we identify opportunities for improvement in team-based contextual factors. Finally, it is possible that patients on specialty services in our hospital are more likely to possess some of the patient-related contextual factors, including more complex or chronic illnesses, but, given our knowledge of the complexity on the general medicine services at our institution, this seems less likely.
Although there is tremendous growth in the field of patient safety in medical education, almost all of these new curricula focus on the systems aspects of medical error24,25 as opposed to the cognitive aspects and their contributing factors. This neglect may arise from the relative difficulty in identifying and fixing diagnostic errors, but it may also stem from difficulty confronting and sharing what are perceived to be highly individualized mistakes.26,27 We believe that important components of medical education include individual reflection on one’s own diagnostic errors, increased comfort among trainees and faculty around the topic of diagnostic error, and providing a vocabulary and framework for discussion on why these errors occur.
To our knowledge, the use of narrative discussion as a vehicle for education related to diagnostic error is novel. Mamede and colleagues20,28 have shown that forced diagnostic reflection improved diagnostic accuracy among internal medicine residents in complex cases. The experience of writing facilitates reflection and may tap reservoirs of thought previously inaccessible to the writer.29 The process of reading what one has written in a small group facilitates discussion of (potential) personal failure in a psychologically safe environment. A culture of psychological safety can be fostered by retrospectively reviewing diagnostic errors and delays during morbidity and mortality conferences. This culture should also be cultivated and encouraged prospectively on teaching rounds in order to confront our cognitive bias tendencies in real time. If a teaching rounds environment is created that is “low-risk” and free from hierarchy, the initial diagnostic impression can be continually challenged and reassessed.
The qualitative nature of our study is most powerful in its ability to generate hypotheses, and its descriptive nature places limits on the conclusions we may draw. Care must be taken not to assign ordinal importance to prominent themes, as the cutoff threshold for inclusion was decided retrospectively by the group during analysis. Although the residents described many biases that were not taught in our previous didactic sessions, it is likely that our teaching allowed residents to develop comfort with the biases that we did teach and, therefore, to mention them more frequently in the narrative sessions. Finally, we did not formally assess resident satisfaction at the conclusion of the narrative session. Informally, however, many residents reported that they enjoyed the experience. We plan to continue this activity as part of our patient safety curriculum.
Resident physicians, like all physicians, are prone to the effects of cognitive bias when making diagnoses. We have demonstrated that residents can easily identify these biases in retrospect and are willing to discuss why the error occurred and how to prevent similar errors in the future. We hope that the use of reflective writing and narrative discussion will generate interest among educators as a way to engage faculty and trainees in conversations about diagnostic error and to enhance learning from one another’s mistakes.
Acknowledgments: The authors wish to thank Lisa Bellini, program director for the Internal Medicine Residency Program at the University of Pennsylvania, for supporting this educational research. The authors also thank David Asch and Steve Gluckman for their comments on earlier versions of this manuscript and Judy Shea for her assistance in study design.
Funding/Support: This work was supported by the Sam Martin Education Pilot Award from the Division of General Internal Medicine at the Perelman School of Medicine at the University of Pennsylvania. Dr. Ogdie is supported by the American College of Rheumatology Research and Education Foundation Rheumatology Investigator Award. During this project, Dr. Reilly was supported by NIH Institutional Training Grant T32-DK 07006-37 and the Center for Healthcare Improvement and Patient Safety at the University of Pennsylvania. Dr. Myers is supported in part by a grant from the Josiah Macy Jr. Foundation.
Other disclosures: None.
Ethical approval: This study was approved by the institutional review board of the University of Pennsylvania.
1. Kohn L, Corrigan J, Donaldson M, eds. To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 1999
2. Newman-Toker DE, Pronovost PJ. Diagnostic errors—The next frontier for patient safety. JAMA. 2009;301:1060–1062
3. Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med. 1991;324:377–384
4. Chandra A, Nundy S, Seabury S. The growth of physician medical malpractice payments: Evidence from the National Practitioner Data Bank. Health Aff (Millwood). 2005;24:w5-240–w5-249
5. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165:1493–1499
6. Elstein A. Clinical reasoning in medicine. In: Higgs J, Jones M, eds. Clinical Reasoning in the Health Professions. Woburn, Mass: Butterworth-Heinemann; 1995:49–59
7. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78:775–780
8. Groopman J. How Doctors Think. New York, NY: Houghton Mifflin Company; 2007
9. Wachter RM. Patient safety at ten: Unmistakable progress, troubling gaps. Health Aff (Millwood). 2010;29:165–173
10. Singh H, Graber ML, Kissam SM, et al. System-related interventions to reduce diagnostic errors: A narrative review. BMJ Qual Saf. 2012;21:160–170
11. Miller RA. Computer-assisted diagnostic decision support: History, challenges, and possible paths forward. Adv Health Sci Educ Theory Pract. 2009;14(suppl 1):89–106
12. Hunt DL, Haynes RB, Hanna SE, Smith K. Effects of computer-based clinical decision support systems on physician performance and patient outcomes: A systematic review. JAMA. 1998;280:1339–1346
13. Trowbridge RL. Twelve tips for teaching avoidance of diagnostic errors. Med Teach. 2008;30:496–500
14. Graber ML. Educational strategies to reduce diagnostic error: Can you teach this stuff? Adv Health Sci Educ Theory Pract. 2009;14(suppl 1):63–69
15. Bowen JL. Educational strategies to promote clinical diagnostic reasoning. N Engl J Med. 2006;355:2217–2225
16. Schiff GD, Hasan O, Kim S, et al. Diagnostic error in medicine: Analysis of 583 physician-reported errors. Arch Intern Med. 2009;169:1881–1887
17. Schiff G, Kim S, Abrams R, et al. Diagnosing diagnosis errors: Lessons from a multi-institutional collaborative project. In: Henriksen K, Battles J, Marks E, Lewin D, eds. Advances in Patient Safety: From Research to Implementation. Rockville, Md: Agency for Healthcare Research and Quality; 2004:225–278
18. Krippendorff K. Content Analysis: An Introduction to Its Methodology. 2nd ed. Thousand Oaks, Calif: Sage Publications; 2003
19. Bond WF, Deitrick LM, Arnold DC, et al. Using simulation to instruct emergency medicine residents in cognitive forcing strategies. Acad Med. 2004;79:438–446
20. Mamede S, van Gog T, van den Berge K, et al. Effect of availability bias and reflective reasoning on diagnostic accuracy among internal medicine residents. JAMA. 2010;304:1198–1203
22. Ulmer C, Wolman D, Johns M, eds. Resident Duty Hours: Enhancing Sleep, Supervision, and Safety. Washington, DC: National Academies Press; 2008
23. Eisenberg JM. Sociologic influences on decision-making by clinicians. Ann Intern Med. 1979;90:957–964
24. Wong BM, Etchells EE, Kuper A, Levinson W, Shojania KG. Teaching quality improvement and patient safety to trainees: A systematic review. Acad Med. 2010;85:1425–1439
25. Walton M, Woodward H, Van Staalduinen S, et al.Expert Group convened by the World Alliance of Patient Safety, as Expert Lead for the Sub-Programme. The WHO patient safety curriculum guide for medical schools. Qual Saf Health Care. 2010;19:542–546
26. Graber M. Diagnostic errors in medicine: A case of neglect. Jt Comm J Qual Patient Saf. 2005;31:106–113
27. Myers JS, VonFeldt JM. Diagnostic errors and patient safety. JAMA. 2009;302:258–259
28. Mamede S, Schmidt HG, Penaforte JC. Effects of reflective practice on the accuracy of medical diagnoses. Med Educ. 2008;42:468–475
29. Charon R, Hermann N. Commentary: A sense of story, or why teach reflective writing? Acad Med. 2012;87:5–7