Medical errors have received considerable attention since the publication of the landmark Institute of Medicine (IOM) report “To Err Is Human: Building a Safer Health System” in 2000.1 The report called specific attention to the large societal cost of medical errors. A 2015 follow-up report also from the IOM added that
[t]he delivery of health care has proceeded for decades with a blind spot: Diagnostic errors—inaccurate or delayed diagnoses—persist throughout all settings of care and continue to harm an unacceptable number of patients.2
In this follow-up report, the IOM highlighted the multiple causes of diagnostic error, including the clinician, the family, and the system. However, the description of the process of reasoning directed at making a diagnosis was entirely restricted to the cognitive processes of the clinician. One reason for this focus may be that, at the end of the day, “clinical reasoning occurs within clinicians’ minds.”2 Certainly, research in clinical reasoning, dating back to the first studies in the 1970s and 1980s,3,4 has been dominated by a psychological perspective, exploring the thinking processes of the individual clinician. The theory of clinical reasoning that emerged from these early studies described diagnostic hypotheses that are advanced early in the patient encounter, then subsequently tested through additional data gathering. Subsequent research has confirmed this model. Gruppen and colleagues5 found that primary care physicians had the correct diagnosis based on just the chief complaint in 78% of cases. More recently, Pelaccia and colleagues6 showed that emergency physicians generated 25% of hypotheses before meeting the patient and 75% of hypotheses in the first five minutes of the clinical encounter.
These findings are consistent with the findings from a large body of literature in psychology that describes “dual process models” of thinking.7–13 Although particular theories may differ, a common feature is that thinking involves two systems. The faster system, Type 1, is automatic, unconscious, and seemingly effortless, whereas the slower system, Type 2, is controlled, conscious, and effortful. As Evans and Stanovich13 described it, Type 1 is “intuitive, heuristic,” and Type 2 is “reflective, analytic.”
Such dual process theories have attracted considerable attention as models of clinical reasoning.14–17 The process for generating multiple diagnostic hypotheses does appear to map well to Type 1 intuitive reasoning; conversely, the systematic search for additional information from a history and physical exam and from lab reports and the conscious weighting of that information aligns with the explicit, rational Type 2 thinking.
Many authors have drawn attention to the heuristics associated with Type 1 thinking and to the possibility that the resulting cognitive biases may lead to diagnostic errors.18–20 The 2015 IOM report went into considerable detail about the relationship between heuristics and diagnostic errors:
Heuristics—cognitive strategies or mental shortcuts that are automatically and unconsciously employed—are particularly important for decision making. Heuristics can facilitate decision making but can also lead to errors. When a heuristic fails, it is referred to as a cognitive bias.2
Although this statement appears definitive, much of the evidence of bias in human reasoning has been derived from studies of undergraduate psychology students answering commonsense and expertise-free questions.21 The IOM report extensively cited these studies but did not examine evidence of the role of biases in clinical reasoning.2 As we will demonstrate in this article, the direct evidence of bias in clinical reasoning in medicine is far less definitive.
In this article, we will examine the relative contribution of heuristics and cognitive biases versus that of knowledge deficits in clinical reasoning errors as well as explore the related issue of the role of Type 1 versus Type 2 processes in diagnostic errors. We then will critically examine the effectiveness of interventions to reduce errors based on cognitive biases versus that of interventions to reduce errors based on knowledge deficits.
The Architecture of Memory and Dual Processing
Why does human reasoning rely on rapid intuitive processing and heuristics? The answer lies in the nature of thinking and memory. The mind contains both a working memory of limited capacity in which all computations occur and a long-term, associative memory of essentially limitless capacity, whereby memories are retrieved based on the strength of their association with the new information.22
Type 1 processing can be viewed as making a direct association between new information and a similar example in one’s memory.23 Such memory searches occur thousands of times every hour of every day; every time we interpret sensory information (e.g., this four-legged object is a chair), we are making rapid associations with our memories. These associations are effortless and do not overload our working memory.13 The likelihood of retrieving a similar example is related to the strength of the association, which is influenced by a number of factors, such as the number of times the association has been observed in the past, the number of examples stored in one’s memory, and the number of common features, as well as by extraneous characteristics, such as recency or vividness, that may lead to errors.24
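To make this associative account concrete, the sketch below (a toy illustration in Python, with invented exemplars and an arbitrary weighting scheme, not a validated cognitive model) treats Type 1 retrieval as simply returning the stored exemplar most strongly associated with the new case, where strength reflects feature overlap boosted by frequency and recency:

```python
# Toy sketch of Type 1 (associative) retrieval. Exemplars, features, and
# weights are hypothetical; the point is only that the "diagnosis" falls out
# of a single effortless best-match lookup rather than an explicit computation.
from dataclasses import dataclass

@dataclass
class Exemplar:
    diagnosis: str
    features: set       # features of a previously encountered case
    times_seen: int     # repeated exposure strengthens the association
    recency: float      # 0..1; recent, vivid cases are more "available"

def association_strength(case_features: set, ex: Exemplar) -> float:
    overlap = len(case_features & ex.features)
    return overlap * (1 + 0.1 * ex.times_seen) * (1 + ex.recency)

def type1_retrieve(case_features: set, memory: list) -> str:
    # Retrieval is just the most strongly associated stored example.
    best = max(memory, key=lambda ex: association_strength(case_features, ex))
    return best.diagnosis

memory = [
    Exemplar("pneumonia", {"fever", "cough", "crackles"}, times_seen=40, recency=0.2),
    Exemplar("pulmonary embolism", {"dyspnea", "pleuritic pain", "tachycardia"},
             times_seen=5, recency=0.9),  # a recent, vivid case
]
print(type1_retrieve({"fever", "cough", "dyspnea"}, memory))  # -> "pneumonia"
```

Because frequency and recency enter the strength term, the same mechanism that makes retrieval fast also makes it susceptible to availability effects.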
Conversely, Type 2 processing is based on computations in one’s working memory—for example, identifying the features from a diagnostic category that are present in a case and estimating the likelihood of a particular disease.23 Type 2 processing is, by its nature, abstract and normative—or consistent with logical rules—so it places a heavy burden on one’s working memory. Thus, the possibility arises that computational errors may occur as a result of the increased load on one’s limited working memory.13
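By way of contrast, the following sketch (equally schematic; the disease definitions and prior probabilities are invented for illustration) caricatures Type 2 processing as an explicit computation: each candidate diagnosis is checked, feature by feature, against the case, and a prior is weighted by the proportion of defining features present. Every comparison is a separate step that must be held in working memory, which is the source of the computational load described above.

```python
# Toy sketch of Type 2 (analytic) reasoning: explicit, rule-based scoring of
# every candidate diagnosis. Feature lists and priors are hypothetical.
DISEASES = {
    "pneumonia":          {"prior": 0.30, "features": {"fever", "cough", "crackles", "consolidation"}},
    "pulmonary embolism": {"prior": 0.05, "features": {"dyspnea", "pleuritic pain", "tachycardia", "hypoxia"}},
    "heart failure":      {"prior": 0.20, "features": {"dyspnea", "edema", "orthopnea", "crackles"}},
}

def type2_score(case_features: set, name: str) -> float:
    spec = DISEASES[name]
    matched = len(case_features & spec["features"]) / len(spec["features"])
    return spec["prior"] * matched  # prior weighted by proportion of features present

def type2_diagnose(case_features: set) -> list:
    # Rank every candidate explicitly; each comparison is a deliberate step.
    scores = [(name, round(type2_score(case_features, name), 3)) for name in DISEASES]
    return sorted(scores, key=lambda s: s[1], reverse=True)

print(type2_diagnose({"fever", "cough", "dyspnea", "crackles"}))
```

The contrast with the Type 1 sketch is deliberate: here the answer comes from systematically enumerating and weighing evidence rather than from a single best-match retrieval.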
What is the relationship, then, between these two processes and reasoning errors? One view is that all errors originate from the heuristics that are employed in Type 1 reasoning and not corrected by Type 2 reasoning:
Errors of intuitive judgment involve failures of both systems: System 1, which generated the error, and System 2, which failed to detect and correct it.25
An alternative view is that errors arise from both processes:
Perhaps the most persistent fallacy in the perception of dual-process theories is the idea that Type 1 processes (intuitive, heuristic) are responsible for all bad thinking and that Type 2 processes (reflective, analytic) necessarily lead to correct responses.… So ingrained is this good–bad thinking idea that some dual-process theories have built it into their core terminology.13
How can we reduce errors, then? In the first formulation above, because errors are a consequence of hardwired, cognitive biases in Type 1 processing, the solution is to learn to detect when a bias may arise and then use analytical Type 2 reasoning to correct it:
What can be done about biases?… How can we improve judgments and decisions…? The short answer is that little can be achieved without a considerable investment of effort.… System 1 is not readily educable.26
The way to block errors that originate in System 1 is simple in principle: recognize the signs that you are in a cognitive minefield, slow down, and ask for reinforcement from System 2.26
This view neither acknowledges that errors may arise from knowledge deficits nor suggests that increasing knowledge in a domain (either analytical or experiential) will lead to fewer errors. Instead, the focus is on flaws in the thinking processes that may lead to errors.
Other theorists believe that the two processes reflect different kinds of knowledge. Type 1 processing involves the retrieval of individual experiences through a process of unconscious association, while Type 2 uses “symbolically represented … knowledge.”9 In contrast to a focus on processes, this perspective explicitly examines the kinds of knowledge required by each type of thinking and how that knowledge is learned.
These two formulations about the causes of errors lead to quite different predictions regarding the role of experience and education and strategies to reduce errors. If errors are a consequence of cognitive biases, then:
- There will be no relationship between increasing knowledge and experience and errors.
- Constraints such as speeded tasks or distractions will affect Type 2 analytical processing and lead to more errors.12
- Errors will be corrected by learning to explicitly recognize a cognitive bias and invoke analytical strategies to correct the error.19
Conversely, if errors are a consequence of knowledge deficits, then:
- More experience will lead to greater knowledge, both analytical and experiential, and result in fewer errors.
- To the extent that Type 1 processing underlies expertise, speeded tasks or distractions will have minimal effect on accuracy.
- Errors will be corrected by applying specific knowledge.
We now turn to the evidence from studies of clinical reasoning in medicine to reveal the extent to which errors in clinical reasoning are related to Type 1 or Type 2 processes, are a consequence of cognitive biases or knowledge deficits, and can be corrected by recognition and amelioration of cognitive biases or alternatively by extending knowledge resources.
We acknowledge in advance a limitation of this strategy. To examine the two broad perspectives on the causes of errors—cognitive bias or knowledge deficiency—we have framed our review of the literature as an either/or dichotomy. In fact, it is likely that both deficits contribute to errors. The substantive issue, then, is the relative contribution of each. However, the existing literature is simply insufficient to address the interplay of both cognitive bias and knowledge deficiency.
Evidence From Clinical Reasoning Studies in Medicine
Are errors in clinical reasoning associated with Type 1 processing?
As Evans12 indicated, one kind of evidence that errors arise from the heuristics of Type 1 processing and are corrected by the interventions of Type 2 processing is the relationship between errors and the speed of diagnosis. A longer time to a diagnosis, or instructions to slow down, to be thorough, etc., should permit more use of Type 2 processing and hence produce greater accuracy. This outcome has been observed with decontextualized tasks, but the evidence in medicine is less clear. Sherbino and colleagues27 showed that correct diagnosis was associated with less time spent on a diagnostic task. Other studies showed that when time was manipulated during the experiment and participants were cautioned to “be systematic and thorough” or to “go as fast as you can,” there was no effect on their accuracy.28–30 In another study in which the participants were given the opportunity to revise their initial diagnoses, revisions were associated with longer initial processing times and diagnoses that were more likely to be incorrect.31 All of these studies suggest that increasing time, and thus relying more heavily on Type 2 processing, does not ameliorate errors. However, one recent study showed that severely constraining time does increase novices’ errors.32
According to this evidence from the literature, more processing time is generally associated with more, not fewer, errors. If errors are caused by cognitive biases in Type 1 processing and resolved by Type 2 processing, the reverse would be true. One explanation for this finding is that errors can arise in both systems; for example, both premature closure (which has been identified as the most common contributor to diagnostic error33) and confirmation bias are phenomena that arise during the process of data gathering and synthesis, so they are more likely to be associated with Type 2 processing.34 Moreover, the resolution of errors is not simply a case of exerting additional analytical effort; without sufficient knowledge, additional processing is not likely to be helpful in resolving errors. We will elaborate on the role of knowledge in this process in due course.
Are errors in clinical reasoning caused by cognitive biases?
To date, over 100 cognitive biases have been described in the general literature and at least 38 in the medical literature.19 Given the prominence of writings associating cognitive bias with diagnostic error,14–20,35–38 it is somewhat surprising that there are relatively few empirical demonstrations of cognitive bias in diagnostic reasoning. A recent systematic review identified 213 studies of cognitive bias in health care; however, many examined the therapeutic choices of physicians and patients, not diagnostic reasoning.39 After reviewing this article, we concluded that only 15 of the studies examined the role of cognitive bias in diagnostic error, and only 7 biases were examined.
The evidence of the role of cognitive bias in diagnostic reasoning that does exist is derived from two kinds of studies: (1) experimental studies in which the stimuli are specifically manipulated to illustrate a bias (e.g., by showing participants a case early in the study, then asking them to diagnose a similar case later in the sequence [availability bias]); and (2) retrospective reviews of cases where an error has occurred, to determine the possible cause of the error.
A number of experimental studies have demonstrated various cognitive biases. Using superimposed images of artificial pulmonary nodules, Berbaum and colleagues40 identified examples of “satisfaction of search” bias (equivalent to premature closure), where the radiologist identified a lesion and failed to notice a second lesion. Several other studies have demonstrated “availability” bias, where participants’ recent experience with a similar case but a different diagnosis led to errors. A study of electrocardiogram interpretation, in which participants were shown two cases with similar demographics but different diagnoses, showed that availability bias was sufficient to reduce accuracy substantially.41 In addition, Schmidt, Mamede, and colleagues42,43 conducted two studies in which physicians were first shown a series of cases or a disease description and asked to perform a task requiring detailed inspection of the cases. Subsequently, they were shown a set of new cases, some of which were similar to a previously viewed case but with a different diagnosis; the physicians were more likely to erroneously identify the cases that were similar to those from the first round as having the previous but now incorrect diagnosis. However, availability bias seems to work both ways—two studies in dermatology showed that a prior example from the same category can facilitate accurate diagnosis.44,45
Not all studies were able to induce a cognitive bias. Christensen and colleagues46 examined framing bias in prognostic decisions made by medical students, residents, and physicians. They found minimal evidence of framing bias, which was present in only two of the eight cases used in the study. Weber and colleagues47 found little evidence of base rate neglect among experts. They also found a strong relationship between self-reported experience with similar cases and the likelihood and speed of generating a correct diagnostic hypothesis, which they interpreted as a positive effect of availability.
Thus, although a number of studies that used materials specifically designed to induce bias showed that clinicians can exhibit cognitive biases, particularly availability bias, they provided no insight into the extent to which these biases may arise in practice. Moreover, some studies showed that availability bias may both reduce and improve accuracy.44,45
The second class of studies that examine the role of cognitive bias in diagnostic reasoning consists of retrospective reviews of actual errors. Graber and colleagues33 studied 100 cases of diagnostic error in internal medicine. They found that about 68% of cases were associated with a cognitive bias, primarily premature closure (i.e., terminating the encounter without getting the critical information). However, another study found no evidence of a causal role of cognitive biases.48
Other reasons exist to challenge the assumption that diagnostic errors arise primarily from cognitive biases. A recent study asked experts in diagnostic error to identify cognitive biases in case workups that were chosen because they did not exemplify a particular bias but had two equally probable diagnoses.49 Agreement among the experts on the presence or absence of specific biases was close to zero. Moreover, when the test results revealed that the clinician had chosen the “wrong” diagnosis (i.e., the specific test was normal), the experts identified twice as many biases in an otherwise identical case workup, demonstrating that retrospective review is itself vulnerable to hindsight bias.
If expert reviewers cannot agree on specific biases and are themselves susceptible to hindsight bias, how can teaching definitions of biases lead to error reduction?
Are errors in clinical reasoning caused by knowledge deficits?
Not surprisingly, there is substantial evidence that additional education and knowledge are associated with reduced error rates. Much of this evidence is derived from studies of postgraduate trainees showing that practicing clinicians and/or senior residents have lower error rates than junior residents,41,50,51 although this is not always the case.52 It is less clear from the literature that experience in practice leads to improved performance; most studies show a small negative relationship between diagnostic accuracy and age.53,54
In one study of actual clinical diagnostic performance, Zwaan and colleagues48 conducted a retrospective chart review of successive cases of chronic obstructive pulmonary disease, where errors did and did not occur, thereby avoiding hindsight bias. They found evidence of insufficient knowledge as a basis for “suboptimal clinical acts.”
However, these studies do not identify the kind of knowledge that is related to errors and expertise. A positive association between accuracy and measures of experiential knowledge would suggest that expertise is related to Type 1 processing; an association with measures of formal knowledge would imply that expertise resides in part in a more extensive analytical knowledge base, and thus would contribute to Type 2 processing.
We found evidence for both relationships in the literature. Sherbino and colleagues27 showed moderate positive correlations between accuracy on a series of written cases and two measures of knowledge: the national written licensing examination (analytical knowledge) and self-reported experience with individual cases (experiential knowledge). Weber and colleagues47 also showed that accuracy was strongly related to previous experience with similar cases. Groves and colleagues50 showed that family physicians were less accurate than residents when it came to data gathering and interpretation but more accurate when it came to hypothesis generation and overall accuracy, suggesting that experienced clinicians’ accuracy is derived primarily from generating the right hypothesis—a Type 1 phenomenon.
Strategies for reducing errors in clinical reasoning
Interest in diagnostic errors is stimulated primarily by the assumption that an understanding of the source of these errors will lead to effective interventions to reduce them. At least three classes of interventions have been described in the literature: general strategies, heuristic-based strategies, and knowledge-based strategies.55
General error reduction strategies.
Perhaps the simplest strategy to reduce errors is to admonish clinicians to be careful and systematic and to explore all alternatives, following Kahneman’s26 directive to “slow down and ask for reinforcement from System 2.” A typical research strategy to examine this type of intervention is to use a two-group design in which one group is told to be “careful, systematic, thorough” and the other group is told to “go as quickly as possible,” with a view to modifying the time available for Type 2 processing. Three studies using this strategy showed no difference in diagnostic accuracy.29,30,56 On the other hand, Mamede and colleagues57 used similar strategies with some success. In one study, they found that simply telling participants that faculty found the cases difficult was sufficient to increase their accuracy. A more recent study showed the negative effect of time pressure on the accuracy of novices.32 However, the time pressure in this study was extreme: All participants were repeatedly admonished that they were falling behind. It may be that the intervention also induced anxiety, which has been shown to have a negative impact on diagnostic reasoning.58
Heuristic-based error reduction strategies.
Strategies directed at reducing the effect of cognitive biases are designed to educate participants about possible biases, with the assumption that this awareness will reduce diagnostic errors. Several studies have focused on simply identifying biases. Reilly and colleagues59 implemented a one-year curriculum on cognitive bias for internal medicine residents. The intervention group was better able to define and identify biases on a written test. They also viewed video scenarios and identified an average of 2.56 biases (our calculation) of the 8 present in the videos; control participants were not tested. Bond and colleagues60 taught a course on cognitive biases to emergency medicine residents. Although residents perceived that they had learned about biases, again, the effect of the intervention on diagnostic errors was not tested. Ogdie and colleagues61 implemented a course where residents completed reflective writing and discussion assignments about their experience with cognitive bias and diagnostic error. Although the residents were able to recall an episode where they exhibited a cognitive bias, there was no independent confirmation of that bias and no measurement of its effect on their diagnostic accuracy.
Only three studies examined the effect of an educational intervention designed to teach participants to recognize specific cognitive biases in diagnostic reasoning.62–64 Two of these studies tested an educational intervention on clerkship students in an emergency medicine rotation, followed by a written case-based posttest.62,63 There was no reduction in errors from the intervention. The third of these studies taught debiasing strategies to family medicine residents.64 Preceptor ratings of residents’ ability to recognize biases and of their diagnostic accuracy were unchanged by the workshop. One other study directly compared the effect of a debiasing checklist (with questions like “Did I consider the inherent flaws of heuristic thinking?”) versus the effect of a case-specific checklist of 25 possible differential diagnoses.65 The authors found no significant benefit to the debiasing probe; however, the case-specific checklist resulted in a significant improvement in diagnostic accuracy.
Thus, although a number of studies have demonstrated that residents can learn to define cognitive biases, and there is weak evidence that they may be able to recognize biases in written and video case workups that are designed to illustrate specific biases, surprisingly few studies have examined the link between their ability to identify cognitive biases and a decrease in diagnostic errors. Those that did study this relationship showed no benefit to such interventions.
Knowledge-based error reduction strategies.
Perhaps the most widely studied intervention is deliberate reflection, a technique developed by Mamede, Schmidt, and colleagues.42,43,66–70 Their strategy is based on the assumption that the key to recognizing that a working diagnosis is not correct is identifying evidence that is inconsistent with that diagnosis. This technique requires going back to the case, writing down all of its features, and then identifying any features that are discordant with the working diagnosis. Next, participants identify any other likely hypotheses, examine whether the data support or refute these alternative hypotheses, and finally change their mind if warranted. Although this strategy encourages a corrective action based on analytical (Type 2) thinking, it is clearly focused on identifying the appropriate knowledge, not on identifying the cognitive heuristic at fault, and may best be described as a structured approach to the retrieval and reorganization of diagnostically relevant information.
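As a rough structural illustration of this procedure (our own hypothetical simplification, not the instrument used in these studies), the steps can be captured as a small per-hypothesis worksheet in which the working diagnosis is revised only when an alternative accounts for the findings better:

```python
# Hypothetical sketch of a deliberate-reflection worksheet following the steps
# described above; a bookkeeping aid for illustration, not a validated tool.
from dataclasses import dataclass, field

@dataclass
class HypothesisReview:
    diagnosis: str
    supporting: list = field(default_factory=list)  # case features that fit
    opposing: list = field(default_factory=list)    # discordant features
    missing: list = field(default_factory=list)     # expected findings that are absent

    def score(self) -> int:
        # Crude tally: evidence for, minus evidence against or absent.
        return len(self.supporting) - len(self.opposing) - len(self.missing)

def reflect(working: HypothesisReview, alternatives: list) -> str:
    # First, list the supporting and discordant features of the working diagnosis;
    # then do the same for each alternative, and change the diagnosis only if an
    # alternative accounts for the case better.
    best = max([working, *alternatives], key=lambda h: h.score())
    return best.diagnosis

working = HypothesisReview("community-acquired pneumonia",
                           supporting=["fever", "cough"],
                           opposing=["clear chest x-ray"],
                           missing=["crackles"])
alternative = HypothesisReview("pulmonary embolism",
                               supporting=["pleuritic pain", "tachycardia", "hypoxia"])
print(reflect(working, [alternative]))  # the alternative wins on explicit review
```

The value of the exercise, as the studies below suggest, lies less in the arithmetic than in forcing the retrieval and reorganization of the relevant knowledge.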
One study contrasted the effects of reflection and those of undirected reasoning on diagnosing simple versus complex cases.69 The positive effect of reflection was primarily noted in diagnosing complex cases. Another study was designed to induce availability bias and then to determine whether reflection would reduce its effect.43 Participants first evaluated control cases, then they saw new cases, some of which were similar to the control cases but with a different diagnosis. They were instructed to use nonanalytic reasoning to review the new cases. Finally, they reviewed the new cases a second time using structured reflection. This reflection resulted in a consistent improvement in diagnostic accuracy. A third study contrasted a conscious thought condition designed to “induce an elaborate analysis of case information” using the methods described earlier with “deliberation without attention” and “immediate decision” conditions.52 For residents solving simple cases, the authors found no difference in their diagnostic accuracy; for those solving complex cases, the results of the conscious thought condition were superior. Conversely, the intervention benefited medical students solving simple cases but not those solving complex cases. This finding suggests that the intervention mobilized analytical knowledge, which for novices resulted in improvement in solving simple cases and for more advanced clinicians was effective for solving complex cases.
In a study of salient distracting features (i.e., features deliberately included to distract the clinician from the correct diagnosis), residents encouraged to use reflective reasoning were more accurate than when they were told to use exemplar-based reasoning.68 However, the authors found no difference for medical students. Again, this finding may reflect residents’ more extensive analytical knowledge.
Two studies showed that additional time for reflection resulted in a large increase in diagnostic accuracy, among both residents and experienced physicians.71,72 However, the reflection phase also involved the presentation of additional clinical data, which may be a major contributor to participants’ increased accuracy.
Some studies have found negative results for such interventions as well. A large study29 with participants from three levels (medical students, residents, and faculty from emergency medicine and internal medicine) used cases drawn from the same case bank employed by Mamede and Schmidt. Participants were instructed to either “trust familiarity” or use a “directed search” when solving the cases. The directed search condition, using an approach similar but not identical to the Mamede studies, was designed to elicit additional diagnoses and information about the relationship between case features and diagnoses. The authors found no overall effect of this strategy on participants’ accuracy, possibly because of details in the application of the intervention, which did not explicitly require that participants consider case features that supported alternative hypotheses.
From these studies, we conclude that reflection is fairly consistently beneficial, although the level of the learners and the difficulty of the cases do affect its impact. The benefit appears to be a consequence of the availability of appropriate knowledge to resolve the problem; to the extent that reflection is effective, it achieves these results by encouraging participants to identify and reconfigure their knowledge. This strategy is ineffective for junior clinicians solving complex cases because they simply do not have sufficient knowledge to resolve the case from the start. Conversely, experts do not benefit from reflection in solving simple cases, because they can solve such problems easily from the start.
There is one further consideration regarding these interventions. The studies we reviewed above are based on experimenter-induced reflection, often with selected cases that are manipulated to be misleading. Further, the reflection is usually required for all cases as a second pass, with the original written case available. For reflection to be useful in practice, the clinician must, as Kahneman26 says, “recognize [she is] in a minefield” and initiate additional review when she believes she has committed an error, without a case description to which to return. The question then is: Can clinicians recognize that a situation is problematic and correct the error on their own?
The answer appears to be “a bit.” A large study by Friedman and colleagues73 involving practicing internists, residents, and medical students showed that participants’ self-assessed confidence was higher for cases they diagnosed correctly than for those they diagnosed incorrectly. Increasing expertise led to greater accuracy and confidence, although the authors identified many instances of both over- and underconfidence. Mamede and colleagues57 showed that participants who saw an ambiguous version of a case were slower and recalled more information than those who saw a straightforward version, suggesting that participants had some awareness of case difficulty.
More recently, Monteiro and colleagues31 conducted a study similar in design to those described above, in which residents first diagnosed cases as quickly as possible and then, instead of reviewing all the cases again, were given the option of reviewing them. Participants initially processed cases with errors more slowly and were more likely to choose them for additional review. However, although half the cases contained errors, only 8% of these resulted in any change in diagnosis, which led to a very small increase in overall accuracy.
In conclusion, the evidence suggests that interventions directed at error reduction through the identification of heuristics and biases have no effect on diagnostic errors. By contrast, a number of studies using various strategies that encourage clinicians to mobilize and reorganize their knowledge or to reflect on the content of the case showed some benefit, which is presumably a consequence of directing participants to identify additional knowledge that is relevant to the problem.
Conclusions
In this review, we have examined the literature in psychology and medicine that is related to dual process models of reasoning, and we have identified two issues arising from this literature: (1) whether errors in clinical reasoning arise primarily from one processing strategy (Type 1), as suggested by many authors, and (2) whether these errors are a consequence of cognitive biases or knowledge deficits. We have also reviewed error reduction strategies based on these two approaches.
The evidence we found in the literature appears consistent. The theoretical position that errors in clinical reasoning arise primarily from Type 1 processing and are corrected by Type 2 processing is simplistic. Errors can arise from both kinds of processing. As to what causes errors and what can be done to reduce them, the literature to date aligns with Graber and colleagues’55 conclusion that there is “a major discrepancy between the breadth and enthusiasm for these interventions … but a paucity of actual interventions [and] very limited evidence addressing diagnostic accuracy or errors.” Nevertheless, some conclusions are possible from the studies we reviewed. First, the assumption that most errors are a consequence of cognitive biases and could be reduced by training physicians to recognize biases is not borne out by the evidence. Similarly, general admonitions to slow down, reflect, or be careful and systematic likely have minimal effect beyond slowing the diagnostic process. By contrast, knowledge deficits are a significant contributor to diagnostic error, and strategies to induce some reorganization of knowledge appear to have small but consistent benefits.
While we have examined various educational strategies directed at error reduction at an individual level, other approaches at different levels also have been suggested. Group decision making is one such possibility that has been studied using simulated trauma cases.74 Computer decision support systems have a long history; however, the benefit of such systems to physicians is not large,75 and, as we discussed earlier, physicians often are not aware of their errors, so they do not seek out decision support.73
One thing is clear, though. However attractive the assumption that diagnostic errors originate in cognitive biases, and however appealing the implication that relatively simple and quick strategies directed at identifying and eliminating biases can reduce errors, the evidence consistently demonstrates that such strategies have limited or no effectiveness.76 Knowledge matters. Even if some proportion of errors arise from cognitive biases, the resolution of errors also involves the application of clinical knowledge, a deficit of which may underlie the initial mistake.77 If there is a science of error reduction, it is in its infancy, and we have far to go.
Finally, we must caution that, while there is some uncertainty in the actual rate of diagnostic errors, there is far greater uncertainty in the extent to which these errors are preventable. Ambiguity is a constant in clinical practice; it is inevitable that some errors will arise simply because there is insufficient information to make a definitive diagnosis. The assumption that a magic bullet will emerge to eliminate all errors is likely nothing more than wishful thinking.
References
1. Kohn LT, Corrigan JM, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 2000.
2. Balogh EP, Miller BT, Ball JR. Improving Diagnosis in Health Care. Washington, DC: National Academies Press; 2015.
3. Elstein AS, Shulman LS, Sprafka SA. Medical Problem Solving: An Analysis of Clinical Reasoning. Cambridge, MA: Harvard University Press; 1978.
4. Barrows HS, Norman GR, Neufeld VR, Feightner JW. The clinical reasoning of randomly selected physicians in general medical practice. Clin Invest Med. 1982;5:49–55.
5. Gruppen LD, Woolliscroft JO, Wolf FM. The contribution of different components of the clinical encounter in generating and eliminating diagnostic hypotheses. Res Med Educ. 1988;27:242–247.
6. Pelaccia T, Tardif J, Triby E, et al. How and when do expert emergency physicians generate and evaluate diagnostic hypotheses? A qualitative study using head-mounted video cued-recall interviews. Ann Emerg Med. 2014;64:575–585.
7. Evans KK, Georgian-Smith D, Tambouret R, Birdwell RL, Wolfe JM. The gist of the abnormal: Above-chance medical decision making in the blink of an eye. Psychon Bull Rev. 2013;20:1170–1175.
8. Evans JS. Dual-processing accounts of reasoning, judgment, and social cognition. Annu Rev Psychol. 2008;59:255–278.
9. Sloman SA. The empirical case for two systems of reasoning. Psychol Bull. 1996;119:3–22.
10. Smith ER, DeCoster J. Dual process models in social and cognitive psychology: Conceptual integration and links to underlying memory systems. Pers Soc Psychol Rev. 2000;4:108–131.
11. Evans JS. The heuristic-analytic theory of reasoning: Extension and evaluation. Psychon Bull Rev. 2006;13:378–395.
12. Evans JS. In two minds: Dual-process accounts of reasoning. Trends Cogn Sci. 2003;7:454–459.
13. Evans JS, Stanovich KE. Dual-process theories of higher cognition: Advancing the debate. Perspect Psychol Sci. 2013;8:223–241.
14. Croskerry P. Clinical cognition and diagnostic error: Applications of a dual process model of reasoning. Adv Health Sci Educ Theory Pract. 2009;14(suppl 1):27–35.
15. Monteiro SM, Norman G. Diagnostic reasoning: Where we’ve been, where we’re going. Teach Learn Med. 2013;25(suppl 1):S26–S32.
16. Norman G. Dual processing and diagnostic errors. Adv Health Sci Educ Theory Pract. 2009;14(suppl 1):37–49.
17. Croskerry P. A universal model of diagnostic reasoning. Acad Med. 2009;84:1022–1028.
18. Redelmeier DA. Cognitive psychology and medical judgment: Some downfalls of studying pitfalls. Med Decis Making. 1991;11:169–170.
19. Elstein AS. Thinking about diagnostic thinking: A 30-year perspective. Adv Health Sci Educ Theory Pract. 2009;14(suppl 1):7–18.
20. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78:775–780.
21. Lopes LL. The rhetoric of irrationality. Theory Psychol. 1991;1:65–82.
22. Mayer RE. Applying the science of learning to medical education. Med Educ. 2010;44:543–549.
23. Logan GD. Toward an instance theory of automatization. Psychol Rev. 1988;95:492–527.
24. Shiffrin RM, Schneider W. Controlled and automatic human information processing: II. Perceptual learning, automatic attending and a general theory. Psychol Rev. 1977;84:127–190.
25. Kahneman D, Frederick S. Representativeness revisited: Attribute substitution in intuitive judgment. In: Gilovich T, Griffin DW, Kahneman D, eds. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge, UK: Cambridge University Press; 2002.
26. Kahneman D. Thinking, Fast and Slow. New York, NY: MacMillan; 2011.
27. Sherbino J, Dore KL, Wood TJ, et al. The relationship between response time and diagnostic accuracy. Acad Med. 2012;87:785–791.
28. Ilgen JS, Bowen JL, Yarris LM, Fu R, Lowe RA, Eva K. Adjusting our lens: Can developmental differences in diagnostic reasoning be harnessed to improve health professional and trainee assessment? Acad Emerg Med. 2011;18(suppl 2):S79–S86.
29. Ilgen JS, Bowen JL, McIntyre LA, et al. Comparing diagnostic performance and the utility of clinical vignette-based assessment under testing conditions designed to encourage either automatic or analytic thought. Acad Med. 2013;88:1545–1551.
30. Norman G, Sherbino J, Dore K, et al. The etiology of diagnostic errors: A controlled trial of system 1 versus system 2 reasoning. Acad Med. 2014;89:277–284.
31. Monteiro SD, Sherbino J, Patel A, Mazzetti I, Norman GR, Howey E. Reflecting on diagnostic errors: Taking a second look is not enough. J Gen Intern Med. 2015;30:1270–1274.
32. ALQahtani DA, Rotgans JI, Mamede S, et al. Does time pressure have a negative effect on diagnostic accuracy? Acad Med. 2016;91:710–716.
33. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165:1493–1499.
34. Stiegler MP, Gaba DM. Decision-making and cognitive strategies. Simul Healthc. 2015;10:133–138.
35. Klein JG. Five pitfalls in decisions about diagnosis and prescribing. BMJ. 2005;330:781–783.
36. Redelmeier DA, Ferris LE, Tu JV, Hux JE, Schull MJ. Problems for clinical judgement: Introducing cognitive psychology as one more basic science. CMAJ. 2001;164:358–360.
37. Elstein AS. Heuristics and biases: Selected errors in clinical reasoning. Acad Med. 1999;74:791–794.
38. Croskerry P, Singhal G, Mamede S. Cognitive debiasing 1: Origins of bias and theory of debiasing. BMJ Qual Saf. 2013;22:ii58–ii64.
39. Blumenthal-Barby JS, Krieger H. Cognitive biases and heuristics in medical decision making: A critical review using a systematic search strategy. Med Decis Making. 2015;35:539–557.
40. Berbaum KS, Schartz KM, Caldwell RT, et al. Satisfaction of search from detection of pulmonary nodules in computed tomography of the chest. Acad Radiol. 2013;20:194–201.
41. Hatala R, Norman GR, Brooks LR. Impact of a clinical scenario on accuracy of electrocardiogram interpretation. J Gen Intern Med. 1999;14:126–129.
42. Schmidt HG, Mamede S, van den Berge K, van Gog T, van Saase JL, Rikers RM. Exposure to media information about a disease can cause doctors to misdiagnose similar-looking clinical cases. Acad Med. 2014;89:285–291.
43. Mamede S, van Gog T, van den Berge K, et al. Effect of availability bias and reflective reasoning on diagnostic accuracy among internal medicine residents. JAMA. 2010;304:1198–1203.
44. Allen SW, Brooks LR, Norman GR, Rosenthal D. Effect of prior examples on rule-based diagnostic performance. Res Med Educ. 1988;27:9–14.
45. Brooks LR, Norman GR, Allen SW. Role of specific similarity in a medical diagnostic task. J Exp Psychol Gen. 1991;120:278–287.
46. Christensen C, Heckerling P, Mackesy-Amiti ME, Bernstein LM, Elstein AS. Pervasiveness of framing effects among physicians and medical students. J Behav Decis Making. 1995;8:169–180.
47. Weber EU, Böckenholt U, Hilton DJ, Wallace B. Determinants of diagnostic hypothesis generation: Effects of information, base rates, and experience. J Exp Psychol Learn Mem Cogn. 1993;19:1151–1164.
48. Zwaan L, Thijs A, Wagner C, van der Wal G, Timmermans DR. Relating faults in diagnostic reasoning with diagnostic errors and patient harm. Acad Med. 2012;87:149–156.
49. Zwaan L, Monteiro S, Sherbino J, Ilgen J, Howey B, Norman G. Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups [published online January 29, 2016]. BMJ Qual Saf. doi: 10.1136/bmjqs-2015-005014.
50. Groves M, O’Rourke P, Alexander H. Clinical reasoning: The relative contribution of identification, interpretation and hypothesis errors to misdiagnosis. Med Teach. 2003;25:621–625.
51. Norman GR, Rosenthal D, Brooks LR, Allen SW, Muzzin LJ. The development of expertise in dermatology. Arch Dermatol. 1989;125:1063–1068.
52. Mamede S, Schmidt HG, Rikers RM, Custers EJ, Splinter TA, van Saase JL. Conscious thought beats deliberation without attention in diagnostic decision-making: At least when you are an expert. Psychol Res. 2010;74:586–592.
53. Norcini JJ, Lipner RS, Kimball HR. Certifying examination performance and patient outcomes following acute myocardial infarction. Med Educ. 2002;36:853–859.
54. St-Onge C, Landry M, Xhignesse M, et al. Age-related decline and diagnostic performance of more and less prevalent clinical cases. Adv Health Sci Educ Theory Pract. 2016;21:561–570.
55. Graber ML, Kissam S, Payne VL, et al. Cognitive interventions to reduce diagnostic error: A narrative review. BMJ Qual Saf. 2012;21:535–557.
56. Monteiro SD, Sherbino JD, Ilgen JS, et al. Disrupting diagnostic reasoning: Do interruptions, instructions, and experience affect the diagnostic accuracy and response time of residents and emergency physicians? Acad Med. 2015;90:511–517.
57. Mamede S, Schmidt HG, Rikers RM, Penaforte JC, Coelho-Filho JM. Influence of perceived difficulty of cases on physicians’ diagnostic reasoning. Acad Med. 2008;83:1210–1216.
58. Fraser K, Ma I, Teteris E, Baxter H, Wright B, McLaughlin K. Emotion, cognitive load and learning outcomes during simulation training. Med Educ. 2012;46:1055–1062.
59. Reilly JB, Ogdie AR, Von Feldt JM, Myers JS. Teaching about how doctors think: A longitudinal curriculum in cognitive bias and diagnostic error for residents. BMJ Qual Saf. 2013;22:1044–1050.
60. Bond WF, Deitrick LM, Arnold DC, et al. Using simulation to instruct emergency medicine residents in cognitive forcing strategies. Acad Med. 2004;79:438–446.
61. Ogdie AR, Reilly JB, Pang WG, et al. Seen through their eyes: Residents’ reflections on the cognitive and contextual components of diagnostic errors in medicine. Acad Med. 2012;87:1361–1367.
62. Sherbino J, Yip S, Dore KL, Siu E, Norman GR. The effectiveness of cognitive forcing strategies to decrease diagnostic error: An exploratory study. Teach Learn Med. 2011;23:78–84.
63. Sherbino J, Kulasegaram K, Howey E, Norman G. Ineffectiveness of cognitive forcing strategies to reduce biases in diagnostic reasoning: A controlled trial. CJEM. 2014;16:34–40.
64. Smith BW, Slack MB. The effect of cognitive debiasing training among family medicine residents. Diagnosis. 2015;2:117–121.
65. Shimizu T, Matsumoto K, Tokuda Y. Effects of the use of differential diagnosis checklist and general de-biasing checklist on diagnostic performance in comparison to intuitive diagnosis. Med Teach. 2013;35:e1218–e1229.
66. Mamede S, Schmidt HG, Penaforte JC. Effects of reflective practice on the accuracy of medical diagnoses. Med Educ. 2008;42:468–475.
67. Mamede S, Schmidt HG. Reflection in diagnostic reasoning: What really matters? Acad Med. 2014;89:959–960.
68. Mamede S, van Gog T, van den Berge K, van Saase JL, Schmidt HG. Why do doctors make mistakes? A study of the role of salient distracting clinical features. Acad Med. 2014;89:114–120.
69. Mamede S, Schmidt HG. The structure of reflective practice in medicine. Med Educ. 2004;38:1302–1308.
70. Mamede S, Splinter TA, van Gog T, Rikers RM, Schmidt HG. Exploring the role of salient distracting clinical features in the emergence of diagnostic errors and the mechanisms through which reflection counteracts mistakes. BMJ Qual Saf. 2012;21:295–300.
71. Coderre S, Wright B, McLaughlin K. To think is good: Querying an initial hypothesis reduces diagnostic error in medical students. Acad Med. 2010;85:1125–1129.
72. Bass A, Geddes C, Wright B, Coderre S, Rikers R, McLaughlin K. Experienced physicians benefit from analyzing initial diagnostic hypotheses. Can Med Educ J. 2013;4:e7–e15.
73. Friedman CP, Gatti GG, Franz TM, et al. Do physicians know when their diagnoses are correct? Implications for decision support and error reduction. J Gen Intern Med. 2005;20:334–339.
74. Murray DJ, Freeman BD, Boulet JR, Woodhouse J, Fehr JJ, Klingensmith ME. Decision making in trauma settings: Simulation to improve diagnostic skills. Simul Healthc. 2015;10:139–145.
75. Friedman CP, Elstein AS, Wolf FM, et al. Enhancement of clinicians’ diagnostic reasoning by computer-based consultation: A multisite study of 2 systems. JAMA. 1999;282:1851–1856.
76. Croskerry P. When I say… cognitive debiasing. Med Educ. 2015;49:656–657.
77. Dhaliwal G. Premature closure? Not so fast [published online March 15, 2016]. BMJ Qual Saf. doi: 10.1136/bmjqs-2016-005267.