Dr. Norman is professor, Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada.
Ms. Monteiro is a PhD candidate, Department of Psychology, McMaster University, Hamilton, Ontario, Canada.
Dr. Sherbino is associate professor, Department of Medicine, McMaster University, Hamilton, Ontario, Canada.
Editor’s Note: This is a commentary on Custers EJFM. Medical education and cognitive continuum theory: An alternative perspective on medical problem solving and clinical reasoning.
Correspondence should be addressed to Dr. Norman, McMaster University, MDCL 3519, 1200 Main St. W., Hamilton ON L8N 3Z5, Canada; e-mail: firstname.lastname@example.org.
In this issue, Custers1 addresses the plethora of recent articles that examine dual processing theories of clinical reasoning.2 He proposes an alternative, cognitive continuum theory, first proposed by Hammond.3 In advancing this theory, he takes a position similar to more recent thinking by Croskerry.4
The Challenges of a Dual Process Model
Although our own work has advanced dual process models of clinical reasoning,5 we share some of Custers’ concerns about the robustness of these models. On close reading, much of the literature, in fact, offers little evidence to support the more radical claims of dual processing theory. In particular, the position of Kahneman6 and Croskerry,2 which proposes that all reasoning errors derive from uncorrected cognitive biases originating in System 1 (the intuitive system) and are corrected by System 2 (the analytical system), is based on minimal evidence. Kahneman’s repeated demonstration that the human information processing system relies heavily on heuristics that have the potential, under some circumstances, to lead to incorrect conclusions does not, of itself, provide any evidence that these heuristics originate in System 1.
Kahneman takes pains early in his book6 to explain that this dual system view is simply a metaphor and has no psychological reality. But he then proceeds to describe one bias after another that is blamed on System 1. The approach rapidly degenerates into a tautology: the theory says that all errors result from System 1 reasoning, and because his experimental manipulations, which expose the shortfalls of otherwise adequate heuristics, show how easy it is to induce reasoning errors, System 1 reasoning must therefore lead to errors.
Examining the Arguments Against a Dual Process Model
Custers correctly challenges this simplistic view. However, we find the evidence he offers opposing dual process models somewhat imprecise, and more philosophical than empirical. For example, he introduces the “homunculus” problem and questions who or what controls the two processes. But is that an issue? By analogy, working memory is a well-established and thoroughly investigated aspect of cognition and is proposed to contain a number of components—a phonological loop, a visuospatial sketchpad, an episodic buffer, and a central executive.7 There is no particular problem with the notion of a controller. And while it is proposed that System 1 is not under conscious control, no such automaticity applies to “System 2” (the analytical system of reasoning). Although we accept that the conditions under which a person chooses to revert to analytical thinking are not well known, that does not mean that they are in principle unknowable.
Custers advances the argument that since the human body is one integrated organism, how could we have two mind systems? But is this a problem? We have multiple physiological control systems (e.g., the parasympathetic and sympathetic divisions of the autonomic nervous system) that are both distinct and integrated. Why can we not allow for two processing modes? Finally, Custers claims that it is not always the case that System 1 processes are rapid and System 2 slow. Of course, any measure of processing time is inherently highly variable, but it is not unreasonable to presume that an automatic system (System 1) is faster than an analytical system (System 2).
The Evidence for a Dual Process Model
Before we dismiss dual processing models out of hand, we might well examine more critically the evidence for these models. There are various kinds of evidence from anecdotal to physiological to anatomical. Anecdotally, we all have the experience that when we know the solution to a problem, it arises very quickly. It feels like the answer is “right there.” Moreover, we can often just as quickly know that we don’t know the answer. All this is verified by studies conducted by Eva and Regehr8 in which correct answers to multiple-choice questions arose quickly, and longer response times were associated with wrong answers. Conversely, some problems require more thought and much more effort. Thus, at one level, the System 1/System 2 dichotomy may be nothing more than an issue of recall of a solution versus reasoning it out (i.e., thinking).
However, some evidence does suggest there is more to the decision-making process. The past few years have seen a convergence between researchers exploring dual process models of thinking and a second tradition of research in psychology related to concept formation. Custers has described differences in the research focus of these two schools of thought. Concept formation researchers are more concerned with what is retrieved than with the process of how it is retrieved. In particular, exemplar models of concept formation propose that natural categories—bird, tree, mountain—are identified by a nonanalytical process of association with similar prior exemplars in memory. That is, we declare that an object is a chair because, at an unconscious level, we have retrieved from memory a similar prior example of the category “chair.” Exemplar-based reasoning is well documented in everyday cognition, and it is also evident in medical diagnosis. For example, Hatala and colleagues9 showed that residents who were shown a series of review electrocardiograms (ECGs), each with a brief noncontributory history (a 45-year-old banker with chest pain), were strongly influenced by the prior exemplar (i.e., history) when given a new ECG of a different condition (left bundle branch block versus anterior myocardial infarction). That is, matching on “45-year-old banker” reduced accuracy on the new ECG by half. The phenomenon has been replicated in studies by Mamede and colleagues.10,11 The critical point is that the errors that arise would not occur if the problem-solver were analytically decomposing the problem into relevant features; hence, there must be a System 1 component involving retrieval of prior exemplars.* Note, however, that the fact that experimenters were able to induce errors like those originating in System 1 does not imply that all, or even most, errors originate in System 1.
Another line of evidence associates System 1 and System 2 with different anatomical structures and physiological mechanisms. Lieberman and colleagues12 showed, using functional MRI, that the two processes can be localized in different parts of the prefrontal cortex. More recently, Bos and colleagues13 conducted an experiment in which one group of participants drank regular soda and a second drank diet soda. After some delay, both were tested with a task involving System 1 reasoning and another that used System 2 reasoning. The group that ingested glucose did better on the System 2 task but worse on the System 1 task. Analytical reasoning literally takes more metabolic energy.
Thus, there is some evidence for the existence of separable systems. Custers believes the case for two systems is weak and, citing Hammond,3 proposes an alternative, in which reasoning strategies lie on a continuum with intuition at one end, analysis at the other, and “quasirationality” in the middle. But is it the case that these two systems cannot interact and proceed concurrently? Put another way, are we left with a choice between solving a problem by System 1 or by System 2? Or can we rely on both kinds of knowledge in solving a problem?
Alternatives to Consider
As Custers points out, the radical dual process model espoused by Kahneman places the two processes in opposition and presumes that a task can be neatly characterized as System 1 or System 2. Moreover, in his zeal to show how error-prone System 1 is, Kahneman has focused his research program entirely on devising situations in which intuition and common sense will fail, which leaves unanswered any question regarding the relative effectiveness of System 1 or System 2.
An alternative view, proposed by Jacoby more than 20 years ago,14 takes a different position. He states that “problems interpreting task dissociations have arisen from equating particular processes with particular tasks and then treating those tasks as if they provide pure measures of those processes.” Jacoby presumes that the two processes act in concert and shows that it is possible to identify the contributions of System 1 (automatic) and System 2 (intentional) processes to a particular task. He does this through the use of a “process dissociation” framework in which, under one condition, an interference task is used to load working memory and, hence, impede System 2 thinking. More recently, Ambady15 used a variant of this strategy to show that ratings of teacher performance based on rapid “thin-slice” samples, which rely more on System 1 thinking, were more accurate than judgments based on System 2 thinking.
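The arithmetic behind Jacoby’s process dissociation framework can be illustrated with a small sketch. Under the standard assumptions, responding in the “inclusion” condition reflects either controlled (System 2) retrieval or, failing that, automatic (System 1) influence, whereas responding in the “exclusion” condition reflects automatic influence operating without controlled opposition. The proportions below are hypothetical and for illustration only, not data from the studies cited here.

```python
def process_dissociation(p_inclusion, p_exclusion):
    """Estimate controlled (C) and automatic (A) contributions from
    response proportions in Jacoby's two conditions, assuming:
        P(inclusion) = C + A * (1 - C)
        P(exclusion) = A * (1 - C)
    """
    # Controlled contribution: the difference between conditions.
    c = p_inclusion - p_exclusion
    # Automatic contribution: exclusion responses occur only when
    # automatic retrieval operates in the absence of control.
    a = p_exclusion / (1 - c)
    return c, a

# Hypothetical proportions: 80% responding under inclusion,
# 30% under exclusion.
c, a = process_dissociation(0.80, 0.30)
print(c, a)  # controlled = 0.5, automatic = 0.6
```

The point of the sketch is simply that a single task can yield separate estimates for both processes, rather than being treated as a pure measure of one or the other.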
The critical implication of these studies is not that one system is superior or inferior to the other. Rather, they support Custers’ point that optimal performance is more likely to arise from matching the task characteristics to the processing strategy and the problem-solver’s knowledge base. However, these studies also show that it is not necessary to hypothesize a variety of processing strategies; the diversity of outcomes can be accommodated by an additive model that presumes that both Systems 1 and 2 can be used in concert during problem solving and that the relative contribution of each is under conscious control.
Finding the Right Approach for the Right Experience
The distinction between additive dual process and cognitive continuum models is likely of little more than academic interest. We have already seen that the evidence for either model is far from unequivocal. We presume that it would be difficult indeed to devise experiments that would differentiate between a continuum and an additive model. However, both models clearly conflict with a “horse race” model in which the two systems are placed in opposition and a task is presumed to be solved by either one system or the other.
However, there remains one critical distinction between this version of dual process theory and Hammond’s cognitive continuum theory. Hammond presumes that the type of processing can be determined entirely from task characteristics (not surprising, as much of his research was based on regression modeling of decision situations and the Brunswik lens model). As Custers states, “tasks that include a relatively small number of cues induce processing close to the analytical pole.” But a much more plausible position is that the optimal processing is an interaction between the task and the knowledge base of the problem-solver. The first time you encounter 17 × 17, it is an analytical task. But given the question an hour later, you solve the task accurately and quickly by simple retrieval. Custers does acknowledge the role of expertise in eventually improving holistic and “intuitive” judgments, but then advocates that “students should be discouraged from making intuitive judgments … and not be rewarded for producing single ‘correct’ diagnoses.”1 In fact, the evidence suggests otherwise; Ark and colleagues16 showed that students do better when encouraged to use intuition.
Regardless of whether one accepts a continuum or an additive dual process model, this perspective leads to a very different research agenda. It is no longer useful to determine whether System 1 leads to more or fewer errors than System 2 (or to assume that all errors originate in System 1 and are corrected by System 2). Indeed, several studies we recently completed, using large samples of cases and clinicians of all experience levels, show that instructions to go fast or slow (and therefore to alter the contribution of one strategy or the other) consistently yield no advantage in diagnostic accuracy for System 2 thinking. Rather, it is more profitable to investigate the circumstances, accounting for both task characteristics and expertise, that are most favorable to a predominantly experience-based solution, and those in which analytical reasoning is an asset. More critically, the central question is, in our view, just what are the conditions under which the diagnostician decides to “slow down”17 and think more analytically?
Funding/Support: The authors acknowledge the support of the Medical Council of Canada in funding some of the original research on which this commentary is based.
Other disclosures: None.
Ethical approval: Not applicable.
* In these studies, the prior exemplar (and, by implication, System 1 reasoning) led to greater error rates. Like the Kahneman studies, these were designed to induce System 1 errors.
1. Custers EJFM. Medical education and cognitive continuum theory: An alternative perspective on medical problem solving and clinical reasoning. Acad Med. 2013;88:1074–1080
2. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78:775–780
3. Hammond KR. Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice. Oxford, UK: Oxford University Press; 1996
4. Croskerry P. A universal model of diagnostic reasoning. Acad Med. 2009;84:1022–1028
5. Norman G. Dual processing and diagnostic errors. Adv Health Sci Educ Theory Pract. 2009;14(suppl 1):37–49
6. Kahneman D. Thinking, Fast and Slow. London, UK: Allen Lane/Penguin Books; 2011:19–30
7. Baddeley AD, Hitch G. Working memory. In: Bower GH, ed. The Psychology of Learning and Motivation: Advances in Research and Theory. Vol 8. New York, NY: Academic Press; 1974:47–89
8. Eva KW, Regehr G. Exploring the divergence between self-assessment and self-monitoring. Adv Health Sci Educ Theory Pract. 2011;16:311–329
9. Hatala R, Norman GR, Brooks LR. Impact of a clinical scenario on accuracy of electrocardiogram interpretation. J Gen Intern Med. 1999;14:126–129
10. Mamede S, Schmidt HG, Rikers RM, Custers EJ, Splinter TA, van Saase JL. Conscious thought beats deliberation without attention in diagnostic decision-making: At least when you are an expert. Psychol Res. 2010;74:586–592
11. Mamede S, van Gog T, van den Berge K, et al. Effect of availability bias and reflective reasoning on diagnostic accuracy among internal medicine residents. JAMA. 2010;304:1198–1203
12. Lieberman MD, Jarcho JM, Satpute AB. Evidence-based and intuition-based self-knowledge: An FMRI study. J Pers Soc Psychol. 2004;87:421–435
13. Bos MW, Dijksterhuis A, van Baaren R. Food for thought? Trust your unconscious when energy is low. J Neurosci Psychol Econ. 2012;5:124–130
14. Jacoby LL. A process dissociation framework: Separating automatic from intentional uses of memory. J Mem Lang. 1991;30:513–541
15. Ambady N. The perils of pondering: Intuition and thin slice judgments. Psychol Inq. 2010;21:271–278
16. Ark TK, Brooks LR, Eva KW. Giving learners the best of both worlds: Do clinical teachers need to guard against teaching pattern recognition to novices? Acad Med. 2006;81:405–409
17. Moulton CA, Regehr G, Mylopoulos M, MacRae HM. Slowing down when you should: A new model of expert judgment. Acad Med. 2007;82(10 suppl):S109–S116