Academic Medicine, August 2013 – Volume 88 – Issue 8
doi: 10.1097/ACM.0b013e31829a3b10
Perspectives

Medical Education and Cognitive Continuum Theory: An Alternative Perspective on Medical Problem Solving and Clinical Reasoning

Custers, Eugène J.F.M. PhD


Author Information

Dr. Custers is a medical education researcher, Center for Research and Development of Education, University Medical Center, Utrecht, The Netherlands.

Editor’s Note: A commentary by G. Norman, S. Monteiro, and J. Sherbino appears on page 1058.

Correspondence should be addressed to Dr. Custers, PO Box 85500, 3508 GA, Utrecht, The Netherlands; e-mail: ecusters@umcutrecht.nl.


Abstract

Recently, human reasoning, problem solving, and decision making have been viewed as products of two separate systems: “System 1,” the unconscious, intuitive, or nonanalytic system, and “System 2,” the conscious, analytic, or reflective system. This view has penetrated the medical education literature, yet the idea of two independent dichotomous cognitive systems is not entirely without problems.

This article outlines the difficulties of this “two-system view” and presents an alternative, developed by K.R. Hammond and colleagues, called cognitive continuum theory (CCT). CCT rests on three key assumptions. First, human reasoning, problem solving, and decision making can be arranged on a cognitive continuum, with pure intuition at one end, pure analysis at the other, and a large middle ground called “quasirationality.” Second, the nature and requirements of the cognitive task, as perceived by the person performing the task, determine to a large extent whether a task will be approached more intuitively or more analytically. Third, for optimal task performance, this approach needs to match the cognitive properties and requirements of the task. Finally, the author makes a case that CCT is better able than a two-system view to describe medical problem solving and clinical reasoning and that it provides clear clues for how to organize training in clinical reasoning.

Recent literature from psychology and behavioral science presents the human cognitive architecture that provides for reasoning, problem solving, and decision making as consisting of two independent systems or modes of processing, designated as “System 1” and “System 2.” System 1 is variously described as unconscious, automatic, intuitive, rapid, holistic, parallel, tacit, or a combination of these features; System 2 as conscious, deliberate, slow, analytic, reflective, or controlled.1–4 System 1 is viewed as an evolutionarily old system, involved mostly in everyday problem solving and decision making, whereas System 2 is of more recent origin, more uniquely human, and more geared toward solving formal and scientific problems. The idea that analysis and intuition are distinct processes resonates with most people’s everyday experiences, and this notion has been discussed by philosophers and scientists for at least two millennia.5

Recently, this “two-system view” has appeared in the medical education literature as well.6–11 The claim is that clinical problem solving can occur either analytically, that is, in a process of feature-by-feature comparison of patient findings with probable diseases, or nonanalytically (intuitively or holistically), for which different conceptions are offered: first impressions, scripts, schemas, memories of previous patients, gut feelings, etc. To be applicable to the medical education domain, however, one must assume that in practice the two systems operate in concert and that little, if any, problem solving is performed by either system in isolation. Yet because authors usually do not specify in detail how the two systems interact, the two-system view does not allow predictions of the results on concrete tasks or problems.

The purpose of this article is to argue that our conception of clinical reasoning and problem solving can be advanced by replacing the two-system view with the idea of a cognitive continuum. First, I examine the idea that the concept of two dichotomous systems cannot adequately explain a number of empirical phenomena. Next, I describe cognitive continuum theory (CCT) as an alternative to the two-system view and show how it can be applied to clinical reasoning. Finally, I outline some implications for medical education.


Some Problems With the Two-System View

Some actions may seem to be decidedly either “intuitive” or “conscious.” On closer inspection, however, a strict dichotomy may not stand up to more stringent tests. First, at a fundamental level, the age-old “homunculus problem” crops up immediately when it comes to the coordination or cooperation of System 1 and System 2: Who controls the coordination? In other words, if System 1 and System 2 simultaneously produce responses (e.g., problem solutions or choices) that are incompatible, which system prevails? The trap here is to implicitly assume a third system, the homunculus, which makes the final decision. To avoid this trap, a resolution mechanism must be proposed which is not a system in itself. Alternatively, as Kahneman4 does, it can be assumed that System 2 dominates System 1 in the sense that it is able to overrule System 1 responses, but often fails to do so for lack of effort. This solution avoids the homunculus trap, but it is also not without problems—for example, how to deal with possible errors generated by System 2.

Second, the human body has evolved as one integrated organism, and though this does not fundamentally preclude the existence of “two minds,” it requires relatively strong evidence to defend this view. At a more practical level, a large body of evidence shows that people use a wide range of processes in everyday problem solving, reasoning, and decision making, processes which cannot be fully captured by either System 1 or 2.12–14 For example, many forms of adaptive behavior, such as “satisficing”—seeking an acceptable, but not necessarily the best, solution for a complex problem—are not fully analytical, but also not fully intuitive.15 This applies even more to the ubiquitous form of reasoning and judgment that we know as “common sense.”16 Further complicating the dichotomous view, the binary features that are used to characterize System 1 and System 2 cannot be unequivocally mapped onto the two systems. For example, computer programs are completely analytical devices, yet there is no evidence that computers are conscious. Similarly, automatic processes cannot be uniquely identified with short response times.17

Third, people have the ability to deliberately apply analytic processing to questions that require intuitive cognition to be answered (e.g., “Which of your colleagues do you like most?”). Doing so demonstrates that the key features of System 1 (unconsciousness, intuition, speed) and their System 2 counterparts (consciousness, analysis, slowness) do not always go together, which raises doubts about the validity of the distinction.

Finally, the two-system view is difficult to align with some major theories about the architecture of human cognition, such as ACT (adaptive control of thought)18 and SOAR (a unified theory of cognition developed by Allen Newell19), which describe cognition in much more molecular terms, leaving little room for the concept of intuition. Consequently, several attempts have been made to save a two-system cognitive architecture by showing how two separate systems can produce a range of different processes.1,9,20 CCT goes one step further by assuming an underlying continuum rather than two separate systems.


CCT

Intuition and analysis are poles on a continuum

The first of CCT’s three important claims16,21 is that intuition and analysis are not two distinct processing modes or systems but, rather, represent the poles (extremes) on a continuum. At the intuitive pole, we find phenomena such as pattern recognition. Simon22 even claims pattern recognition and intuition are identical. Pure intuitive cognition is rapid and unconscious and usually associated with a feeling of conviction, often implicitly (i.e., its outcome is simply not questioned). Hammond23 calls intuition “unjustified cognition,” for the intermediate steps in an intuitive problem-solving process cannot be identified, let alone retraced. Consequently, intuition is characterized by high confidence in outcome but low confidence in method.21

At its opposite, the analytical pole, we find algorithms, such as calculations or mathematical formulae. Analytical cognition in humans is characterized by slow, step-by-step, and effortful processing; only small amounts of information can be processed simultaneously. Unlike intuition, analytic cognition is characterized by high confidence in method but low confidence in outcome: Mistakes are easily made, but if the appropriate procedure is correctly applied (which is by no means a trivial assumption), the result is beyond questioning. Two criteria determine the extent to which a reasoning or judgment process is analytical: first, retraceability (every step in the reasoning process can be identified and retraced), and second, justification (every step in the process can be justified, or defended). To the extent that retraceability and justification can be considered a matter of degree, the whole problem-solving or decision process can be mapped onto a continuum with a large middle ground, called “quasirationality.”16 At the analytical pole of this continuum, cognition is characterized by full retraceability and justification of every individual step in the reasoning process (as in the application of an algorithm); at the intuitive pole, retraceability and justification are nil. In fact, it can even be argued that full intuition is not a process at all, only an outcome (“It just came to me”).23

All problem solving other than “pure” intuition or “pure” analysis can be described as quasirational. The notion of quasirationality easily transfers to the domain of medicine: Most clinical and medical problem solving occurs somewhere between pure intuition and pure analysis. For example, the common description of the diagnostic process as “hypothesis generation and testing” implies that both more intuitive (hypothesis generation) and more analytical (hypothesis verification) cognition are involved.24,25

Although it may be tempting to assume that quasirationality is the outcome of an interaction or cooperation between System 1 and System 2,26 it is not clear in the two-system view how this interaction could be brought about. It should also be noted that dual-system theorists do not assume that the two systems interact; rather, Kahneman, in particular, goes to great lengths to show that cognitive errors occur because System 1 and System 2 provide different, often incompatible or inconsistent, responses to a single problem or question.

Though carefully designed, studies on human reasoning have yet to demonstrate their relevance for actual practice, such as clinical problem solving. For example, the experimental conditions often prevent participants from taking an analytical approach: They either do not know which algorithm to apply or, if they do, are prone to make mistakes because time is restricted and facilities, such as paper and pencil, are not provided. If participants apply heuristics, such as the “availability heuristic,” which delivers a response based on the ease with which an example comes to mind, the study is designed in such a way that the solution based on the heuristic is incorrect. In practice, however, including clinical practice, the availability heuristic may more often be helpful, because diagnoses that are easily retrieved from memory may also be more common, and the corresponding diseases may be seen more frequently. In a strict dual-process model, heuristics, as intuitive rules of thumb, are part of System 1’s cognitive repertoire; in practice, however, they can also be used in a more deliberate and rule-based way.27 According to CCT, heuristics are a form of quasirational cognition—more analytic if the rules of thumb can be retraced and justified by evidence, more intuitive if they are used implicitly (e.g., if they are embedded in an individual’s personal experience).
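To make this point concrete, here is a toy Python sketch (the disease names and case counts are invented for illustration) of why availability can be ecologically valid: When ease of retrieval tracks how often a clinician has actually encountered each disease, the most “available” diagnosis is also the most prevalent one.

```python
# Toy sketch, not from the article: availability as an ecologically valid cue.
# The disease names and case counts below are invented for illustration.
from collections import Counter

# A clinician's memory, stood in for by a frequency count of past cases.
cases_seen = Counter({"viral_URI": 120, "pneumonia": 15, "pulmonary_embolism": 2})

def most_available(memory: Counter) -> str:
    # The diagnosis retrieved most easily is the one encountered most often,
    # which in an unbiased case mix is also the most prevalent one.
    return memory.most_common(1)[0][0]

print(most_available(cases_seen))  # -> 'viral_URI'
```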

The cognitive continuum includes tasks as well as processes

If two distinct systems do not exist, then how does CCT predict or explain a person’s mode of cognition in solving a particular problem or making a judgment? According to CCT, the mode of cognition is to a large extent determined by the nature of the task. More specifically, CCT’s second important claim is that not only cognitive processes but also cognitive tasks can be mapped onto the cognitive continuum. Hammond et al21 outline a number of task features that jointly determine the position of a task on the cognitive continuum. Table 1 shows these features.

Table 1. Task features that determine the position of a task on the cognitive continuum.

Tasks induce processing close to the intuitive pole of the continuum when they contain a large number of features that are perceptual in nature, contribute equally to the solution, are presented simultaneously and under time pressure, are unreliably related to the solution, and may be redundant (i.e., they may overlap). In contrast, tasks induce processing at the analytic pole when they contain few features that can be objectively and reliably measured (e.g., quantitative values), are not redundant (i.e., they cannot be substituted for each other), may have to be combined nonlinearly according to an organizing principle, and are presented sequentially and without time pressure (see Table 1).
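To illustrate how such features might be combined, the following minimal Python sketch scores a task by the fraction of intuition-inducing features it exhibits. The equal weighting and the feature names are illustrative assumptions paraphrasing Table 1, not Hammond and colleagues’ own formal index.

```python
# Minimal sketch, assuming equal weights; the feature names paraphrase Table 1.
INTUITION_INDUCING = {
    "many_cues",             # large number of features to process
    "perceptual_cues",       # features perceived rather than measured
    "redundant_cues",        # features overlap / can substitute for each other
    "unreliable_cues",       # features unreliably related to the solution
    "simultaneous_display",  # features presented all at once
    "time_pressure",         # little time available
}

def continuum_position(task_features: set) -> float:
    """0.0 = analytical pole, 1.0 = intuitive pole: the fraction of
    intuition-inducing features the task exhibits."""
    return len(task_features & INTUITION_INDUCING) / len(INTUITION_INDUCING)

# A short table of reliable laboratory values, examined without time pressure:
print(continuum_position(set()))  # 0.0 -> induces analysis
# A patient presenting many overlapping perceptual findings under time pressure:
print(continuum_position({"many_cues", "perceptual_cues", "redundant_cues",
                          "simultaneous_display", "time_pressure"}))  # ~0.83
```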

To the extent that clinical problems vary on these aspects, they also vary with respect to their position on this cognitive task continuum. A clinical problem that consists of only a few laboratory values, presented in a table, for example, will typically induce analytical processing: The values are usually reliable, can be processed sequentially, and need to be individually weighted and assessed.28 An important implication is that if individual task features change, the task moves toward either the analytical or intuitive pole, depending on the direction of the change. For example, if the time available to solve a particular clinical problem is restricted, a more intuitive cognitive mode will be induced.

The cognitive approach must be appropriate for the task

CCT’s third claim is that, for optimal performance, the cognitive approach the problem solver adopts must be appropriate for the properties of the task.21 Thus, CCT favors neither analysis (as do, for instance, Hastie and Dawes13 and Zarin and Pauker29) nor intuition (as do, for instance, Gladwell30 and Dreyfus and Dreyfus31). Depending on the nature of the task, some tasks can best be approached more analytically, and others more intuitively. A mismatch between task properties and cognitive approach occurs if the problem solver uses an analytic approach on a task that requires a more intuitive approach, and vice versa.

This claim certainly applies to the task features presented in Table 1, but another important issue is the criterion (i.e., the gold standard) against which a solution will be judged. If, for example, the joint decision of an expert panel is the standard, and some or all of the experts use a more intuitive (i.e., quasirational) approach to arrive at the solution, then an individual who works in a fully analytical mode can never outperform the expert panel, even if the task features make analysis the most appropriate mode. If, on the other hand, the criterion is determined in a fully analytical way (e.g., by a calculation or a set of explicit statistical rules), then an individual trying to solve this problem more intuitively can never outperform the rule-based solution. Of course, estimating the outcome of a complex calculation can never yield a better solution than actually performing the calculation. Similarly, with a statistical rule, optimal performance can only be measured over a range of problems (or, in medicine, cases), and in this respect the rule outperforms every more intuitive solution. Statistical rules, however, do not necessarily yield a better solution than a more intuitive approach on any individual problem; this is why problem solvers believe they can intuitively improve on such rules. This belief, however, is not justified.32
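A small simulation makes this “statistical sense” concrete. In the Python sketch below (the linear criterion, the weights, and the noise level are all illustrative assumptions), a slightly miscalibrated but perfectly consistent rule is pitted against a judge who knows the correct weights but applies them inconsistently; the rule lands closer to the criterion in most cases, yet the judge still wins often enough to sustain the belief that intuition can improve on the rule.

```python
# Illustrative simulation (assumed linear criterion and Gaussian judgment
# noise): a consistent rule beats an inconsistent judge across many cases,
# though not on every individual case.
import random

random.seed(1)

def true_risk(x1: float, x2: float) -> float:
    # The criterion: a linear combination unknown to the rule.
    return 0.6 * x1 + 0.4 * x2

N = 1000
rule_wins = 0
for _ in range(N):
    x1, x2 = random.random(), random.random()
    target = true_risk(x1, x2)
    rule = 0.5 * x1 + 0.5 * x2              # miscalibrated but perfectly consistent
    judge = target + random.gauss(0, 0.15)  # right weights, inconsistent execution
    if abs(rule - target) < abs(judge - target):
        rule_wins += 1

# Typically well above 50%, yet the judge still wins a substantial minority
# of individual cases, which fuels the belief that one can improve on the rule.
print(f"rule closer to the criterion in {rule_wins / N:.0%} of cases")
```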

It is my view that this explicit connection between specific task features and processing mode enables CCT to provide better explanations for problem-solving performance than views that assume the superiority of one system over the other, irrespective of the task features.33–35 Moreover, this framework can also accommodate differences between experts and novices. Experts’ experience, or “educated intuition,”36 enables them to perceive clusters of features holistically and, hence, approach the diagnostic task in a more intuitive way, either by pure intuition (direct pattern recognition) or by a form of quasirationality close to the intuitive pole (e.g., by generating a likely hypothesis which needs some further confirmation). Novices, who lack this experience, cannot do this and have to revert to a much more analytical approach, which does not match the task requirements, because there are too many features to deal with in the limited amount of time available, and there is no “formula” to calculate an outcome. Alternatively, they might make a guess at the solution. CCT would predict that cases with few features which do not fit into a neat pattern would be relatively hard for experts but relatively easy for novices because these cases can only be solved analytically.

Application of CCT to Medical Diagnosis and Decision Making
The analytical side of clinical problem solving

Although research strongly suggests that experts primarily employ educated intuition (pattern recognition, illness scripts, or memories of previous patients), clinical problem solving is often explicitly described as an analytical task.10,37–42 Over the years, many attempts have been made to “mechanize” the diagnostic process and to formulate systems of diagnostic rules to assist practitioners.43 In the early 1980s, these efforts culminated in the construction of automated diagnostic systems.44,45 Yet, except for some highly limited domains, these expert systems have not lived up to expectations.

Following this view of clinical problem solving as an analytical task, a number of practically oriented analytical approaches to diagnostic problem solving have been described in the literature.46–50 These approaches, too, are of limited applicability, probably because they instruct students exactly what to do (e.g., group findings under preliminary headings) but not how to do it. In other words, the educational possibilities of strong analytic problem solving appear to be limited. On the other hand, the fact that human experts usually approach medical diagnosis in a much more intuitive way does not imply that it could not be done more analytically. For example, descriptions of expert diagnostic reasoning as based on nonanalytic prototypes or memories of individual patients can often be expressed analytically as well. That is, the features of a current case can be identified, weighted, and compared with analytic descriptions to arrive at a diagnosis.51,52 Furthermore, explicit analytic diagnostic procedures can be found in domains like psychiatry, where many conditions are defined in terms of to-be-met criteria, of the type “at least two symptoms in group A and three in group B.” CCT predicts that if diagnosis is basically an analytical task, rule-based systems will perform at least as well as human diagnosticians, if not better, for they are less liable to make mistakes in analytical procedures. Indeed, some classic studies have reported that in the domain of psychiatry, such systems can outperform human diagnostic experts, at least in the statistical sense described above.53
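Criteria of this count-based type are easy to express as a fully retraceable procedure. The Python sketch below is hypothetical (the symptom groups and thresholds are invented, not drawn from any diagnostic manual); it only shows that every step, counting the matches per group and comparing each count with its threshold, can be identified and justified.

```python
# Hypothetical criteria-based check of the form "at least two symptoms in
# group A and three in group B". Groups and thresholds are invented.
SYMPTOM_GROUPS = {
    "A": {"low_mood", "anhedonia", "hopelessness"},
    "B": {"insomnia", "fatigue", "poor_appetite", "agitation",
          "poor_concentration"},
}
THRESHOLDS = {"A": 2, "B": 3}  # minimum symptom count per group

def meets_criteria(patient_symptoms: set) -> bool:
    # Every step is retraceable and justifiable: count the patient's
    # symptoms in each group, then compare each count with its threshold.
    return all(
        len(patient_symptoms & SYMPTOM_GROUPS[g]) >= THRESHOLDS[g]
        for g in SYMPTOM_GROUPS
    )

print(meets_criteria({"low_mood", "anhedonia", "insomnia",
                      "fatigue", "poor_concentration"}))  # True
```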

Finally, analytical problem solving may be inevitable when no likely diagnostic hypothesis “pops up” in the mind of the diagnostician and a differential diagnosis might be as far as one can get. Novices, in particular, will often be in this position, as will experienced physicians facing cases in which they cannot confidently identify a single diagnosis—for example, a patient with suspected comorbidity. This situation may be similar to what Patel and Groen54 call “loose ends” (i.e., unexplained findings that remain after an initial diagnostic hypothesis has been generated).

The intuitive side of clinical problem solving

Intuition has always played an important role in clinical problem solving, and “every medical school has its folklore about the wondrous intuitive diagnostic powers attributed to certain physicians and surgeons.”16(p74) According to CCT, diagnosis will usually be largely intuitive because the properties of most diagnostic problems match those of tasks that induce a more intuitive approach (see Table 1).

A good example of “pure intuition” is pattern recognition.22 Clinicians who have experience with a particular disease may immediately recognize the diagnosis without being able to explain how they do this.55 Such recognition is associated with a strong feeling of confidence. In domains outside medicine, it has been demonstrated that experts’ “snap judgments” are usually accurate.30 In some settings, such as the emergency department, where postponement of treatment may not be an option, snap judgments may be the default process. Yet, quick recognition of a clinical picture is not entirely without risk. No matter how many typical signs and symptoms are present, there is always the possibility of erroneous recognition. In this respect, the expert’s rare recognition error is not a failure of intuition but, rather, a consequence of the way the process of recognition operates.56

Why most clinical problem solving is a quasirational process

CCT is not unique in its belief that in most cases, diagnostic problem solving will be quasirational because it occurs somewhere between full analysis and full intuition.10,41,56,57 Yet, in dual-system approaches, clinical problem solving is not connected to the features of the diagnostic task but, rather, interpreted in terms of one system—usually System 2—correcting or modifying the output of the other (e.g., “slowing down when you should”).10 On the other hand, the potential errors of System 2—mistakes in selecting or carrying out an analytic procedure, such as reasoning errors—are virtually ignored. Croskerry,58 for example, only mentions equipment failure in this respect, whereas Kahneman59 limits the discussion to a two-line footnote.

In my view, to advance understanding of clinical problem solving, the process should be discussed in terms of the match between the features of the task and the approach the clinician takes, whether more intuitive or more analytical, rather than in terms of “systems” in the human mind that are responsible for correct or erroneous problem solving. Most clinical problems are solved by a quasirational approach, involving hunches that need to be checked, partial solutions, possibilities that need to be ruled out, and implicit or explicit heuristics.16,60 Finally, according to Hammond,16(p174) there is also a more fundamental reason for medical problem solving to be quasirational: Physicians face the difficult task of applying their scientific, analytically derived knowledge to a specific patient here and now, which “requires a mix of both analytic reasoning and intuitive judgment in the face of uncertainty that cannot be reduced.”

Implications for Medical Education

The assumption that human cognition should be conceived as a continuum seems to hold more promise for elucidating clinical problem solving than the “dichotomous systems view.”61 It is worth emphasizing that a description of clinical problem solving as the result of two interacting systems is not necessarily incorrect, but it gives few clues to predict what will happen when a student encounters a practical clinical problem, or what the best approach to teaching clinical problem solving will be. I believe that CCT fares a little better, in the sense that it can be used to give recommendations about how to structure clinical tasks in an educational context.

In line with theories in cognitive psychology,19,62,63 it is often assumed that a medical student’s intuition develops as a consequence of repeated analytical reasoning.41,57 Yet, novices in clinical diagnosis are faced with a dilemma: Their intuition does not provide an acceptable diagnostic hypothesis, often not even a hunch, while on the other hand they do not know exactly how to proceed analytically. In such cases, some form of quasirationality will be demanded—for example, taking a subset of findings and searching for a common cause, or venturing a specific diagnosis and seeing whether it can be corroborated. CCT suggests that novices are best served with clinical tasks that cover a wide range of the continuum, and that the aim of training should be to align students’ approaches to the nature of the task. Students should learn to be more analytic on tasks that have the features of analytical tasks (as outlined in Table 1) and more intuitive on tasks that have the opposite features.

Teaching or training analytical problem solving in a clinical context should emphasize how to construct appropriate differential diagnoses on the basis of relatively few, but salient, clinical findings. “Authentic cases” may not be the first choice here because they are often complex, with too many findings to be amenable to a more analytical approach. Problem solving might benefit from small-group learning, with sufficient time available to put forward arguments in favor of or against a particular solution and to correct flawed reasoning. Students learning to solve problems that induce analysis should be discouraged from making intuitive judgments that go beyond the known facts of a problem, and they should not be rewarded for producing single, “correct” diagnoses. Rather, students should be taught to justify all of the steps they take in the diagnostic process. As the cases are not authentic and the emphasis is on the clinical reasoning process, the teacher or supervisor should not have more information about a case than the students do. Students should be instructed to use the findings in the case to reason toward possible diagnoses rather than to defend or attack an intuitive “diagnostic guess.” This form of clinical training closely resembles what Ericsson64 calls “deliberate practice.”

In contrast, tasks that induce a more intuitive approach, such as real patients with many task features (complaints, symptoms, findings on examination, etc.), require a rather different form of training. To develop students’ clinical intuition, training should emphasize exposing students, in a relatively short time span, to a number of patients with contrasting conditions who present with a multitude of symptoms and findings, as suggested by Norman and Brooks.52(p182) Students are not expected to benefit from a thorough analysis of such cases, for this is not the optimal way to develop a clinical view. Rather, they should be trained to get a “feeling” for how a patient with a particular disease looks and how this picture can be differentiated from that of a different disease with a similar appearance. An important feature of teaching clinical problem solving with tasks of this type is immediate feedback, for this is the only way to correct intuitive errors.

The general purpose of all clinical training is to extend the range on the continuum where students or novice practitioners can apply a form of cognition that matches the demands of the task. By repeatedly analyzing cases with relatively few features, students may eventually intuitively recognize groups of features that are caused by the same disease. That is, the analysis becomes “frozen into habit.”63 In contrast, by repeatedly approaching complex cases in a nonanalytical way, students may gradually become sensitive to subtle differences between clinical pictures, a skill that cannot be developed by analytical problem solving but which lies at the heart of clinical intuition.

Finding Compromise on the Cognitive Continuum

According to Hammond, “of all the disciplines of applied science, medicine can claim to have the most difficult task of separating or integrating intuition and analysis.”16(p74) In other words, good medical practice is all about finding a proper balance between the art of medicine (intuition) and the science of medicine (analysis). The concept of quasirationality suggests that a perfect balance will be forever beyond reach and that medical practice always involves some form of compromise. Analytic problem solving, for example, is precise and justifiable, but it is also slow and fragile (i.e., error-prone), at least when performed by humans. On the other hand, intuitive problem solving is quick, robust, and flexible but also imprecise and irretraceable. Although experts’ intuitive errors are rare, they occur unpredictably and are often difficult to detect, for a fully intuitive response is subjectively very convincing. In other words, intuition cannot correct its own errors.23 Performing an analytic check, even a partial one, on every intuitive response is not feasible, if only because almost all of an expert’s intuitive responses will be correct or at least within the limits of tolerance,65 which implies that almost all of these checks are a waste of time, energy, and money. Therefore, “reflection in action”40,42,66 cannot be consistently put into practice.

In sum, clinical problem solving is hardly ever fully intuitive or fully analytical, but it is almost invariably quasirational. Depending on the features of the clinical task, and also on the experience of the clinician, the processing mode will be more intuitive or more analytical, and changing the task will move this processing in the direction of either the intuitive or the analytical pole on the cognitive continuum. Further exploring the relationship between specific task features and processing mode and possible changes over the educational continuum might lead to a better understanding of students’ clinical development and provide clues as to the optimal design of clinical education.

Funding/Support: None.

Other disclosures: None.

Ethical approval: Not applicable.

Previous presentations: Part of this work was presented as a short communication at the AMEE Conference 2011, August 27–31, 2011, Vienna, Austria.


References

1. Evans JS. Dual-processing accounts of reasoning, judgment, and social cognition. Annu Rev Psychol. 2008;59:255–278

2. Sloman SA. The empirical case for two systems of reasoning. Psychol Bull. 1996;119:3–22

3. Stanovich KE, West RF. Individual differences in reasoning: Implications for the rationality debate. Behav Brain Sci. 2000;23:645–726

4. Kahneman D. Thinking, Fast and Slow. London, UK: Allen Lane/Penguin Books; 2011:19–30

5. Hogarth RM. Deciding analytically or trusting your intuition? The advantages and disadvantages of analytic and intuitive thought. In: Betsch T, Haberstroh S, eds. The Routines of Decision Making. Mahwah, NJ: Lawrence Erlbaum Associates; 2005

6. Ark TK, Brooks LR, Eva KW. Giving learners the best of both worlds: Do clinical teachers need to guard against teaching pattern recognition to novices? Acad Med. 2006;81:405–409

7. Ark TK, Brooks LR, Eva KW. The benefits of flexibility: The pedagogical value of instructions to adopt multifaceted diagnostic reasoning strategies. Med Educ. 2007;41:281–287

8. Croskerry P. Critical thinking and decisionmaking: Avoiding the perils of thin-slicing. Ann Emerg Med. 2006;48:720–722

9. Eva KW, Hatala RM, Leblanc VR, Brooks LR. Teaching from the clinical reasoning literature: Combined reasoning strategies help novice diagnosticians overcome misleading information. Med Educ. 2007;41:1152–1158

10. Moulton CA, Regehr G, Mylopoulos M, MacRae HM. Slowing down when you should: A new model of expert judgment. Acad Med. 2007;82(10 suppl):S109–S116

11. Norman G. Dual processing and diagnostic errors. Adv Health Sci Educ Theory Pract. 2009;14(suppl 1):37–49

12. Osman M. An evaluation of dual process theories of reasoning. Psychon Bull Rev. 2004;11:988–1010

13. Hastie R, Dawes RM. Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making. 2nd ed. London, UK: Sage Publications; 2010

14. Tversky A, Kahneman D. Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychol Rev. 1983;90:293–315

15. Simon HA. Rational choice and the structure of the environment. Psychol Rev. 1956;63:129–138

16. Hammond KR. Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice. Oxford, UK: Oxford University Press; 1996

17. Jacoby LL. A process dissociation framework: Separating automatic from intentional uses of memory. J Mem Lang. 1991;30:513–541

18. Anderson JR. ACT: A simple theory of complex cognition. Am Psychol. 1996;51:355–365

19. Newell A. Unified Theories of Cognition. Cambridge, Mass: Harvard University Press; 1990

20. Stolper E, Van de Wiel M, Van Royen P, Van Bokhoven M, Van der Weijden T, Dinant GJ. Gut feelings as a third track in general practitioners’ diagnostic reasoning. J Gen Intern Med. 2011;26:197–203

21. Hammond KR, Hamm RM, Grassia J, Pearson T. Direct comparison of the efficacy of intuitive and analytical cognition in expert judgment. IEEE Trans Syst Man Cybern. 1987;17:753–770

22. Simon HA. What is an “explanation” of behavior? Psychol Sci. 1992;3:150–161

23. Hammond KR. Intuition, no! … Quasirationality, yes! Psychol Inq. 2010;21:327–337

24. Elstein AS. Thinking about diagnostic thinking: A 30-year perspective. Adv Health Sci Educ Theory Pract. 2009;14(suppl 1):7–18

25. Elstein AS, Shulman LS, Sprafka SA. Medical Problem Solving: An Analysis of Clinical Reasoning. Cambridge, Mass: Harvard University Press; 1978

26. Thompson C. A conceptual treadmill: The need for “middle ground” in clinical decision making theory in nursing. J Adv Nurs. 1999;30:1222–1229

27. Gigerenzer G, Todd PM, ABC Research Group. Simple Heuristics That Make Us Smart. Oxford, UK: Oxford University Press; 2000

28. Norman GR, Trott AD, Brooks LR, Smith EKM. Cognitive differences in clinical reasoning related to postgraduate training. Teach Learn Med. 1994;6:114–120

29. Zarin DA, Pauker SG. Decision analysis as a basis for medical decision making: The tree of Hippocrates. J Med Philos. 1984;9:181–213

30. Gladwell M. Blink: The Power of Thinking Without Thinking. Boston, Mass: Little, Brown and Company; 2005

31. Dreyfus HL, Dreyfus SE. Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. Oxford, UK: Basil Blackwell; 1986

32. Einhorn HJ. Accepting error to make less error. In: Dowie J, Elstein AS, eds. Professional Judgment: A Reader in Clinical Decision Making. Cambridge, UK: Cambridge University Press; 1988:181–189

33. Pretz JE. Intuition versus analysis: Strategy and experience in complex everyday problem solving. Mem Cognit. 2008;36:554–566

34. Epstein S, Pacini R, Denes-Raj V, Heier H. Individual differences in intuitive-experiential and analytical-rational thinking styles. J Pers Soc Psychol. 1996;71:390–405

35. Pacini R, Epstein S. The relation of rational and experiential information processing styles to personality, basic beliefs, and the ratio-bias phenomenon. J Pers Soc Psychol. 1999;76:972–987

36. Hogarth RM. Educating Intuition. Chicago, Ill: University of Chicago Press; 2001

37. Gilhooly KJ, McGeorge P, Hunter J, et al. Biomedical knowledge in diagnostic thinking: The case of electrocardiogram (ECG) interpretation. Eur J Cogn Psychol. 1997;9:199–223

38. Kleinmuntz B. Diagnostic problem solving by computer. Jpn Psychol Res. 1965;7:189–194

39. Ledley RS, Lusted LB. Reasoning foundations of medical diagnosis; symbolic logic, probability, and value theory aid our understanding of how physicians reason. Science. 1959;130:9–21

40. Mamede S, Schmidt HG, Rikers R. Diagnostic errors and reflective practice in medicine. J Eval Clin Pract. 2007;13:138–145

41. McLaughlin K, Rikers RM, Schmidt HG. Is analytic information processing a feature of expertise in medicine? Adv Health Sci Educ Theory Pract. 2008;13:123–128

42. Schön D. The Reflective Practitioner: How Professionals Think in Action. New York, NY: Basic Books; 1983

43. Nash FA. Differential diagnosis, an apparatus to assist the logical faculties. Lancet. 1954;266:874–875

44. Clancey WJ. The epistemology of a rule-based expert system—A framework for explanation. Artif Intell. 1983;20:215–251

45. Miller RA, Pople HE Jr, Myers JD. Internist-1, an experimental computer-based diagnostic consultant for general internal medicine. N Engl J Med. 1982;307:468–476

46. Custers EJ, Stuyt PM, De Vries Robbé PF. Clinical problem analysis: A systematic approach to teaching complex medical problem solving. Acad Med. 2000;75:291–297

47. Eddy DM, Clanton CH. The art of diagnosis: Solving the clinicopathological exercise. N Engl J Med. 1982;306:1263–1268

48. Eva KW, Hatala RM, Leblanc VR, Brooks LR. Teaching from the clinical reasoning literature: Combined reasoning strategies help novice diagnosticians overcome misleading information. Med Educ. 2007;41:1152–1158

49. Evans DA, Gadd CS. Managing coherence and context in medical problem-solving discourse. In: Evans DA, Patel VL, eds. Cognitive Science in Medicine: Biomedical Modeling. Cambridge, Mass: MIT Press; 1989:211–255

50. Mamede S, Schmidt HG, Penaforte JC. Effects of reflective practice on the accuracy of medical diagnoses. Med Educ. 2008;42:468–475

51. Bordage G, Zacks R. The structure of medical knowledge in the memories of medical students and general practitioners: Categories and prototypes. Med Educ. 1984;18:406–416

52. Norman GR, Brooks LR. The non-analytical basis of clinical reasoning. Adv Health Sci Educ Theory Pract. 1997;2:173–184

53. Goldberg LR. Man versus model of man: A rationale, plus some evidence, for a method of improving on clinical inference. Psychol Bull. 1970;73:422–432

54. Patel VL, Groen GJ. Cognitive frameworks for clinical reasoning: Application for training and practice. In: Evans DA, Patel VL, eds. Advanced Models of Cognition for Medical Training and Practice. New York, NY: Springer-Verlag; 1992:193–211

55. Norman G. Building on experience—The development of clinical reasoning. N Engl J Med. 2006;355:2251–2252

56. Wigton RS. What do the theories of Egon Brunswik have to say to medical education? Adv Health Sci Educ Theory Pract. 2008;13:109–121

57. Croskerry P, Norman G. Overconfidence in clinical decision making. Am J Med. 2008;121(5 suppl):S24–S29

58. Croskerry P. Clinical cognition and diagnostic error: Applications of a dual process model of reasoning. Adv Health Sci Educ Theory Pract. 2009;14(suppl 1):27–35

59. Kahneman D. A perspective on judgment and choice: Mapping bounded rationality. Am Psychol. 2003;58:697–720

60. Gigerenzer G, Gaissmaier W. Heuristic decision making. Annu Rev Psychol. 2011;62:451–482

61. Thompson C. A conceptual treadmill: The need for “middle ground” in clinical decision making theory in nursing. J Adv Nurs. 1999;30:1222–1229

62. Anderson JR. Skill acquisition: Compilation of weak-method problem solutions. Psychol Rev. 1987;94:192–210

63. Simon HA. Making management decisions: The role of intuition and emotion. Acad Manag Exec. 1987;1:57–63

64. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004;79(10 suppl):S70–S81

65. Peters JT, Hammond KR, Summers DA. A note on intuitive vs analytic thinking. Organ Behav Hum Perform. 1974;12:125–131

66. Mamede S, Schmidt HG. The structure of reflective practice in medicine. Med Educ. 2004;38:1302–1308


© 2013 by the Association of American Medical Colleges
