Management Reasoning: Implications for Health Professions Educators and a Research Agenda

Cook, David A. MD, MHPE; Durning, Steven J. MD, PhD; Sherbino, Jonathan MD, MEd; Gruppen, Larry D. PhD

doi: 10.1097/ACM.0000000000002768

Clinical reasoning—the cognitive processes by which clinicians integrate clinical information (history, exam findings, and test results), preferences, medical knowledge, and contextual (situational) factors to make decisions about the care of an individual patient1—is central to the daily activities of nearly all health care professionals.2 Understanding how clinical reasoning works is essential to efforts to prevent errors in clinical practice and to optimize instruction that supports the development of these processes.3,4

Substantial research over several decades has helped illuminate the clinical reasoning processes involved in arriving at a diagnosis (diagnostic reasoning)5–8 and identified implications for teaching and ongoing research.9–12 Far less is known about the clinical reasoning processes entailed in patient management (management reasoning), including decision making about treatment, further testing, follow-up visits, and allocation of limited resources.1,10,13,14 Yet despite its prominence in conceptual frameworks and empiric research, diagnostic reasoning may be less important in caring for patients than management reasoning. Making the correct diagnosis is only a means to an end—namely, the implementation of a management plan appropriate for that diagnosis. Moreover, a fully correct diagnosis is often not required to implement a defensible management decision, as when an emergency physician sends home a patient with “noncardiac chest pain” without knowing the exact source of pain.

We could not identify any reviews of management reasoning, and we found few empirical studies directly related to management reasoning.14–17 Clarification of the concept of management reasoning will set the stage for future research in this field and identify potential applications in health professions education. The purpose of this article is to describe management reasoning as distinct from diagnostic reasoning, consider potentially insightful theoretical lenses, outline educational implications, and propose areas of needed research.

Contrasting Management and Diagnostic Reasoning

Diagnosis is primarily a classification activity18 in which clinicians (through the cognitive processes of diagnostic reasoning) assign labels to a pattern of symptoms, signs, and test results.19 These labels (diagnoses) reflect the clinician’s understanding of the illness and typically denote an underlying cause or pathology. Labels (diagnoses) do not have value in themselves; rather, they help with meaning making by shaping the clinician’s understanding of and approach to a problem, facilitating communication among members of the health care team, and influencing how the team views and interacts with the patient.20 A given label (e.g., “fibromyalgia”) may connote (often inadvertently) very different meanings to different clinicians and caregivers.

A diagnosis is useful to the extent that the label or classification has implications for action (e.g., the label “ischemic cardiomyopathy” might prompt hospital admission, cardiac catheterization, and prescription of an angiotensin-converting enzyme inhibitor). In many situations, a superficial, provisional, or nonspecific classification (“noncardiac chest pain” or “upper respiratory infection”) proves adequate for definitive management. Indeed, management decisions typically drive the level of diagnostic specificity required. An insufficiently specific label could lead to suboptimal management, but some labels reflect superfluous detail and suggest inefficient use of resources (i.e., overtesting). For example, although magnetic resonance imaging (MRI) can provide detailed information about the cause of “acute low back pain” and thereby facilitate a specific diagnostic label, this information rarely changes initial management, and hence the test is commonly considered wasteful.

Management, in contrast to diagnosis, involves negotiation of a plan of action and ongoing monitoring and adjustment of that plan. Management reasoning encompasses the cognitive processes associated with these negotiations, observations, and adjustments. Below, we identify several ways in which management reasoning differs from diagnostic reasoning (summarized in Chart 1).

Chart 1: Differences Between Diagnostic and Management Reasoning

No single correct plan

A given diagnosis can usually be established as correct or incorrect. We acknowledge that different labels can be assigned to the same condition (i.e., the same illness or disease), depending on the context and the intended subsequent use of the diagnosis. For example, labels can focus on illness severity (“acutely ill”), symptom (“chest pain”), disease (“acute coronary syndrome”), anatomic abnormality (“occluded coronary artery”), or pathology (“myocardial necrosis”). Nonetheless, although multiple labels can appropriately be applied to the same medical condition, each diagnosis can—at least in theory—be judged as correct or incorrect in absolute terms. A patient either does or does not have an occluded right coronary artery. Some alternate labels may be equally correct (“myocardial infarction”), but others would be incorrect (“pericarditis”). From a practical standpoint, to be interchangeably correct, all alternate diagnoses should have the same implications for action and should suggest a similar underlying cause. For example, the diagnoses of “upper respiratory infection” and “acute sinusitis” could be construed as interchangeably correct, because both suggest a similar underlying cause (viral infection) and management approach.

By contrast, there are usually multiple reasonable management approaches, comprising varying combinations of diagnostic testing, patient education, treatment, and follow-up. “It depends” is common in management. Patient preferences, logistical constraints, cultural norms, and resource availability all influence management decisions, as do clinician factors such as tolerance for uncertainty and risk.21,22 Even the potential risks and benefits of specific treatment options vary across situations: Surgeons in one clinic may be more skilled in one approach, while surgeons in another clinic may be more skilled in another. In short, there are usually multiple paths to a successful outcome, and there will even be multiple acceptable outcomes in many situations.23,24 Thus, it is difficult to speak of a single “correct” or “best” management plan (even in an idealized or theoretical state); rather, we must speak of more or less “reasonable” or “defensible” plans.

Preferences and social context

Patient preferences, clinician attitudes, clinical settings, and logistical constraints should not influence a diagnosis. A patient with pneumonia has pneumonia regardless of the patient’s preferences or social context. A diagnosis of fibromyalgia or anaplastic thyroid cancer does not depend on whether the patient wants that diagnosis or can access needed treatments.

Management decisions, in contrast, almost always involve prioritization among competing preferences, values, and situation-specific constraints such as probable benefits, potential risks, resource availability, and financial costs.25 If a patient says, “I don’t want (or cannot afford) to get that test (or take that medication, or return for that follow-up visit),” the management plan will change. Relevant values and constraints include not only those of the patient but also those of the clinician, other members of the health care team, administrators, insurers, other patients, and society in general. Some aspects of diagnostic reasoning may involve preferences, such as the patient’s or the clinician’s desired level of specificity and certainty in the diagnosis (i.e., the sufficiency of the label), but these are arguably management decisions.

For example, one patient might be happy with a diagnosis of “mechanical back pain” based on history and exam, while another might expect an MRI in hope of obtaining a more detailed explanation of the underlying cause of pain. One clinician/care team might be satisfied with a diagnosis of “pneumonia,” while another might prefer to specify the anatomic location (“right lower lobe pneumonia”) or causative pathogen (“pneumococcal pneumonia”). A clinician deciding whether to empirically treat an upper respiratory infection as influenza, or to obtain a test to confirm that diagnosis, makes value judgments regarding the benefits (information), costs, and risks (discomfort) of performing the test. In these examples, recognizing that additional information is needed to further clarify a diagnosis is diagnostic reasoning, but deciding whether to actually obtain that information entails management reasoning (i.e., consideration of preferences and context).

Shared decision making

Diagnostic classifications do not necessarily require direct discussion or interaction with the patient. Information about history, exam, and test results obtained from another source, such as another clinician or the patient chart, can be interpreted and a diagnosis rendered. Indeed, this is the expectation when clinicians solve a “diagnostic unknown” case, a common exercise in all stages of clinical training.

By contrast, management prioritizations require communication and negotiation. The multiplicity of acceptable options and the need to integrate various values require that clinicians engage the patient and other stakeholders in the decision process—that is, engage in shared decision making. Management decisions are inherently social interactions among the clinician, patient, care team, and others.

Change over time

Diagnoses are temporally fixed: At a given moment in time, and with adequate information, a definitive label can usually be assigned. A diagnosis can change over time, but changes do not necessarily mean that the original label was wrong. First, many medical conditions evolve over time—that is, they get better or get worse. We might speak of a “resolving upper respiratory infection,” “progressing cancer,” or “postinfarction ventricular tachycardia”; yet such evolution reflects a change in the illness itself, and often a new diagnosis, rather than an incorrect original classification. Second, the diagnosis often becomes more specific as the case evolves and more information becomes available (additional history, test results, evolution of illness, or response to treatment). For example, a suspected pneumonia with a very subtle infiltrate on a chest X-ray could be confirmed if a repeat chest X-ray 48 hours later shows a dense right lower lobe infiltrate, or the microbiological etiology could become apparent if the patient develops bacteremia. The initial diagnosis of pneumonia remains correct, but it can now be specified with additional, potentially useful detail. (Of course, sometimes new information or revised interpretations lead to the recognition that the initial diagnosis was incorrect.) Finally, labels can take on different meanings in different regions and cultures (e.g., social and ethnic groups, medical specialties). As patients transition from one context to another, the preferred label may shift accordingly.

By contrast, management decisions are rarely defined conclusively at a single point in time but, rather, are made with the expectation that they will evolve and change. Experienced clinicians can often anticipate future management decisions—“Start with lifestyle measures to treat the hypertension, and if that doesn’t work, then add hydrochlorothiazide and then lisinopril”—but these are only possibilities. Typically, the management plan is initially framed in tentative terms and then revisited with each subsequent patient encounter. For example, drug therapy for hypertension is commonly adjusted after initiation of treatment based on therapeutic response, side effects, and evolving patient preferences. Such changes do not necessarily imply that the original management plan and the reasoning behind it were wrong. (This contrasts with diagnostic decisions, which should not change unless the diagnosis was wrong or purposely provisional.) The task of monitoring and deciding when and how to adjust a management plan is a critical aspect of management reasoning. We note that diagnosis and management usually occur concurrently and often influence one another, as when successful treatment of a pulmonary infiltrate using antibiotics affirms the diagnosis of pneumonia.

Complex, situation-specific, and uncertain

Finally, clinical decisions—both diagnostic and management—are almost always made with incomplete information and without considering all possible diagnoses or management approaches. However, the number and complexity of interacting factors and potential solutions are almost always greater in management than in diagnosis.

For example, in establishing the diagnosis of pneumonia, there is a finite number of symptoms (cough, fever, malaise), signs (fever, tachypnea, tubular breath sounds), lab findings (leukocytosis, renal insufficiency, acidosis), and imaging studies to consider. While the diagnosis may not be easy, management is likely more challenging, with choices to be made regarding diagnostic testing (chest radiograph or computed tomography), treatment location (outpatient, hospital ward, intensive care), antibiotic selection, medication adjuncts (steroids, bronchodilators, thromboembolism prophylaxis), and supportive care (nursing, respiratory therapy, physical therapy, spiritual therapy), plus adjustments to the management of comorbid conditions. All these options must be weighed against the preferences and constraints of the patient, care team, insurer, and others; and choices must anticipate the unpredictability of treatment response (i.e., foresee the future). Moreover, uncertainties in diagnosis can often be ameliorated by using less specific labels (“shoulder pain” rather than “partial rotator cuff tear”). By contrast, uncertainties in management usually mandate plans of greater scope and complexity, such as concurrent treatment of multiple possible illnesses, anticipatory management of possible side effects or adverse events, and more frequent monitoring.

Theoretical Lenses

Several theories and conceptual frameworks enrich our understanding and study of diagnostic reasoning and management reasoning.26 Diagnostic reasoning and management reasoning likely share many common mental phenomena, including fundamental components of knowledge organization, problem representation, and cognitive processing.13 When faced with a diagnostic or management task, the clinician consciously or subconsciously integrates his or her own biomedical and clinical knowledge with initial patient information to form a representation of the problem (e.g., an illness script27), uses this problem representation to guide the acquisition of additional information, revises the problem representation based on the new information, and repeats the information-gathering/representation revision cycle until the representation is perceived as sufficient to support a final diagnosis and/or management action.11,18,28–30 This likely involves a mixture of nonanalytical or “system 1” reasoning processes (automatic, fast, and reliant on pattern recognition) and analytical or “system 2” reasoning processes (deliberate, effortful, and slow).11,28–31 (We elaborate on implications of system 1 and system 2 processes in our discussion of research priorities, below.)
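
This cycle has an algorithmic shape that can be rendered schematically. The sketch below is purely our own illustration (in Python), not a validated cognitive model; the callables revise, acquire, and is_sufficient stand in for mental processes that are not, of course, reducible to code:

```python
def reasoning_cycle(initial_info, revise, acquire, is_sufficient, max_rounds=10):
    """Revise a problem representation with new information until it is
    judged sufficient to support a diagnosis or management action."""
    representation = revise(None, initial_info)  # initial problem framing
    for _ in range(max_rounds):
        if is_sufficient(representation):
            break
        new_info = acquire(representation)       # representation guides inquiry
        representation = revise(representation, new_info)
    return representation

# Toy demonstration with stub functions standing in for cognition
findings = iter(["fever", "focal crackles"])
result = reasoning_cycle(
    initial_info="productive cough",
    revise=lambda rep, info: (rep or []) + [info],   # accumulate findings
    acquire=lambda rep: next(findings),              # gather the next datum
    is_sufficient=lambda rep: len(rep) >= 3,         # crude stopping rule
)
print(result)  # ['productive cough', 'fever', 'focal crackles']
```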

Situated cognition theory32 offers further insights, emphasizing that clinical reasoning, and especially management reasoning, does not occur in isolation; rather, it is “situated” in a dynamic biopsychosocial context.23,24 Ideally, management decisions emerge not from knowledge of the various factors individually (patient, diagnosis, clinician, care team, care system, etc.) but through consideration of the interactions (negotiations) among these and other environmental features.

The threshold approach proposed by Pauker and Kassirer33 allows clinicians to quantitatively combine the probability of disease; the inaccuracy, risk, and cost of diagnostic tests; and the probability and utility of treatment benefits. Theories of decision making and economics—such as decision theory,34,35 game theory,36 prospect theory,37 and libertarian paternalism (nudge theory)38–40—may also have relevance to management reasoning. These theories explain and predict how humans (in this case, both patients and health care providers) differentially value gains and losses (benefits and risks) and how framing, default options, social comparisons, and constrained resources might influence choices (management decisions).41
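
To make the threshold approach concrete, the following minimal sketch derives the two thresholds it defines from expected utilities. The function and all numbers are our own illustration, not drawn from Pauker and Kassirer’s article:

```python
def thresholds(sens, spec, benefit, harm, test_risk=0.0):
    """Pauker-Kassirer testing and test-treatment thresholds.

    benefit   -- net benefit of treating a diseased patient
    harm      -- net harm of treating a nondiseased patient
    test_risk -- net harm of the test itself (same utility units)

    Below the first threshold, neither test nor treat; between the two,
    test and treat if positive; above the second, treat without testing.
    """
    fpr, fnr = 1.0 - spec, 1.0 - sens
    t_test = (fpr * harm + test_risk) / (fpr * harm + sens * benefit)
    t_treat = (spec * harm - test_risk) / (spec * harm + fnr * benefit)
    return t_test, t_treat

# Invented example: an accurate, low-risk test and a treatment whose
# benefit in disease is five times its harm in nondisease.
lo, hi = thresholds(sens=0.90, spec=0.85, benefit=10.0, harm=2.0, test_risk=0.05)
print(f"Test when pretest probability lies between {lo:.2f} and {hi:.2f}")
# -> Test when pretest probability lies between 0.04 and 0.61
```

Below the lower threshold, the harms of testing and of treating false positives outweigh the expected benefit; above the upper threshold, the cost of withholding treatment after a false-negative result outweighs the information the test would add.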

Implications of a Management Reasoning Paradigm for Health Professions Education

Given the differences between diagnostic reasoning and management reasoning, we speculate that these activities may require different educational approaches to optimally promote and assess their development and maintenance throughout a clinician’s career.

Teaching

Most of what we know empirically about teaching and assessing clinical reasoning is based on experience and research in diagnostic reasoning. Yet management reasoning focuses on skills and subtasks that are likely distinct from, or required with different frequencies than, those of diagnostic reasoning. These management reasoning competencies include:

  • involving patients in the decision process;
  • integrating the potentially competing priorities and preferences of various stakeholders;
  • considering contextual constraints;
  • using distinct knowledge domains (treatment options, risks/benefits/costs, and local resources and constraints);
  • tolerating uncertainty, including the need to make decisions based on incomplete information and without exhaustively considering all possible alternatives (“satisficing”);
  • accepting the multiplicity of acceptable solutions;
  • monitoring treatment response over time;
  • recognizing deviations from therapeutic goals; and
  • accepting complexity.

Additional competencies, such as communication skills and knowledge of test and treatment costs, are required for effective management. We further suggest that learning management reasoning requires greater learner autonomy and hands-on practice (e.g., leading discussions with patient and family, trying out management strategies of varying efficiency, and monitoring treatment response over time). Yet such opportunities are increasingly constrained in today’s efficiency-focused, safety-conscious health care environment.

Assessment

Assessment of management reasoning is fraught with complexities. Since more than one management plan is typically defensible, defining a management error is even more difficult than defining a diagnostic error. How can performance be assessed in the absence of a single correct answer? What if a trainee comes up with an unanticipated yet defensible management plan (i.e., right reasoning but “wrong” [not listed as correct] action)? The need to assess shared decision making and monitoring/adjusting treatment over time adds further difficulty.

Some assessments such as oral exams, case-based chart reviews, and objective structured clinical examinations can be developed to allow for complex and idiosyncratic management plans, but all of these typically employ a grading scheme that presumes a correct answer. The script concordance test aspires to adjust scoring to accommodate uncertainty and variation in clinicians’ approaches42; however, concerns have been raised regarding the validity of its scores.43 Work-based assessment may be required to capture the full complexity of many management skills.44–47
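
As one illustration, the aggregate scoring rule commonly described for the script concordance test awards full credit to the panel’s modal response and partial credit to other responses in proportion to panel endorsement. A simplified sketch (our rendering, not the full published procedure):

```python
from collections import Counter

def sct_item_scores(panel_choices):
    """Score each response option by the number of panelists choosing it,
    normalized to the modal (most popular) response, which gets full credit."""
    counts = Counter(panel_choices)
    modal = max(counts.values())
    return {option: n / modal for option, n in counts.items()}

# A hypothetical 10-member panel rating one item on a -2..+2 Likert scale
print(sct_item_scores([1, 1, 1, 1, 1, 2, 2, 0, 0, -1]))
# -> {1: 1.0, 2: 0.4, 0: 0.4, -1: 0.2}; options no panelist chose score 0
```

Such a rule operationalizes the idea that several answers can be partially defensible, although, as noted, the validity of the resulting scores has been questioned.43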

Additionally, a seemingly acceptable plan could be proposed based on faulty reasoning (i.e., right action, wrong reason). Thus, identifying and assessing the cognitive processes that underlie a given management plan would complement an assessment of the plan itself. Concept maps48,49 and “microanalytic” techniques that probe learners to articulate their interpretations and rationale24,50 might help in the assessment of the cognitive processes at play in management reasoning.

Finally, since appropriate management often involves monitoring and adjusting plans as the case evolves, capturing the time element represents a particular challenge in assessing management reasoning. Although paper cases and computer-based virtual patients can simulate temporal evolution, these approaches accelerate the time dimension in ways that may not reflect the prolonged observations and deliberations that occur in real-world management situations.

Clinical variation

Both training in and assessment of management reasoning will require a sample of patients and situational features sufficient to provide an appropriate spectrum of problems. Educators often question whether learners are seeing “enough” patients with a given diagnosis (i.e., the patient mix51). The management paradigm extends this concern to include not only a full spectrum of diagnoses but also “enough” values, preferences, communication styles, contextual variations, system constraints, and multiple solutions.

Areas of Needed Research

Most clinical reasoning research to date has focused on diagnostic reasoning, and our current understanding of management reasoning remains limited. We identify the following 6 research areas as particularly high priority (List 1).

First, although research will benefit from methods already used to study diagnostic reasoning, we believe that answering many of the pressing questions about management reasoning will necessitate substantially new research paradigms and techniques. Research must allow for integration of patient preferences and for the temporal evolution of the patient’s condition; this might be accomplished using combinations of traditional (static) vignettes, computerized virtual patients, standardized patients, and real patients.52–55 Measurement of management reasoning outcomes and underlying cognitive processes will require novel approaches that examine the acceptability of management decisions, the effectiveness of shared decision making, and how plans are monitored and adjusted over time (i.e., longitudinal care). Quantitative experimental methods will need to be complemented by qualitative methods, nonlinear quantitative approaches (complexity science56), and other emerging research paradigms. Retention of management reasoning skills, and application in real-life practice, will be key outcomes; to date, there is little evidence documenting the clinical impact of management reasoning. Since patient outcomes are more directly influenced by management actions than by diagnostic decisions, investigations of management reasoning might overcome some of the limitations common to education research that uses clinical outcomes.57

List 1

Research Priorities for Management Reasoning

  • Develop and use new research paradigms and techniques
  • Understand the cognitive processes that underlie management reasoning and how these differ from diagnostic reasoning
  • Clarify content- and context-specific skills versus general approaches relevant to management reasoning
  • Explore how to perform and how to teach shared decision making and integration of values
  • Identify methods to teach and assess management reasoning
  • Elucidate how to support management reasoning in clinical practice

Second, we presume that management reasoning reflects a balance of nonanalytical processes (automatic; system 1) and analytical processes (deliberate, effortful; system 2), yet the relative contributions remain unknown. Research in diagnostic reasoning suggests that novice trainees rely more on analytical reasoning, whereas experts typically use more nonanalytical reasoning.5,9,11 However, it seems plausible that management reasoning may be inherently more analytic (deliberate, planned, and systematic) than diagnostic reasoning. Explicit consideration of treatment costs and benefits, use of rubrics to guide management decisions, and thoughtful shared decision making all suggest a slow, deliberate process. Moreover, each patient’s unique circumstances and preferences may make patterns less readily discerned and compiled in management than in diagnosis. Modern management reasoning often involves interactions between humans and computers (e.g., point-of-care knowledge resources and decision support systems58–60), which add further layers of complexity. The extent to which these suppositions are true, and how these effects vary across clinical contexts and are influenced by clinicians’ preferences, unconscious biases, and levels of expertise, merits further exploration.

Third, diagnostic reasoning in a given field is tightly linked with knowledge of that domain; that is, diagnostic ability is content- and context-specific rather than a general skill. We presume that this is largely true for management reasoning as well. However, it is possible that some aspects of the management task generalize across content domains (clinical problems and settings). These might include general approaches to shared decision making, cost-conscious care, monitoring of follow-up, and accepting uncertainty and a “good enough” diagnosis and plan. Of course, good diagnosis and good management both require good information. The field of evidence-based medicine has clarified approaches to identifying, appraising, and applying empirical evidence in patient-centered care. Our conceptualization of management reasoning presumes achievement of the first 2 steps (identifying and appraising) and elaborates upon the last (applying).

Fourth, shared decision making is an area of active research in both clinical medicine61–63 and medical education,64–66 and our understanding of management reasoning will be enriched by the insights that emerge from such studies. The personal preferences of the clinician are also important,67–69 yet how to identify and appropriately accommodate such preferences remains incompletely understood.70–72 The same is true for accommodating the values and priorities of the health care institution and of society.

Fifth, we do not know how to optimally teach or assess management reasoning. Training might entail increased attention to skills such as shared decision making, integrating stakeholder preferences, monitoring treatment response, accepting complexity, and acting on incomplete information. Both instructional strategies and timing of instruction within the training continuum will need to be thoughtfully considered and studied. As we suggested above, assessment of management reasoning will require innovative approaches that accommodate multiple defensible solutions and that assess shared decision making and the ability to monitor and adjust treatment over time. Options identified in a recent review of methods for assessment of clinical reasoning may prove useful.73

Finally, our understanding is incomplete regarding how to support effective, efficient management reasoning in clinical practice. If management is indeed more analytic than diagnosis, and if cognitive patterns are slow to develop, then the cognitive load of many management tasks likely exceeds the level for optimal performance. Cognitive overload, in turn, may result in inefficiency (slow performance), cognitive shortcuts and errors, and/or frustration for both clinicians and patients. Research and innovations in clinical practice have already identified both problems and potential solutions in how to support clinical reasoning in practice.4,74–76 Viewing these issues through the distinct lenses of diagnostic and management reasoning may facilitate additional insights.

References

1. Cook DA, Sherbino J, Durning SJ. Management reasoning: Beyond the diagnosis. JAMA. 2018;319:2267–2268.
2. Rencic J, Durning SJ, Holmboe E, Gruppen LD. Understanding the assessment of clinical reasoning. In: Wimmers PF, Mentkowski M, eds. Assessing Competence in Professional Performance Across Disciplines and Professions. Cham, Switzerland: Springer; 2016.
3. Institute of Medicine. To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 2000.
4. National Academies of Sciences, Engineering, and Medicine. Improving Diagnosis in Health Care. Washington, DC: National Academies Press; 2015.
5. Norman G, Young M, Brooks L. Non-analytical models of clinical reasoning: The role of experience. Med Educ. 2007;41:1140–1145.
6. Elstein AS, Shulman LS, Sprafka SA. Medical Problem Solving: An Analysis of Clinical Reasoning. Cambridge, MA: Harvard University Press; 1978.
7. Barrows HS, Norman GR, Neufeld VR, Feightner JW. The clinical reasoning of randomly selected physicians in general medical practice. Clin Invest Med. 1982;5:49–55.
8. Gruppen LD, Woolliscroft JO, Wolf FM. The contribution of different components of the clinical encounter in generating and eliminating diagnostic hypotheses. Res Med Educ. 1988;27:242–247.
9. Eva KW, Norman GR. Heuristics and biases—A biased perspective on clinical reasoning. Med Educ. 2005;39:870–872.
10. Norman G. Research in clinical reasoning: Past history and current trends. Med Educ. 2005;39:418–427.
11. Norman GR, Monteiro SD, Sherbino J, Ilgen JS, Schmidt HG, Mamede S. The causes of errors in clinical reasoning: Cognitive biases, knowledge deficits, and dual process thinking. Acad Med. 2017;92:23–30.
12. Brush JE Jr, Sherbino J, Norman GR. How expert clinicians intuitively recognize a medical diagnosis. Am J Med. 2017;130:629–634.
13. Gruppen LD. Clinical reasoning: Defining it, teaching it, assessing it, studying it. West J Emerg Med. 2017;18:4–7.
14. McBee E, Ratcliffe T, Goldszmidt M, et al. Clinical reasoning tasks and resident physicians: What do they reason about? Acad Med. 2016;91:1022–1028.
15. Gruppen LD, Margolin J, Wisdom K, Grum CM. Outcome bias and cognitive dissonance in evaluating treatment decisions. Acad Med. 1994;69(10 suppl):S57–S59.
16. Juma S, Goldszmidt M. What physicians reason about during admission case review. Adv Health Sci Educ Theory Pract. 2017;22:691–711.
17. McBee E, Ratcliffe T, Picho K, et al. Contextual factors and clinical reasoning: Differences in diagnostic and therapeutic reasoning in board certified versus resident physicians. BMC Med Educ. 2017;17:211.
18. Gruppen LD, Frohna AZ. Clinical reasoning. In: Norman GR, van der Vleuten CPM, Newble DI, eds. International Handbook of Research in Medical Education. Dordrecht, The Netherlands: Kluwer Academic Publishers; 2002:205–230.
19. Custers EJ, Regehr G, Norman GR. Mental representations of medical diagnostic knowledge: A review. Acad Med. 1996;71(10 suppl):S55–S61.
20. Ilgen JS, Eva KW, Regehr G. What’s in a label? Is diagnosis the start or the end of clinical reasoning? J Gen Intern Med. 2016;31:435–437.
21. Mercuri M, Sherbino J, Sedran RJ, Frank JR, Gafni A, Norman G. When guidelines don’t guide: The effect of patient context on management decisions based on clinical practice guidelines. Acad Med. 2015;90:191–196.
22. Weiner SJ, Schwartz A, Yudkowsky R, et al. Evaluating physician performance at individualizing care: A pilot study tracking contextual errors in medical decision making. Med Decis Making. 2007;27:726–734.
23. Durning SJ, Artino AR Jr, Schuwirth L, van der Vleuten C. Clarifying assumptions to enhance our understanding and assessment of clinical reasoning. Acad Med. 2013;88:442–448.
24. Durning SJ, Lubarsky S, Torre D, Dory V, Holmboe E. Considering “nonlinearity” across the continuum in medical education assessment: Supporting theory, practice, and future research directions. J Contin Educ Health Prof. 2015;35:232–243.
25. Weiner SJ, Schwartz A. Contextual errors in medical decision making: Overlooked and understudied. Acad Med. 2016;91:657–662.
26. Bordage G. Conceptual frameworks to illuminate and magnify. Med Educ. 2009;43:312–319.
27. Schmidt HG, Rikers RM. How expertise develops in medicine: Knowledge encapsulation and illness script formation. Med Educ. 2007;41:1133–1139.
28. Evans JS. Dual-processing accounts of reasoning, judgment, and social cognition. Annu Rev Psychol. 2008;59:255–278.
29. Evans JS, Stanovich KE. Dual-process theories of higher cognition: Advancing the debate. Perspect Psychol Sci. 2013;8:223–241.
30. Croskerry P. Clinical cognition and diagnostic error: Applications of a dual process model of reasoning. Adv Health Sci Educ Theory Pract. 2009;14(suppl 1):27–35.
31. Kahneman D. Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux; 2011.
32. Durning SJ, Artino AR. Situativity theory: A perspective on how participants and the environment can interact: AMEE guide no. 52. Med Teach. 2011;33:188–199.
33. Pauker SG, Kassirer JP. The threshold approach to clinical decision making. N Engl J Med. 1980;302:1109–1117.
34. Elstein AS. Heuristics and biases: Selected errors in clinical reasoning. Acad Med. 1999;74:791–794.
35. Einhorn HJ, Hogarth RM. Behavioral decision theory: Processes of judgement and choice. Annu Rev Psychol. 1981;32:53–88.
36. Lescoe-Long MA, Long MJ, Amidon RL, Kronenfeld JJ, Glick DC. The relationship between resource constraints and physician problem solving. Implications for improving the process of care. Med Care. 1996;34:931–953.
37. Kahneman D, Tversky A. Prospect theory: An analysis of decision under risk. Econometrica. 1979;47:263–291.
38. Thaler RH, Sunstein CR. Libertarian paternalism. Am Econ Rev. 2003;93:175–179.
39. Arno A, Thomas S. The efficacy of nudge theory strategies in influencing adult dietary behaviour: A systematic review and meta-analysis. BMC Public Health. 2016;16:676.
40. Blumenthal-Barby JS, Burroughs H. Seeking better health care outcomes: The ethics of using the “nudge.” Am J Bioeth. 2012;12:1–10.
41. Avorn J. The psychology of clinical decision making—Implications for medication use. N Engl J Med. 2018;378:689–691.
42. Lubarsky S, Charlin B, Cook DA, Chalk C, van der Vleuten CP. Script concordance testing: A review of published validity evidence. Med Educ. 2011;45:329–338.
43. Lineberry M, Kreiter CD, Bordage G. Threats to validity in the use and interpretation of script concordance test scores. Med Educ. 2013;47:1175–1183.
44. Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE guide no. 31. Med Teach. 2007;29:855–871.
45. Kogan JR, Holmboe ES, Hauer KE. Tools for direct observation and assessment of clinical skills of medical trainees: A systematic review. JAMA. 2009;302:1316–1326.
46. Chan T, Sherbino J; McMAP Collaborators. The McMaster Modular Assessment Program (McMAP): A theoretically grounded work-based assessment system for an emergency medicine residency program. Acad Med. 2015;90:900–905.
47. Govaerts M, van der Vleuten CP. Validity in work-based assessment: Expanding our horizons. Med Educ. 2013;47:1164–1174.
48. Ruiz-Primo MA, Shavelson RJ. Problems and issues in the use of concept maps in science assessment. J Res Sci Teach. 1996;33:569–600.
49. West DC, Pomeroy JR, Park JK, Gerstenberger EA, Sandoval J. Critical thinking in graduate medical education: A role for concept mapping assessment? JAMA. 2000;284:1105–1110.
50. Artino AR Jr, Cleary TJ, Dong T, Hemmer PA, Durning SJ. Exploring clinical reasoning in novices: A self-regulated learning microanalytic assessment approach. Med Educ. 2014;48:280–291.
51. de Jong J, Visser M, Van Dijk N, van der Vleuten C, Wieringa-de Waard M. A systematic review of the relationship between patient mix and learning in work-based clinical settings. A BEME systematic review: BEME Guide No. 24. Med Teach. 2013;35:e1181–e1196.
52. Peabody JW, Luck J, Glassman P, et al. Measuring the quality of physician practice by using clinical vignettes: A prospective validation study. Ann Intern Med. 2004;141:771–780.
53. Cook DA, Triola MM. Virtual patients: A critical literature review and proposed next steps. Med Educ. 2009;43:303–311.
54. Weiner SJ, Schwartz A, Weaver F, et al. Contextual errors and failures in individualizing patient care: A multicenter study. Ann Intern Med. 2010;153:69–75.
55. Weiner SJ, Schwartz A, Sharma G, et al. Patient-centered decision making and health care outcomes: An observational study. Ann Intern Med. 2013;158:573–579.
56. Cristancho S, Field E, Lingard L. What is the state of complexity science in medical education research? Med Educ. 2019;53:95–104.
57. Cook DA, West CP. Perspective: Reconsidering the focus on “outcomes research” in medical education: A cautionary note. Acad Med. 2013;88:162–167.
58. Cook DA, Sorensen KJ, Hersh W, Berger RA, Wilkinson JM. Features of effective medical knowledge resources to support point of care learning: A focus group study. PLoS One. 2013;8:e80318.
59. Lobach D, Sanders GD, Bright TJ, et al. Enabling Health Care Decisionmaking Through Clinical Decision Support and Knowledge Management. Rockville, MD: Agency for Healthcare Research and Quality; 2012. Evidence Report No. 203.
60. Aakre CA, Pencille LJ, Sorensen KJ, et al. Electronic knowledge resources and point-of-care learning: A scoping review. Acad Med. 2018;93(11 suppl):S60–S67.
61. Stacey D, Légaré F, Lewis K, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2017;4:CD001431.
62. Murad MH, Montori VM, Guyatt GH. Incorporating patient preferences in evidence-based medicine. JAMA. 2008;300:2483.
63. Montori VM, Kunneman M, Brito JP. Shared decision making and improving health care: The answer is not in. JAMA. 2017;318:617–618.
64. Pellerin MA, Elwyn G, Rousseau M, Stacey D, Robitaille H, Légaré F. Toward shared decision making: Using the OPTION scale to analyze resident–patient consultations in family medicine. Acad Med. 2011;86:1010–1018.
65. Rusiecki J, Schell J, Rothenberger S, Merriam S, McNeil M, Spagnoletti C. An innovative shared decision-making curriculum for internal medicine residents: Findings from the University of Pittsburgh Medical Center. Acad Med. 2018;93:937–942.
66. Siriwardena AN, Edwards AG, Campion P, Freeman A, Elwyn G. Involve the patient and pass the MRCGP: Investigating shared decision making in a consulting skills examination using a validated instrument. Br J Gen Pract. 2006;56:857–862.
67. Cook DA, Pencille LJ, Dupras DM, Linderbaum JA, Pankratz VS, Wilkinson JM. Practice variation and practice guidelines: Attitudes of generalist and specialist physicians, nurse practitioners, and physician assistants. PLoS One. 2018;13:e0191943.
68. McKeown RE, Reininger BM, Martin M, Hoppmann RA. Shared decision making: Views of first-year residents and clinic patients. Acad Med. 2002;77:438–445.
69. Carlsen B, Glenton C, Pope C. Thou shalt versus thou shalt not: A meta-synthesis of GPs’ attitudes to clinical practice guidelines. Br J Gen Pract. 2007;57:971–978.
70. Cabana MD, Rand CS, Powe NR, et al. Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282:1458–1465.
71. Zwolsman S, te Pas E, Hooft L, Wieringa-de Waard M, van Dijk N. Barriers to GPs’ use of evidence-based medicine: A systematic review. Br J Gen Pract. 2012;62:e511–e521.
72. Hajjaj FM, Salek MS, Basra MK, Finlay AY. Non-clinical influences on clinical decision-making: A major challenge to evidence-based practice. J R Soc Med. 2010;103:178–187.
73. Daniel M, Rencic J, Durning SJ, et al. Clinical reasoning assessment methods: A scoping review and practical guidance. Acad Med. 2019;94:902–912.
74. Chaudhry B, Wang J, Wu S, et al. Systematic review: Impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144:742–752.
75. Bright TJ, Wong A, Dhurjati R, et al. Effect of clinical decision-support systems: A systematic review. Ann Intern Med. 2012;157:29–43.
76. Wears RL, Berg M. Computer technology and clinical work: Still waiting for Godot. JAMA. 2005;293:1261–1263.