During clinical encounters, physicians engage in numerous clinical tasks, including listening to the patient’s story, reviewing his or her past records, performing a physical examination, choosing the appropriate investigations, providing advice or prescribing medications, and/or ordering a consultation. These behaviors are driven, or at least influenced, by “what” physicians think about and “how” they think. Since Elstein and colleagues’1 1978 seminal book on medical problem solving, numerous studies on clinical reasoning and expertise in medicine have been published.2 This line of research has led to important insights on how novice and expert physicians reason during clinical encounters. Eva3 categorized these various reasoning processes under two broad umbrellas: nonanalytical and analytical processes. Nonanalytical processes are automatic and unconscious, such as pattern recognition. Conversely, analytical processes are deliberate and detailed, such as hypothetico-deductive reasoning that involves the generation and testing of a set of hypotheses around a set of clinical data. Still, no one has published a comprehensive list of what physicians reason about during a clinical encounter, such as the precipitants or triggers to the current clinical problem or the impact of the available resources on the treatment options.
In an attempt to provide a common language for discussing, teaching, and researching clinical reasoning, we undertook the task of developing a unified list of physicians’ reasoning tasks during clinical encounters. We purposefully distinguished between clinical tasks, reasoning tasks (the what), and the process of reasoning (the how). A clinical task is something that a physician does, such as eliciting a history, performing a physical examination, or ordering a consultation. A reasoning task is what a physician reasons about while performing a clinical task. The process of reasoning is the cognitive process in which a physician engages for the purpose of completing a clinical task. For example, in reviewing the results of numerous investigations and the report from a consultation (i.e., clinical tasks), a physician may consider the possible impact of the patient’s comorbid conditions while at the same time also considering the resources available for his or her treatment (i.e., reasoning tasks), using pattern recognition or decision analysis (i.e., processes of reasoning). Whereas a physician’s knowledge base, skills, and values each exert an influence on how well he or she performs clinical and reasoning tasks, we considered those attributes to be separate but related issues and did not address them in the present study.
Having a unified list of physicians’ reasoning tasks for practice, teaching, and research in medicine is important for a number of reasons. For example, a large number of patients suffer from multiple comorbidities,4–6 and research suggests that medical teams do not adequately address these patients’ comorbid medical problems when admitting them to the hospital; thus, many of these patients are readmitted at a later date.7,8 Whereas many factors contribute to such diagnostic and management errors, a physician’s failure to consider his or her patient’s chronic active diseases—a reasoning task—may play an important role. During clinical patient safety audits or teaching rounds, explicitly considering this particular reasoning task could help physicians improve their practice. The literature suggests that physicians frequently learn reasoning tasks tacitly through the case presentation and clinical documentation genres.9–11 However, research also suggests that, at some institutions, the apparent purpose of the case presentation may be too focused on defending a diagnosis, which may hinder the learning of other reasoning tasks.11 Although most physicians likely recognize that the reasoning tasks completed during clinical encounters extend beyond diagnosis, the lack of a shared language for describing and labeling these tasks may perpetuate the misconception that clinical reasoning is primarily diagnostic and hinder efforts to teach trainees the broader spectrum of reasoning tasks. As a first step in addressing this problem, the purpose of this study was to develop a unified list of what physicians reason about during clinical encounters.
In the summer and early fall of 2010, we conducted a two-stage validation study by surveying a group of 46 international experts in clinical reasoning or communications in health care. Using a thematic summary of the existing literature, we created an initial list of reasoning tasks, and then, using our two surveys, we modified it and validated the final list. We obtained ethics approval from the psychology research ethics board at the University of Western Ontario and the research ethics board at the University of Illinois at Chicago. We conducted this study with local departmental funds and guaranteed confidentiality to participants.
Initial list of reasoning tasks
Although no one publication comprehensively identifies the reasoning tasks completed during clinical encounters, many discrete sources include fragmented lists. We compiled our initial list by searching the literature in four relevant content areas: communications,12,13 medical errors,7,14,15 clinical reasoning,1,16,17 and clinical guidelines. We include here only the exemplary and key references from these areas, chosen because they most shaped our thinking on the topic and best conveyed our rationale for including one or more of the reasoning tasks that we identified on our list. We further modified this list and thematically organized the reasoning tasks through a formal iterative discussion process among the coauthors. We then piloted the list with four experienced physician colleagues familiar with the clinical reasoning and communications literature, and further edited the list on the basis of their feedback (see Appendix 1, Column 1, for our initial list).
During the first round of our study, we contacted 46 international experts. We chose these experts because each had published scholarly work in either clinical reasoning or communications in health care. We also included a purposeful mix of physician and nonphysician researchers in this group. In the second round of our study, we invited only about half (22) of the original participants because of the high degree of consensus that they reached during the first round and to minimize the burden on them. This second group was again purposively selected on the basis of our knowledge of their areas of scholarship and to ensure that we included a mix of physician and nonphysician researchers.
During the first round of our study, we contacted each participant via e-mail and sent a reminder two weeks later. In the e-mail, we directed participants to a Web-based survey that included instructions to complete the first survey. This first page of the survey also contained an explanation of the purpose of our study and the full list of reasoning tasks, organized into three thematic sections (i.e., framing the encounter, diagnosis, and management) (see Appendix 1, Column 1). We also asked participants to review and consider printing the list of reasoning tasks to ensure that they answered each question in the context of the complete list. On the second page of the survey, we asked participants to list their professional qualifications.
For each reasoning task, we gave participants the following statement: “The reasoning task X should appear on the final list of reasoning tasks in which physicians may engage during the course of a clinical encounter.” The answer options were (a) yes as written, (b) yes with alternative wording, (c) no, or (d) unable to judge. We also provided participants with a comment box to suggest alternative wording. At the end of the survey, we asked participants to suggest additional tasks that we had not included.
During the second round of our study, we considered the suggestions made during the first round and modified our list of reasoning tasks accordingly. To improve the clarity of each survey item, we also added a relevant clinical example for each reasoning task and then sent a new e-mail to half the original participants. This second survey included the same set of questions regarding each reasoning task as the first survey.
All survey responses were anonymous.
Of the original 46 participants invited to complete our first survey, 30 replied. Of those 30 participants, 3 indicated that they were unable to participate (insufficient expertise, maternity leave, and sabbatical leave), 27 began the survey, and 24 completed it, for an overall response rate of 52% (24/46). The respondents represented a variety of backgrounds, with just over one-third (9/24) having a doctoral degree and two-thirds (16/24) having a medical degree; six participants (25%) had more than one degree. The majority of physician respondents (13/16) had trained in either internal medicine or family medicine, and just over half of the doctoral respondents (5/9) came from cognitive psychology. The percentage of respondents with doctoral and/or medical degrees closely matched that in the original group of invited participants (i.e., 38% versus 28% with doctoral degrees and 67% versus 72% with medical degrees).
The main findings from our study are included in Appendix 1, which contains the original list of 20 proposed reasoning tasks (Column 1), the final list of 24 reasoning tasks (Column 2), and a set of illustrative case examples (Column 3) intended to help readers better understand our intent behind each reasoning task as well as the distinctions between tasks. For the purpose of clarity, we listed each task under only the one category (framing the encounter, diagnosis, management, or self-reflection) that we felt best captured the overall intent of the reasoning task.
Our original list included 20 proposed reasoning tasks, organized into three categories (framing the encounter, diagnosis, and management) (see Appendix 1, Column 1). On average, 83% of respondents (20/24) agreed with the inclusion of all 20 original tasks (standard deviation [SD] = 12; range = 50–100; see Appendix 1, Column 5, for details). On average, 25% (6/24) made suggestions regarding the wording used to describe each task (SD = 14; range = 8–54). Following the first-round survey, we revised the wording of all 20 reasoning tasks on the basis of this feedback, adopting respondents’ suggestions when they added clarity to a task (see Appendix 1 for the changes in wording between the original list and the final list). We also followed additional suggestions from the respondents and added two new reasoning tasks to the list for the second-round survey.
Of the 22 participants whom we invited to complete the second-round survey, 19 began the survey and 15 completed it (68% response rate). Respondents again came from a variety of backgrounds: 6 (40%) had a doctoral or master’s degree, 12 (80%) had a medical degree, and 6 (40%) had dual degrees. Again, the percentages of respondents with doctoral and medical degrees were similar for respondents and invited participants. Three of the 22 participants contacted (3/22), none of whom had a medical background, felt that they could not complete the second-round survey.
During the second-round survey, 97% of respondents, on average, agreed with each of the proposed reasoning tasks (SD = 4; range = 93–100; see Appendix 1, Column 7, for details). Twelve of the 22 tasks (55%) had unanimous approval. Compared with the first-round survey, respondents made approximately half as many suggestions regarding wording (2/15; 13%; SD = 11; range = 0–33). Moreover, of the changes suggested, many were related to the examples rather than to the reasoning tasks themselves. On the basis of this feedback, we made wording changes to 17 of the 22 reasoning tasks. Most of these changes were minor compared with the substantive changes we made as a result of the first-round survey. For example, we changed Task 6 from “identifying or determining potential alternatives for diagnostic workup (investigation)” to “determine diagnostic investigations” after the first-round survey, and to “select diagnostic investigations” after the second-round survey. The respondents’ comments and suggestions also resulted in the addition of two new tasks. Although the group of expert reviewers did not validate these new items, we felt that they were well supported by the literature and had appropriate implications for shaping and studying clinical practice. Because the two additional items did not fit under the three original categories (i.e., framing the encounter, diagnosis, and management), we added a fourth category—self-reflection. Our final list thus included 24 reasoning tasks.
The purpose of our study was to begin to develop a unified list of physicians’ reasoning tasks during clinical encounters. Our two-round validation study resulted in the identification and initial validation of 24 reasoning tasks. What is unique about this list is not the items themselves but, rather, that we compiled them into a unified list and separated reasoning tasks from reasoning processes instead of considering them as a single construct, as others often do. For example, many studies have focused on diagnostic reasoning as if it were a single entity1,3,18; our list, however, contains eight related but distinct diagnostic reasoning tasks. Past research has shown the importance of case specificity with regard to diagnosis and management.19 Case specificity may also extend to the reasoning task level (i.e., a student knowing how to determine that the most likely diagnosis is heart failure does not predict his or her ability to consider the implications of managing heart failure in the context of the patient’s chronic kidney disease). Although further validation is needed, we hope that the development of this list of reasoning tasks will allow physicians, teachers, and researchers to begin to expand their thinking about clinical reasoning. The proposed list has the potential to impact clinical practice and teaching as well as future education and clinical research.
In many jurisdictions, regulatory bodies require physicians to reflect on their practice or conduct practice audits to ensure that they maintain their competence. Whether or not it is required, many physicians still regularly spend time reflecting on their practice and identifying personal learning needs. To this end, we envision that a motivated practitioner might use our list of reasoning tasks to reflect on the extent to which he or she regularly addresses the various items, when warranted, during clinical encounters. For example, as we indicated earlier, reflecting on the reasoning tasks involved in caring for patients with multiple comorbidities would be beneficial to physicians.6,7 Several reasoning tasks directly relate to this issue and may prompt physicians to question their tacit assumptions about what and how they reason when seeing this particular group of patients.20,21 For those who work in interprofessional teams, the list may also be useful in clarifying role expectations. For example, in caring for a patient with new-onset diabetes who needs to learn about managing a diabetic diet, Task 19 (“select education and counseling approach for patient and family”) could prompt the physician to involve the team’s dietician. Although some physicians might be tempted to turn our list of reasoning tasks into a checklist to follow during clinical encounters, at this stage in its development we do not support this use.
We envision several uses for this list in teaching, including supporting teachers while thinking aloud during case review to foster student learning. Many physician teachers have only a tacit understanding of the reasoning tasks of a clinical encounter.20 Providing them with a standardized vocabulary and a list of the possible tasks involved in their case-based teaching may help them to reflect on and think aloud about their own reasoning18; this reflection is an essential step in practicum-based professional education.21 In addition, the list may help students both to develop a deeper understanding of the multiple purposes of the clinical encounter and to connect the process skills of communication with the purpose of communication in the context of the clinical encounter. For example, the first communication step that a student must take is to initiate the encounter.13,22 As part of framing the encounter during this initial phase, we identified three related reasoning tasks—identify active issues, assess priorities, and reprioritize on the basis of assessment. A teacher could use these reasoning tasks to explore specific issues, such as how and when a physician should introduce health maintenance, screening, or smoking cessation into a clinical encounter, or how many issues a physician can and should address during an encounter and how he or she should prioritize them. Finally, the list also can be used to help students categorize the information that they learned in the classroom or from textbooks in such a way as to enhance their ability to use that information in the clinical setting.23 For example, students could connect the content of a didactic lesson on the pharmacological principles of drug interactions with their consideration of “the consequences of management on co-morbid illnesses” (Task 15).
Each of the teaching suggestions above also should be studied in terms of its impact on learning and practice. In addition, other important, related research questions include “Do particular history-taking and case presentation practices support or deflect attention from certain reasoning tasks?” and “Does teaching students to approach patient care with these tasks in mind lead them to provide more comprehensive care than they otherwise would at their current level of education?”
Those researchers who are interested in quality assurance and reducing medical errors may use the list in their root cause analyses to explore the extent to which certain error types are related either to framing errors or to the failure of physicians or clinical teams to address particular issues, such as those related to Tasks 14 and 15 (consider the impact of comorbid illnesses on management and consider the consequences of management on comorbid illnesses). In addition, in a recent article, Ely and colleagues24 proposed using a checklist to enhance physicians’ cognitive approaches to clinical encounters to reduce medical errors. Our list of reasoning tasks could help identify or refine items for such a checklist.
Further research and limitations
Whereas our results have implications for current practice, they also represent the first step in building a unified list of reasoning tasks in medicine. Our list can, and should, be expanded in a number of ways. In developing the list itself, we made no attempt to identify or define the number of times that any task, or set of tasks, might be performed during a typical encounter or the sequence and relative hierarchy of each task in a given encounter. An appropriate next step in expanding our list would therefore be to study physicians in practice both to further validate our list and to begin to identify the relative influence of particular reasoning tasks on physicians’ actions. Next, physicians may experience negative consequences when they engage in clinical encounters that require them to address specific reasoning tasks or multiple tasks at once. For example, Durning and colleagues25,26 recently examined the influence of context on clinical reasoning and cognitive load. By linking how physicians think with what they reason about, we can expand Durning and colleagues’ work and ask research questions like (1) How do physicians handle the cognitive load of multiple reasoning tasks during a clinical encounter? (2) Do physicians more frequently ignore certain tasks as a strategy for handling cognitive load? and (3) How do physicians handle the inherent multitasking and task-switching—going from one task to another—during a clinical encounter?
Although we know that physicians assume a variety of roles outside their one-on-one interactions with patients, the focus of our study was the reasoning tasks associated with those types of clinical encounters. As such, one of the limitations of our study is that our list of reasoning tasks does not represent all clinical reasoning tasks in all clinical contexts. For example, different reasoning tasks are associated with performing a surgery or running a busy emergency room than with caring for patients one on one. Therefore, we recommend that future research focus on broadening our list to include the reasoning tasks involved in these situations as well. Similarly, our list is not intended to provide a language for communicating with patients. Whereas we consider many of the reasoning tasks to be patient-centered, the language that we used represents what physicians reason about, not what they say. Next, although we developed clinical examples associated with each reasoning task to provide clarity, users should not be constrained by these examples, because each reasoning task can lead a physician down many different paths of inquiry. For example, a patient being seen for health prevention or maintenance would yield a set of different clinical examples than would a patient presenting with a new problem in the context of prior active ones.
In conclusion, the development of this unified list represents a first step in offering a language for discussing, reflecting on, and studying what physicians reason about during clinical encounters. Whereas our list has many potential uses in current clinical practice and teaching, it also provides avenues for future research in clinical reasoning.
Acknowledgments: The authors wish to express their gratitude to the survey participants for their thoughtful comments and suggestions and to Drs. Judy Bowen, Peter McLeod, and Jeroen van Merriënboer for their useful commentaries on an earlier draft of this report.
Other disclosures: None.
Ethical approval: The authors obtained ethics approval from the psychology research ethics board at the University of Western Ontario and the research ethics board at the University of Illinois at Chicago.
Previous presentations: The authors presented preliminary results from this study at two medical education conferences: Goldszmidt M. Research on clinical reasoning: Time for a new agenda. Poster presented at: Research in Medical Education conference; November 2011; Denver, Colorado; and Goldszmidt M, Minda JP, Bordage G. Research on clinical reasoning: More than making a diagnosis. Paper presented at: Association for Medical Education in Europe annual conference; September 2011; Vienna, Austria.
1. Elstein AS, Shulman LS, Sprafka SA. Medical Problem Solving: An Analysis of Clinical Reasoning. Cambridge, Mass: Harvard University Press; 1978
2. Norman G. Research in clinical reasoning: Past history and current trends. Med Educ. 2005;39:418–427
3. Eva KW. What every teacher needs to know about clinical reasoning. Med Educ. 2005;39:98–106
4. Fried LP. Establishing benchmarks for quality care for an aging population: Caring for vulnerable older adults. Ann Intern Med. 2003;139:784–786
5. Safford MM, Allison JJ, Kiefe CI. Patient complexity: More than comorbidity. The vector model of complexity. J Gen Intern Med. 2007;22(suppl 3):382–390
6. Marengoni A, Rizzuto D, Wang HX, Winblad B, Fratiglioni L. Patterns of chronic multimorbidity in the elderly population. J Am Geriatr Soc. 2009;57:225–230
7. Hayward RA, Asch SM, Hogan MM, Hofer TP, Kerr EA. Sins of omission: Getting too little medical care may be the greatest threat to patient safety. J Gen Intern Med. 2005;20:686–691
8. Pitt B. Male gender, diabetes, COPD, anemia, and creatinine clearance < 30 mL/min predicted hospitalization after heart failure diagnosis. Ann Intern Med. 2010;152:JC4-2–JC4-3
9. Lingard L, Haber RJ. Teaching and learning communication in medicine: A rhetorical approach. Acad Med. 1999;74:507–510
10. Miller C. Genre as social action. Q J Speech. 1984;70:151–167
11. Montgomery K. Doctors’ Stories: The Narrative Structure of Medical Knowledge. Princeton, NJ: Princeton University Press; 1991
12. Lipkin M, Putnam SM, Lazare A. The Medical Interview: Clinical Care, Education, and Research. New York, NY: Springer-Verlag; 1995
13. Kurtz SM, Silverman J, Draper J. Teaching and Learning Communication Skills in Medicine. 2nd ed. Oxford, UK: Radcliffe Publishing; 2005
14. Bordage G. Why did I miss the diagnosis? Some cognitive explanations and educational implications. Acad Med. 1999;74(10 suppl):S138–S143
15. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165:1493–1499
16. Higgs J, Jones M. Clinical Reasoning in the Health Professions. 2nd ed. Oxford, UK: Butterworth-Heinemann; 2000
17. Montgomery K. How Doctors Think: Clinical Judgement and the Practice of Medicine. Oxford, UK: Oxford University Press; 2006
18. Bowen JL. Educational strategies to promote clinical diagnostic reasoning. N Engl J Med. 2006;355:2217–2225
19. Norman G, Bordage G, Page G, Keane D. How specific is case specificity? Med Educ. 2006;40:618–623
20. Schön DA. The Reflective Practitioner: How Professionals Think in Action. New York, NY: Basic Books; 1983
21. Schön DA. Educating the Reflective Practitioner: Toward a New Design for Teaching and Learning in the Professions. San Francisco, Calif: Jossey-Bass; 1987
22. Kurtz S, Silverman J, Benson J, Draper J. Marrying content and process in clinical method teaching: Enhancing the Calgary–Cambridge guides. Acad Med. 2003;78:802–809
23. Schwartz DL, Bransford JD, Sears D. Efficiency and innovation in transfer. In: Mestre JP, ed. Transfer of Learning From a Modern Multidisciplinary Perspective. Greenwich, Conn: IAP; 2005
24. Ely JW, Graber ML, Croskerry P. Checklists to reduce diagnostic errors. Acad Med. 2011;86:307–313
25. Durning S, Artino AR Jr, Pangaro L, van der Vleuten CP, Schuwirth L. Context and clinical reasoning: Understanding the perspective of the expert’s voice. Med Educ. 2011;45:927–938
26. Durning SJ, Artino AR, Boulet JR, Dorrance K, van der Vleuten C, Schuwirth L. The impact of selected contextual factors on experts’ clinical reasoning performance (does context impact clinical reasoning performance in experts?). Adv Health Sci Educ Theory Pract. 2012;17:65–79