We appreciate Mamede and Schmidt’s thoughtful reading of our article. While they raise interesting hypotheses about methodological distinctions across studies, we do not see such differences as “conceptual and methodological shortcomings.” Instead, we think the differences between their methodology and ours represent differing perspectives on the extent to which one can assume participants’ reasoning strategies based on the experimental instructions they are given. The authors emphasize the importance of identifying contradictory features prior to generating diagnostic hypotheses; we are not convinced that such sequential representations of reasoning are realistic given the many decades psychologists have required to generate separable measures of analytic and nonanalytic processes.1 In medicine, a series of studies has revealed that clinical features are more likely to be seen if one has the relevant diagnosis in mind,2,3 suggesting that reasoning is rarely solely inductive or deductive and raising questions about the extent to which a clinician could ever be prevented from generating diagnoses while contemplating the features of a case. Indeed, one of the definitional properties of nonanalytic reasoning is that it is fast, automatic, and largely beyond conscious control.4
This is not to say that the differences between Mamede and Schmidt’s methodology and ours could not account for the differences in results. Rather, it is meant simply to highlight one problem with labeling experimental conditions in these sorts of studies with theoretical constructs (e.g., “automatic reasoning”) rather than strictly limiting oneself to labels based on what was observably done (e.g., instructions to offer a diagnosis based on one’s first impression). The order of instructions may be important, but that is an empirical question that would need to be tested directly. Any such head-to-head comparison of processing interventions should, however, take into account a variety of methodologies suggesting that more analytic, conscious processing—in the absence of a preceding, experimentally induced bias—does not necessarily align with greater diagnostic accuracy.5,6
More generally, in testing the influence of such differences, it will be important to design interventions that are practically meaningful. We favor experimental control, but if the control is so great as to have little real-world value, then the benefit of any instructional intervention will be questionable. Rigid adherence to a reasoning protocol that requires strict researcher oversight is unlikely to be feasible for application in a naturalistic clinical setting.
Jonathan S. Ilgen, MD, MCR
Assistant professor, Division of Emergency Medicine, University of Washington, School of Medicine, Seattle, Washington; email@example.com.
Judith L. Bowen, MD
Professor, Department of Medicine, Oregon Health & Science University, School of Medicine, Portland, Oregon.
Kevin W. Eva, PhD
Professor and director of education research and scholarship, Department of Medicine, and senior scientist, Centre for Health Education Scholarship, University of British Columbia, Vancouver, British Columbia, Canada.
1. Jacoby LL. A process dissociation framework: Separating automatic from intentional uses of memory. J Mem Lang. 1991;30:513–541
2. Hatala RA, Norman GR, Brooks LR. The effect of clinical history on physicians’ ECG interpretation skills. Acad Med. 1996;71(10 suppl):S68–S70
3. Leblanc VR, Brooks LR, Norman GR. Believing is seeing: The influence of a diagnostic hypothesis on the interpretation of clinical features. Acad Med. 2002;77(10 suppl):S67–S69
4. Norman GR, Brooks LR. The non-analytical basis of clinical reasoning. Adv Health Sci Educ Theory Pract. 1997;2:173–184
5. Mamede S, Schmidt HG, Penaforte JC. Effects of reflective practice on the accuracy of medical diagnoses. Med Educ. 2008;42:468–475
6. Norman G, Sherbino J, Dore K, et al. The etiology of diagnostic errors: A controlled trial of system 1 versus system 2 reasoning. Acad Med. 2014;89:277–284