In Reply to Reilly and Von Feldt:
We thank Drs. Reilly and Von Feldt for their interest in our study. However, their main concern appears to rest on a misunderstanding of our methodology. They write that “their conclusion appears valid in the context of a multiple-choice examination but … is poorly applicable to real-life clinical situations.” In fact, as we described, the cases we used were long written vignettes (about 400 words) that required the participating physician to type a free-text diagnostic response.
Their commentary does raise two important questions: (1) What is the relationship between performance on written test cases and performance in “the real world”? and (2) Do “cognitive bias effects” exert any greater influence in “the real world” than in the written case?
The first question can be answered readily. Studies conducted by the Medical Council of Canada1 examined the validity of the two-part Medical Council licensing examinations in predicting complaints to a regulatory agency. Part 1, the written, mostly multiple-choice examination, showed a substantially stronger association with complaints than did the problem-solving measures derived from the Part 2 objective structured clinical examination (OSCE). Moreover, the written case-based clinical decision-making component of the Part 1 examination, which resembles our methodology, had the best predictive validity.
As to the second question, the empirical evidence for virtually all cognitive biases is based entirely on written materials,2 often with multiple-choice questions, administered to undergraduate psychology students. To argue that written materials cannot identify cognitive biases is to negate the entire evidential basis of the cognitive bias literature. Moreover, the argument that these biases are more frequent in the clinical situation is not substantiated by the studies Reilly and Von Feldt cite, which make no comparison to written cases. In addition, Zwaan and colleagues’3 study counts “mistakes,” “violations,” “slips,” and “lapses,” which do not equate to cognitive biases.
These are not inconsequential issues. If one assumes that the majority of diagnostic errors arise from cognitive biases originating in System I reasoning, then it would be appropriate to devise educational interventions that, in Reilly and Von Feldt’s words, “encourage our learners to … be mindful of [System I’s] potential to introduce bias to diagnostic decision making.” In our view, such an assumption would also imply that learners should be encouraged to slow down and avoid System I reasoning. But the evidence we have presented shows that rapid diagnoses are, on average, more accurate, not less, so an intervention that universally discourages speed would likely be both ineffective and wasteful.
Jonathan Sherbino, MD
Associate professor, Emergency Medicine, McMaster University Faculty of Health Sciences, Hamilton, Ontario, Canada.
Geoffrey R. Norman, PhD
Professor, Clinical Epidemiology and Biostatistics, McMaster University Faculty of Health Sciences, Hamilton, Ontario, Canada; email@example.com.
1. Tamblyn R, Abrahamowicz M, Dauphinee D, et al. Physician scores on a national clinical skills examination as predictors of complaints to medical regulatory authorities. JAMA. 2007;298:993–1001
2. Kahneman D. A perspective on judgment and choice: Mapping bounded rationality. Am Psychol. 2003;58:697–720
3. Zwaan L, Thijs A, Wagner C, van der Wal G, Timmermans DRM. Relating faults in diagnostic reasoning with diagnostic errors and patient harm. Acad Med. 2012;87:149–156