We appreciate the thoughtful letter from Goldberg et al in response to our original publications and the ongoing interest regarding deception in simulation that these seem to have generated.1–5 One of our primary goals in preparing these articles was to highlight and explore the complexity of the issue in the hope of generating a sustained conversation. It appears that we have been successful in this regard, and we find this encouraging. As a fairly balanced group drawn from both the “pro” and “con” sides of the issue, we have less disagreement with the content of this letter than may be readily apparent. The arguments presented by Goldberg et al4 mirror the case in favor of deception’s use made within our own author group. That being said, there are significant aspects of this letter with which we disagree.
The first, and strongest, point made by Goldberg et al4 concerns the lack of empirical research supporting our article. With this, we readily agree. One of our goals was to point out the paucity of research in the area and lay out an approach to structure subsequent inquiry. We would suggest, though, that it is one thing to state that no research exists and entirely another to use this vacuum as support for one particular side of the debate. Goldberg et al4 suggest that this lack of knowledge renders the concerns voiced in our article purely theoretical and that “in reality” they do not occur. But how do we know this to be true, especially when respected educators and ethicists have voiced concerns and reported specific problematic instances with deception? We simply do not know for certain how simulations employing deception are experienced and interpreted by the entire range of participants. Thus, we do not share Goldberg et al’s certainty and do not believe that the lack of available evidence can be used to support either side of the discussion.
One issue requiring clarification is the relationship of deception in simulation to the experiments of Milgram.6 Here, there is little disagreement. The Milgram experiment and the use of deception in simulation fall into different categories, a point raised in the original article.2 A salient distinction extrapolated from the psychology literature contrasts situations in which deception is used as an element within the context of the scenario as a necessary means to address certain explicit educational goals and situations in which the entire purpose of the session is deceptive. Kelman7 calls the latter “second order” deceptions and finds them particularly problematic. As the case discussed in the 2014 IMSH debate that led to our written commentary was explicitly created to address authority gradients, and the deception was intended to create an emotionally authentic situation in which this could be explored, it serves as an example of the former.1,2 In our experience, much of the deception presently used in simulation (particularly the use of deceptive confederates to heighten the fidelity of a scenario) falls into this domain. In contrast, consider a simulation that learners believe is for the purpose of teaching crisis resource management skills but that is later revealed to be part of a research project measuring clinician response to authority gradients, without specific opportunities for participants to learn and debrief about these responses. It is not difficult to see that, like the Milgram experiment, with its explicit misdirection as to the purpose of the activity, this simulation belongs in the latter category. And yet, whenever the question of deception in simulation is raised, someone inevitably references Milgram.
This suggests that the distinction between the 2 types of deception as outlined previously is unclear in the minds of many educators, frequently leading them to falsely equate much of the deception currently employed in simulation with the Milgram experiment. It was for this reason that we discussed Milgram so thoroughly in the original article.
Additional criticisms were also raised regarding the proposed framework. In particular, it was implied that the decision to use deception (or other potentially psychologically difficult approaches) should be made without consideration of the cognitive resilience of the learner. With this assertion, we differ more substantially. Fundamental to all medical ethics is the dynamic balance between the principles of beneficence (doing good) and non-maleficence (doing no harm).8,9 In some cases, the good that can be accomplished by a procedure or technique outweighs the distress it may cause, whereas in other cases, the scale may tip in the other direction. Healthcare personnel engage in work that can be psychologically challenging, and treating critically ill and dying patients is difficult for everyone. Furthermore, individuals may carry specific psychological vulnerabilities that could magnify this difficulty. The letter of Goldberg et al implies that the use of simulations containing deception to identify these vulnerable individuals is clearly beneficial, but we are not so sure, and the potential for doing harm looms large in our minds. Although it seems likely that sorting out, even early in training, whether providers can handle the stress of the clinical environment is of value, it is much less clear that the use of a modality that carries the real possibility of causing significant psychological stress is the most appropriate method of accomplishing this. Can it be appropriate to stage simulations posing emotional challenges for the purpose of introducing individuals to this possibility and to trigger meaningful discussion of the issues? We believe it can be. Is deception needed to authentically recreate these environments? At present we do not know. Ultimately, we believe that more empirical research on how learners experience and deal with simulations involving deception is necessary before committing to this course of action.
Goldberg et al4 also state that consideration of the specific educator’s personal experience with the case material “is not a necessary aspect of successful simulation,” but the subsequent argument is based primarily on issues of clinical knowledge. We do not dispute that clinical knowledge can be transmitted through a well-crafted simulation in a way that is largely independent of instructor experience. When the simulation involves psychologically difficult and potentially emotionally charged material, however, instructor experience and skill become much more important. An example of this as it pertains to deception is the hyperkalemic arrest case referenced in the 2014 IMSH debate.1,2 As part of that case, an attending confederate asks the team, once they are aware of the hyperkalemia, to administer potassium phosphate intravenously, precipitating an authority gradient issue and resulting in a negative outcome. In this situation, it is not difficult to imagine a group of learners disengaging from the debriefing process once the deception is disclosed. It is here that facilitator experience with the specific case content can have great value, because it allows the facilitator to directly and personally share the impact that the case content can have on real patients. Goldberg et al indirectly illustrate this in their discussion of medical error, where they build a persuasive case for the value of deceptive simulations by referencing real-world clinical situations. It is this type of case that we believe should be made during the debriefing of such a simulation if learners are to remain engaged, and clinical experience is a powerful tool for achieving this.
Although we do not believe that personal participation in such a situation is an absolute requirement, the ability to draw on such an experience cannot help but add a degree of richness to the discussion and may give the facilitator additional insight about how best to guide the learners through their emotional responses to the simulation. Ultimately, none of the points in our model are meant as hard stops, but instead are intended to stimulate discussion among faculty while planning a session so that an optimal learning environment can be ensured.
A further issue raised by the letter regards the role of the presimulation briefing and, in particular, whether it is advisable and necessary to disclose the possibility of deception. This point was discussed at length before the debate, and differences of opinion and practice regarding the viability of this approach still exist among us. This is why the approach was labeled as a possibility to be carefully considered, based on the learning situation, rather than a recommendation. To better explore this possibility, we suggest looking at the elements of our own model and, in particular, the learner background-scenario structure relationship. Consider a simulation (such as the one presented at the beginning of the debate) that uses a deceptive approach to teach appropriate response to authority. Consider also the following 2 learner groups: a) an assembly of second-year medical students, and b) an interprofessional group of practicing intensive care residents and nurses. Although it might be necessary to use deceptive simulation in both groups to generate adequate authenticity, the experience level of each group should at least be considered when planning the session. The medical students are newcomers to the clinical domain and have no background experience to draw on. We suggest that this group might well learn more if the emotional environment is somewhat defused by revealing the possibility of deception up front. In contrast, the interprofessional group is more seasoned and has likely experienced authority gradient issues before, giving it a richer base on which to draw. For this group, an initial briefing containing a specific revelation regarding the deception has a higher chance of detracting from authenticity and perhaps even distracting the group during the case.
One way of addressing this more experienced group is exemplified by the “Vegas Rules” described by Goldberg et al, which seem designed to generate psychological safety and defuse stress via the use of humor.4 Even here, however, it is important to recognize that unintended consequences are real and must be carefully considered when adopting a particular approach. Is it possible that encouraging secrecy in this way could have a negative impact on the “culture of transparency” we are attempting to create in our patient safety work? Although educators may ultimately have different perspectives on this possibility, it is a consideration that should not be dismissed. One of us (D.G.), in a previous editorial, raised the example of the Stanford Prison Experiment as a cautionary tale.3 Unlike in the Milgram experiment, all participants knew that this was a simulation created with the goal of improving our understanding of the social psychology of prisons. That did not, however, prevent the simulation from generating highly volatile and possibly traumatic experiences for some of the subjects. One alternative approach would be to include phrasing such as “remember that in order to generate authentic clinical situations, elements of deception might sometimes be used” in the prescenario briefing for all cases. From the perspective of Goldberg et al, this may seem to be going too far, but we have difficulty seeing how a phrase such as this would necessarily undermine authenticity.
Although we acknowledge the current lack of evidence regarding the use of deception in simulation, we fundamentally disagree that this situation argues in favor of deception’s use. Indeed, when a topic seems to generate more heat than light, it is usually because we are forced to rely more on opinion (often strongly held) than established fact.10 In a young field such as ours, it is vital to be open-minded, establish an empirical base, and tread cautiously when making pronouncements on such ethically and educationally weighty matters as the deception of our learners. To that end, we would renew the call for simulation researchers to undertake the empirical work needed to supplant opinion with data. In the interim (and in the less than ideal, but quite possible, situation in which unequivocal empirical data cannot finally be obtained), it is incumbent upon each of us as educators and scientists to recognize that whatever our baseline opinion on this issue, there will always be equally intelligent, thoughtful individuals who disagree. To this end, we would implore educators on either side of this divide to move from definitive statements regarding the use of deception to a more nuanced view that carefully considers the range of potentially mitigating factors before a deceptive educational modality is chosen and executed. It was for this reason that we presented our original model, and we hope that it can be used to advance the dialogue, shape research, and guide the ethical and educational maturation of the field.
Aaron W. Calhoun, MD
Division of Pediatric Critical Care Medicine
University of Louisville
May C. M. Pian-Smith, MD, MS
Department of Anesthesia
and Critical Care
Massachusetts General Hospital
Harvard Medical School
Robert D. Truog, MD
Institute for Professionalism
and Ethical Practice
Department of Anesthesiology,
Perioperative, and Pain Medicine
Boston Children’s Hospital
Department of Global Health
and Social Medicine
Division of Medical Ethics
Harvard Medical School
David M. Gaba, MD
Center for Immersive
and Simulation-based Learning
Stanford University School of Medicine
and Simulation Center
at VA Palo Alto HCS
Palo Alto, CA
Elaine C. Meyer, PhD, RN
Institute for Professionalism
and Ethical Practice
Department of Psychiatry
Boston Children’s Hospital
Department of Psychiatry
Harvard Medical School
1. Calhoun AW, Boone MC, Miller KH, Pian-Smith MC. Case and commentary: using simulation to address hierarchy issues during medical crises. Simul Healthc 2013;8(1):13–19.
2. Calhoun AW, Pian-Smith MC, Truog RD, Gaba DM, Meyer EC. Deception and simulation education: issues, concepts, and commentary. Simul Healthc 2015;10(3):163–169.
3. Gaba DM. Simulations that are challenging to the psyche of participants: how much should we worry and about what? Simul Healthc 2013;8(1):4–7.
4. Goldberg AT, Katz D, Levine AI, Demaria S. The importance of deception in simulation: an imperative to train in realism. Simul Healthc 2015 Nov 3. [Epub ahead of print].
5. Truog RD, Meyer EC. Deception and death in medical simulation. Simul Healthc 2013;8(1):1–3.
6. Milgram S. Behavioral study of obedience. J Abnorm Psychol 1963;67:371–378.
7. Kelman HC. Human use of human subjects: the problem of deception in social psychological experiments. Psychol Bull 1967;67(1):1–11.
8. Smith CM. Origin and uses of primum non nocere–above all, do no harm! J Clin Pharmacol 2005;45(4):371–377.
9. Gillon R. Medical ethics: four principles plus attention to scope. BMJ 1994;309(6948):184–188.
10. Edmondson AC, McLain Smith D. Too hot to handle? How to manage relationship conflict. Calif Manage Rev 2006;49(1):6–31.