Simulation educators (and medical educators in general) have long used elements of deception as a pedagogical technique.1–4 Still, educators have concerns regarding the proper balance between the deception that is sometimes thought necessary to generate a realistic learning experience and the possible negative impact of that deception on participants. These concerns include its effect on the self-image of participants as clinicians, their trust in the educational team, and their assumed values regarding truthfulness in medical practice.3,5–8 Educators have long recognized that simulation operates in a gray area between the real and the unreal, where both learners and educators enter into a “fiction contract.”9 According to Erving Goffman’s influential sociologic model, this contract exists as an understanding in which a “primary frame” (an actual environment of practice) is recreated in another context (in this case, a simulated environment) for a specific purpose.9,10 Within these “modulations,” all participants have a clear understanding of where the new context of the frame begins and ends. Significantly, this schema also recognizes the existence of “deceptions,” modulations in which there is no clear agreement or knowledge among participants regarding the ground rules of the activity.9,10 In simulation, the concern centers on this lack of agreement or consent. Some authors emphasize that trust is embedded in the ground rules of many simulation programs and raise concerns that deception and deceptive tactics risk destroying this essential foundation of successful simulation.3 These issues were highlighted in a recent debate at the 2014 International Meeting on Simulation in Healthcare, during which viewpoints in favor of and opposed to the use of deception were presented and discussed with a large audience.
A Case Scenario to Consider
The debate began with a published scenario in which a patient experiences cardiac arrest caused by a known electrolyte disturbance.1 Midway through the arrest, a senior clinician “confederate” (one who is “in league with” the simulation instructor) enters, assumes control of the situation, and orders the administration of an incorrect medication infusion that, given the cause of the arrest, would almost certainly be lethal. This individual’s role as a confederate is not known to the learners—they believe that the attending is an actual course participant. If the drug is given, the mannequin “dies.” The intent of the case is to create a situation necessitating a challenge to an authority figure to protect the well-being of the patient. The deception and misdirection used to create this situation were the focus of discussion at the 2014 International Meeting on Simulation in Healthcare debate.
Debate and Commentary
Proponents argued that deceptive methods are sometimes necessary to generate a genuine psychological experience on the part of the learner and that simulation as a methodology presupposes a certain degree of deception as it uses artificial environments that “deceive” participants to recreate real clinical care. They argued that only this approach provides enough emotional fidelity for the learner to be able to relate the knowledge gained in the simulation to their practice.1,2,11,12 Because many of the real-world situations presented in simulation have the potential to result in significant patient harm, proponents suggested that the possibility of psychological distress in the learners is acceptable given the far greater distress to clinicians, patients, and families associated with an actual adverse event. Proponents suggested that these ethical concerns can be mitigated through careful briefing, debriefing, and appropriate limits on the extent of the deception.13 In this scenario, for example, the confederate was instructed to resist a challenge of the inappropriate order initially but to relent after an appropriate second challenge, and the deception was disclosed in an emotionally safe debriefing environment where the circumstances could be explored in a psychologically protected fashion by faculty thoroughly trained in facilitation. Although no simulation-specific evidence exists regarding the effects of debriefing on teams exposed to deception, studies of debriefing in the context of deception-based psychological research have shown a mitigating effect.14,15 Debriefing has also shown itself valuable in defusing the emotional stress generated by other difficult simulations such as those including mannequin death.6,16,17
Opponents countered that deception constitutes a major relational transgression that can result in a sense of mistrust and betrayal between partners. By manipulating an already existing power differential between learner and teacher, such deception could damage the foundation of psychological safety and trustworthiness that is necessary for effective learning to take place. Such mistrust could also spill over into the clinical environment, undermining the very teamwork and communication that the simulation was intended to foster. Recent work cautions that the use of confederates in simulation is not as simple and straightforward as it might seem.18 Opponents were also concerned that a failure to challenge the team leader could negatively affect the learner’s sense of self, especially when participants are not expecting to face probing inspection (or introspection) about their “character.”3 Simulation participants, for example, might experience self-reproach or be too ashamed or reluctant to reveal the reasons why they did not challenge the confederate. It is entirely possible that the average facilitator may not have the skill set, confidence, or sensitivity to debrief such negative and complex emotional responses or their aftermath. With the increasing frequency of mandatory participation in simulation, the use of deception without prospective disclosure to participants was presented as a possible endemic ethical breach. Opponents suggested that other approaches may exist that could allow for sufficient psychological fidelity to achieve teaching goals about complex behaviors in difficult clinical situations without the use of deception, although a simple solution to this aspect of the problem is not clear.
To emphasize these points, one debater referenced his own participation as a learner in a deceptive situation, describing a feeling of lingering betrayal that remained long after the debriefing. In his own words, he described having “tried to do my best in an environment where, I learned, not everyone else was doing their best.” To use Goffman’s framework, he believed that he had entered a “modulation” (an adaptation of a real environment in which everyone had agreed to abide by the same shared set of rules), when in fact he had entered a “deception,” and this realization led to a heightened sense of mistrust of his teammates. Similarly, an audience participant recounted a situation in which deception had a serious negative impact on a learner, leading the program to abandon the use of deception altogether.
Previous Conceptual Work
It is important to put these observations in context, as deception is not a new technique.4 In particular, psychological research has long used this methodology.19,20 One notorious example is the often-cited Milgram “obedience experiment,” in which participants were asked to deliver electric shocks to an unseen individual at the request of a proctor, ostensibly to study the effect of punishment on learning. In many sessions, the shocks delivered quickly escalated, per the study protocol, to apparently very harmful levels. In fact, the unseen “learner” was a confederate of Dr. Milgram, no electric shocks were actually given, and the true goal of the study was to determine at what point an individual would cease to obey an authority figure.21,22 Subsequent to the study, Milgram suggested that his subjects were not harmed by their participation and in many cases seemed to appreciate the insights gained through it, despite acknowledgment of the deception.23 These observations were supported by several other studies of deception in psychological research in which few subjects found the deception disturbing, and most found other aspects of the procedure more bothersome.24,25 A longitudinal follow-up of Milgram’s subjects contradicted these analyses, however, suggesting that some participants experienced long-term negative effects including lingering feelings of self-doubt and anger as well as uncertainty regarding their personal moral integrity.26 It remains unclear how well investigators can collect unbiased data about the psychological effects of their own experiments.
To bring some order to these conflicting observations, one author suggested that deception can be used as long as “active awareness” is present of the deception and its necessity, that efforts are made to minimize its long-term effects on subjects, and that researchers work to develop techniques to minimize its use.20 This author recommended that subjects not leave the research environment with more anxiety than when they entered and that they should leave the experience enriched in some positive way.20
Whereas Milgram’s (and others’) goal was only to investigate, that of most health care simulation instructors is to educate. Comparing these two kinds of simulations thus may be an error of categories.3 Nevertheless, purity of intent does not assure psychological safety. This is of particular concern for trainees, who have a natural interest in being seen by their mentors and colleagues as psychologically strong and thus might be reluctant to acknowledge negative psychological sequelae or trauma. One promising article proposes an array of briefing strategies that may enhance the perception of safety by learners, but the effect of these techniques on deception specifically is unknown.13
During the debate, an “audience response” question was posed (with anonymous electronic responses) about whether it is appropriate to deliberately deceive learners, using an anchored scale from 1 (defined as the viewpoint that deliberate deception was “not ethically permissible under any circumstances”) to 10 (defined as the viewpoint that there are “no ethical constraints on deliberate deception”). The audience’s answers clustered at the high end of the scale, implying a general sense of ease with deception. This was further amplified by the tenor of the discussion, during which many expressed surprise that any ethical issues existed in this domain at all.27 One participant explained that deception was commonly used in his setting to instill a healthy skepticism among learners about the perfection of fellow clinicians. When placed within the context of the psychological literature, these observations raised concerns that a disconnect may exist between the current state of uncertainty regarding these issues among some experts and the current state of practice among simulation educators (assuming that those present at the debate are representative of the field). Indeed, even those arguing the “pro” position in the debate freely acknowledged the theoretical potential for significant “drift” of deceptive practice and the occasional occurrence of negative learner repercussions. Nearly all acknowledged that, given the absence of empirical evidence on the impact of deception, further work is needed to clarify what (if any) negative effects deception carries; what factors determine that risk; when (if ever) it is needed educationally; and what standards ought to exist. To that end, we present a framework intended to guide educators and researchers in considering the myriad facets of simulation affected by this issue.
Defining a Framework
The framework was developed via discussion among the authors (a group consisting of the debate participants and including an ethicist, a psychologist, and 3 physicians with extensive experience in simulations involving deception) and was initially based on a distillation of existing literature regarding the multiple dimensions of simulation as an educational practice.28 After an iterative sequence of discussions, we attempted to categorize the factors that influence simulation sessions into 6 primary elements and their interrelationships. The overall structure of this framework is displayed in Figure 1.
Primary elements include the learners, faculty, institutional environment, educational intent of the simulation, structure of the simulation, and overall long-term desired goals of the session (Table 1). These elements are further influenced by the key relationships between them. Consider the example of a faculty member who, in general, is qualified to handle the psychological ramifications of a deceptive simulation and a learner who, in general, is ready to experience such a case. If, however, a negative experience has previously occurred between that faculty member and that learner, this might alter how that learner would perceive a deceptive technique used by that faculty member, potentially derailing the simulation. As another example, consider a resident who has participated in a simulation-based intervention intended to improve upward communication of critical information or opinion to the attending. Without a cultural and institutional environment that supports and rewards such behavior, an attempt to use these newfound communication skills may well be met with a negative response that could undercut the learning point.29 Important interactions exist between all elements (Table 2).
Key Pedagogical Questions
Given the complex nature of the relationships outlined in this model, where should we begin? Although all agree that deceptive simulations hold the potential to generate psychological distress on the part of learners, significant disagreement exists as to whether this distress actually causes ongoing psychological harm to learners and diminishes teamwork and trust among clinicians, and whether any such harm outweighs the benefits of using deception to prepare clinicians for the challenges of actual patient care. Significant disagreement also exists as to whether deception is necessary to achieve these pedagogical goals. Placing these questions in the framework discussed earlier highlights the elements and relationships most in need of investigation, namely, the faculty-learner relationship and the scenario structure-learner relationship (Fig. 2).
Is Deception “Psychologically Safe”?
The psychological safety of deception refers to the possibility that deceptive simulations could engender forms of stress that negatively impact learning, generate aversion to simulation-based approaches to education, and/or diminish participants’ sense of self-worth in ways that require mitigation.3,6,30 This last concern relates to some consequences of Milgram’s research, in which certain subjects may have experienced significant loss of self-esteem as they grappled with the realization that they were willing to inflict pain on another if asked by an authority figure.20 How, then, can we judge the likelihood that a given simulation or scenario will cause harm of this nature? One concept to consider is the fiction contract, the often implicit mutual agreement that learners will do their best to “suspend disbelief” and treat the situation “as if” it is real despite often unavoidable breaches in environmental and equipment fidelity.9,13 This contract may be straightforward for simple scenarios, but it is difficult to predict the degree to which adding unexpected deception to a case risks transforming it into something that—unknowingly—no longer conforms (at least for some individuals) to the implicit contract perceived by participants.9,10 One relevant observation from the psychological literature divides deceptive techniques into 2 categories: those that occur within the bounds of the general “as if” contract and those that concern the contract itself.20 For a given scenario, deceptive actions perceived as being within the contract are often accepted by participants, whereas deception about the nature of the contract itself is often viewed as a breach of trust and good faith and, by corollary, is felt to cause more distress.20 An example of the former is a scenario that uses deception to further the explicit educational goals presented to the learners upon entry into the simulated environment but that leaves those goals unaltered.
An example of the latter is a scenario designed to conduct research regarding response to authority but in which participants are deceived into believing the purpose of the session is purely educational. The question then becomes how our learners categorize deception occurring within our specific training environments.
A starting point for research into this issue would be to identify simulation centers that already use deception in scenarios and invite them to function as “natural laboratories” for research.31,32 By gathering experiential data from learners who have participated in deceptive simulations, their responses, attitudes, and concerns could be investigated.31,32 A number of rigorous qualitative methodologies exist that could be effectively used in this analysis.33–37 Although data generated at a single site might be biased by institutional culture, a collection of such studies from a broad range of programs could meaningfully advance our understanding of the effect of deception on learner perspectives and psychological outcomes.
Is Deception Ever Needed?
Although it seems that all agree that learners must be adequately equipped with the communication and relational skills necessary to successfully navigate difficult interpersonal or hierarchical situations in the clinical environment, experts differ with respect to what is needed in simulation to accomplish this. Advocates of deception believe that a high level of emotional and psychological authenticity is needed to achieve these learning objectives and propose that this authenticity can be difficult to generate without using deception of some sort. Those concerned about deception, however, suggest that this high degree of emotional authenticity may not be as necessary for learning as advocates believe and further propose that the learning objectives in question can be effectively taught with either no deception at all or with a mitigated form of deception in which participants are forewarned that deception will occur. Reaching a solution will require a systematic investigation of the cognitive and emotional differences among participants who have experienced simulations containing different types and degrees of deception.
To address this, we propose a “multiarmed” study in which learners experience a variety of simulations of equivalent clinical and communication/relational challenges but containing varying degrees of deception both with and without mitigation. Table 3 outlines the types of deception that could be assessed and the possible mitigating techniques. By comparing learner performance in effectively addressing the clinical and interpersonal issues portrayed before and after each session, much light could be shed on how much emotional activation is needed to best facilitate learning. Structured interviews to assess the emotional state of learners before and after each session could also help delineate the emotional issues triggered by each type of approach and the best ways to debrief problems that may arise.
As the field of simulation grows and matures, ethically challenging issues such as the use of deception will continue to arise.38 At present, considerable controversy still exists as to the place of deception in simulation, and concerns exist among some experts that the possibility of negative psychological and ethical repercussions from deceptive techniques may be underappreciated by the larger simulation community. We offer this framework as a step toward approaching these psychological and pedagogical issues. It is incumbent upon the simulation community to carefully examine the issues surrounding deception and to conduct scholarship that allows us to navigate these waters in an evidence-based manner.
1. Calhoun AW, Boone MC, Miller KH, Pian-Smith MC. Case and commentary: using simulation to address hierarchy issues during medical crises. Simul Healthc 2013; 8 (1): 13–19.
2. Gaba DM. Simulations that are challenging to the psyche of participants: how much should we worry and about what? Simul Healthc 2013; 8 (1): 4–7.
3. Truog RD, Meyer EC. Deception and death in medical simulation. Simul Healthc 2013; 8 (1): 1–3.
4. Dailey JI. Modeling manipulation in medical education. Adv Health Sci Educ Theory Pract 2010; 15 (2): 291–295.
5. Haidet P, Stein HF. The role of the student-teacher relationship in the formation of physicians. The hidden curriculum as process. J Gen Intern Med 2006; 21 (suppl 1): S16–S20.
6. Corvetto MA, Taekman JM. To die or not to die? A review of simulated death. Simul Healthc 2013; 8 (1): 8–12.
7. Dysart-Gale D. Cultural sensitivity beyond ethnicity: a universal precautions model. The Internet Journal of Allied Health Sciences and Practices 2006; 4 (1): 1–5.
8. Stern DT. Practicing what we preach? An analysis of the curriculum of values in medical education. Am J Med 1998; 104 (6): 569–575.
9. Dieckmann P, Gaba D, Rall M. Deepening the theoretical foundations of patient simulation as social practice. Simul Healthc 2007; 2 (3): 183–193.
10. Munch R. Sociological Theory: From the 1920’s to the 1960’s. Vol 2. Chicago, IL: Nelson Hall Publishers; 1994.
11. Dror I. A novel approach to minimize error in the medical domain: cognitive neuroscientific insights into training. Med Teach 33 (1): 34–38.
12. Dror I. The paradox of human expertise: why experts can get it wrong. In: Kapur N, Pascual-Leone A, Ramachandran V, eds. The Paradoxical Brain. Cambridge, UK: Cambridge University Press; 2011: 177–188.
13. Rudolph JW, Raemer DB, Simon R. Establishing a safe container for learning in simulation: the role of the presimulation briefing. Simul Healthc 2014; 9 (6): 339–349.
14. Holmes DS. Debriefing after psychological experiments: I. Effectiveness of postdeception dehoaxing. Am Psychol 1976; 31 (12): 858–867.
15. Smith SS, Richardson D. Amelioration of deception and harm in psychological research: the important role of debriefing. J Pers Soc Psychol 1983; 44 (5): 1075–1082.
16. Rogers G, de Rooy N, Bowe P. Simulated death can be an appropriate training tool for medical students. Med Educ 2011; 45: 1061–1063.
17. Arafeh JM, Hansen SS, Nichols A. Debriefing in simulated-based learning: facilitating a reflective discussion. J Perinat Neonatal Nurs 2010; 24 (4): 302–309.
18. Nestel D, Mobley BL, Hunt EA, Eppich WJ. Confederates in health care simulations: not as simple as it seems. Clin Simul Nurs 2014; 10 (12): 611–616.
19. Christensen L. Deception in psychological research: when is its use justified? Pers Soc Psychol Bull 1988; 14 (4): 664–675.
20. Kelman HC. Human use of human subjects: the problem of deception in social psychological experiments. Psychol Bull 1967; 67 (1): 1–11.
21. Milgram S. Behavioral study of obedience. J Abnorm Psychol 1963; 67: 371–378.
22. Milgram S. Obedience to Authority: An Experimental View. New York, NY: HarperCollins; 1974.
23. Milgram S. Issues in the study of obedience: a reply to Baumrind. Am Psychol 1964; 19: 848–852.
24. Pihl RO, Zacchia CZ, Zeichner A. Follow-up analysis of the use of deception and aversive contingencies in psychological experiments. Psychol Rep 1981; 48 (3): 927–930.
25. Smith CP. How (un)acceptable is research involving deception? IRB 1981; 3 (8): 1–4.
26. Perry G. Behind the Shock Machine: The Untold Story of the Notorious Milgram Psychology Experiments. New York, NY: The New Press; 2012.
27. Calhoun AW, Meyer EC, Pian-Smith M, Truog RD, Gaba D. Debating the ethics of simulation: is the use of death and deception appropriate? San Francisco, CA: International Meeting on Simulation in Healthcare; 2014.
28. Gaba DM. The future vision of simulation in healthcare. Simul Healthc 2007; 2 (2): 126–135.
29. Sutcliffe KM, Lewton E, Rosenthal MM. Communication failures: an insidious contributor to medical mishaps. Acad Med 2004; 79 (2): 186–194.
30. Joels M, Pu Z, Wiegert O, Oitzl MS, Krugers HJ. Learning under stress: how does it work? Trends Cogn Sci 2006; 10 (4): 152–158.
31. Sargeant J. Qualitative research part II: participants, analysis, and quality assurance. J Grad Med Educ 2012; 4 (1): 1–3.
32. Sullivan GM, Sargeant J. Qualities of qualitative research: part I. J Grad Med Educ 2011; 3 (4): 449–452.
33. Kuper A, Lingard L, Levinson W. Critically appraising qualitative research. BMJ 2008; 337: a1035.
34. Kuper A, Reeves S, Levinson W. An introduction to reading and appraising qualitative research. BMJ 2008; 337: a288.
35. Lingard L, Albert M, Levinson W. Grounded theory, mixed methods, and action research. BMJ 2008; 337: a567.
36. Reeves S, Albert M, Kuper A, Hodges BD. Why use theories in qualitative research? BMJ 2008; 337: a949.
37. Reeves S, Kuper A, Hodges BD. Qualitative research methodologies: ethnography. BMJ 2008; 337: a1020.
38. Ziv A, Wolpe PR, Small SD, Glick S. Simulation-based medical education: an ethical imperative. Acad Med 2003; 78 (8): 783–788.