
Deception and Death in Medical Simulation

Truog, Robert D. MD; Meyer, Elaine C. PhD, RN

Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare: February 2013 - Volume 8 - Issue 1 - p 1–3
doi: 10.1097/SIH.0b013e3182869fc2
Editorial

From the Institute for Professionalism and Ethical Practice (R.D.T., E.C.M.), Department of Anesthesiology, Perioperative and Pain Medicine (R.D.T.), and Department of Psychiatry (E.C.M.), Boston Children’s Hospital; Department of Social Medicine and Global Health, Division of Medical Ethics (R.D.T.), and Department of Psychiatry (E.C.M.), Harvard Medical School, Boston, MA.

Reprints: Robert D. Truog, MD, Institute for Professionalism and Ethical Practice, Boston Children’s Hospital, 9 Hope Ave, Waltham, MA 02453 (e-mail: robert.truog@childrens.harvard.edu).

The authors declare no relevant conflicts of interest.

Medical simulation requires, by definition, a certain degree of “make-believe.” The mannequin is not a real person, the scenario is not really happening, and no patients are either helped or harmed. Successful education in the simulated setting depends on skilled facilitators who can effectively persuade the learners to “suspend disbelief,” accept the “fiction contract,” and allow themselves to become fully engaged in the simulation experience.1,2 Two articles in this issue of the Journal explore the limits of this make-believe world: should experienced and trusted clinicians be introduced into scenarios where they feign incompetence, and can the mannequin actually die in the scenario without risking psychological harm to the learners? As the field of simulation matures, ethically laden issues such as these need to be carefully considered and addressed to assure the integrity and reputation of the simulation enterprise.3

Calhoun et al take up the former issue, suggesting approaches for dealing with one of the most vexing problems in team management: the failure of clinicians to challenge the hierarchical structure of medical practice and to speak up to authority when they believe that alternative diagnostic or therapeutic options should be considered. They describe a scenario in which a child develops pulseless ventricular tachycardia from hyperkalemia. Laboratory investigation shows a high potassium level and a low phosphate level. As the team formulates a response, a pediatric intensive care unit attending enters the room, assumes control of the case, and calls for a dose of potassium phosphate to be given. If the learners appropriately challenge the attending’s order, the scenario is complete. If the order is not challenged, the resuscitation proceeds down an ineffective path and the child (mannequin) dies.

These authors are responding to a serious and pervasive crisis in health care today: the failure of clinicians to challenge authority. Indeed, the scenario used by Calhoun et al was motivated by very similar events in their own institution that tragically and unnecessarily resulted in the death of a child. Nevertheless, we worry that this approach may cross a critical line that threatens important foundational principles of medical simulation and may risk serious psychological consequences for the learners.

Credible and effective medical simulation must be built on a solid foundation of trust and safety. Even as facilitators exhort learners to “suspend disbelief” and allow themselves to become emotionally and psychologically immersed in the make-believe world of the simulated environment, they are also reassuring the learners that it is safe for them to make themselves psychologically and emotionally vulnerable. Trust is embedded in the “ground rules” of many simulator programs as an expectation that participants can rely on others to be genuine and constructive in their feedback and criticism and to create a space of collaborative, not competitive, learning. In the scenario described above, we are concerned that this environment of honesty and safety may be threatened by a deliberate attempt to trick and deceive the learners. We worry that such deceptive tactics risk destroying this essential foundation of successful simulation.

Although we do not want to imply that these scenarios raise the same level of concern as the famous Milgram experiments, some of the parallels are worth exploring. These experiments, conducted at Yale in the 1960s, have become almost a clichéd example of the potential consequences of deceiving research subjects.4 In the experiment, the subjects were told to administer electrical shocks of increasing voltage to an unseen confederate in an adjoining room whenever the confederate gave incorrect answers to a series of word-pairing questions. In reality, no shocks were administered, but as the voltage increased, the confederate began to pound on the wall, pleading for the shocks to stop, finally becoming unresponsive. In the first set of experiments, 65% of the subjects administered shocks at the maximal voltage, although most were very uncomfortable doing so and manifested varying degrees of stress and tension. At the end of the experiment, Milgram debriefed his subjects, arranging a “friendly reconciliation” between the subject and the “victim,” “to assure that the subject would leave the laboratory in a state of well being.” Nevertheless, some have argued that these experiments were unethical because they exposed the subjects to psychological risks for which they had not given their consent.

The scenario described previously differed from the Milgram experiments in several respects. First, whereas Milgram’s subjects were not monitored or offered routine follow-up, the residents and nurses in these cases had the benefit of ongoing relationships with their fellow learners and instructors. Second, the purpose of the Milgram experiments was solely to advance research on one aspect of human psychology, whereas the purpose of this simulation exercise was not only to learn how doctors and nurses respond in these scenarios but also to equip learners with insights and skills to improve their capacity to stand up to authority.

Nevertheless, there are some similarities in the type of psychological risk to which the participants in all of these scenarios were exposed. In the Milgram experiments, subjects were forced to confront the question, “Am I the kind of person who would administer shocks to someone to the point of unconsciousness rather than defy an authority figure?” In the simulation scenario, participants were faced with a similar question, “Am I the kind of person who is unwilling or unable to challenge a respected colleague who I think is making bad medical judgments, even when this may result in serious injury to the patient, or even death?”

Both the Milgram experiments and the article published here show that most of us are, in fact, these kinds of people. Anyone who learns this about themselves is bound to be disturbed by the insight, and indeed, the experience has the potential to provide a powerful and enduring opportunity for self-understanding and growth. The problem, however, is that the same experience can also leave learners with shame, humiliation, and self-loathing, perhaps with significant long-term consequences.

There is some evidence that we may be overstating the risk. In following up on his own work, Milgram reported that 84% of his subjects were either “glad” or “very glad” to have participated in the research and, in some cases, wrote him letters of thanks for the personal insights and growth that the experiments provided.5 We are concerned, however, that such cursory reassurances may hide a deeper reality. The medical environment rewards those who appear to be psychologically hardy, and clinicians can become adept at concealing their emotions. Learners may therefore be very invested in not appearing vulnerable or weak and may successfully conceal whatever shame or distress they might be experiencing. In doing so, they might avoid being recognized as someone in need of help, further magnifying the problem. We believe this concern needs to be taken seriously, and at the very least, additional research needs to be undertaken to assess the psychological effects of this type of deception in ways that are sufficiently sensitive to detect psychological harm that may not be immediately or superficially apparent.

Some might also believe that our concerns are overstated because one of the goals of medical simulation is, after all, to have learners understand and confront their limitations. There is a fundamental difference, however, between learning that one does not know the correct approach to treating an arrhythmia, or that one does not communicate clearly with others in crisis situations, and learning that one is unable or unwilling to speak up to authority, even when doing so may save a life. The former is self-knowledge about inadequacies in our knowledge base or skills, whereas the latter is self-knowledge about inadequacies in our character, the core of who we are and how we see ourselves as a person.

Given that failure to stand up to authority is both an enormous barrier to improving patient safety and very prevalent among clinical teams in hospitals today, one might argue that a certain risk to the psychological health of the learners can be justified to address this important safety goal. It is not clear, however, that deception is necessary to teach clinicians the importance of standing up to authority and to ensure that they have the skills to do so. Our recommendation, therefore, is that research be conducted to examine whether deception is necessary for this learning to occur or whether the learning would be equally effective if, say, participants were clearly informed at the beginning of the session that one of the learning goals is to examine the hierarchical structure of authority and that they might be intentionally deceived. Although the learners would be “tipped off” to the possibility of a confederate, they would still have the opportunity to learn strategies for speaking up to authority, without the breach of trust and heightened potential for psychological harm. Such research is feasible and, in our view, should be done before deception becomes incorporated as a mainstay of simulator training.

In the second article, Corvetto and Taekman examine another interesting facet of these questions when they ask whether the mannequin should ever “die” in a scenario. We agree with Janvier’s observation:

In films and television, cardiopulmonary resuscitation usually works. This gives families the false impression that cardiopulmonary resuscitation really works most of the time, that it brings back their loved ones from the dead. We have to remember we are also guilty, with our “real-life” mock codes, in perpetuating this myth… If we want simulation to reproduce real life, then mannequins requiring cardiopulmonary resuscitation and epinephrine would die most of the time. They do not.6

Here again, however, the question is whether the mannequin death can occur in a context that furthers a sense of trust and psychological safety. Just as in real life, if the death of a patient (or mannequin) occurs within a context of blame, criticism, isolation, humiliation, or abandonment, it is likely to be degrading and potentially devastating. On the other hand, if the death is a realistic consequence of action taken or not taken and occurs in an environment of solidarity and support, recognizing that all of us sometimes fail to perform to the level of our expectations, then it can be a positive, growth-promoting experience that builds the emotional resilience we need to survive and thrive as clinicians.

Toward this end, we find the authors’ recommendations to be sound and wise: “A pre-briefing session should be held before training sessions. Pre-briefing should include a discussion of the students’ expectations, a review of simulator features, and (for every simulation) the possibility of death. This is vital for minimizing psychological distress and managing expectations.” We agree that death of the mannequin should not be an outcome with early learners. Similarly, we support the authors’ insistence that facilitators for sessions in which death is allowed be highly skilled and experienced with end-of-life issues and be fully prepared to recognize and address the psychological distress that these scenarios may evoke. It is worth noting that the simulator suite is one, but hopefully not the only, setting for participants to learn about death and end-of-life care.

Simulation is rapidly becoming recognized as the most effective way of educating adult learners across a broad range of activities in aviation, the military, health care, and beyond. One of the key strengths of simulation is the requirement that learners be actively engaged; they cannot be merely passive participants. In addition, learners are required to suspend disbelief and willingly enter the make-believe world that the instructors and simulation setting have created. In so doing, we ask learners to open themselves to an experience that has the potential to be stressful and shameful as well as stimulating and empowering. In return, we owe learners an environment that they can trust to be grounded in honesty and safety. Although we believe that the simulation environment can be compatible with scenarios in which the mannequin dies, more research needs to be conducted before we accept that deception is necessary and that it can be employed without violating these fundamental principles.


REFERENCES

1. Dieckmann P, Manser T, Wehner T, et al. Reality and fiction cues in medical patient simulation: an interview study with anesthesiologists. J Cogn Eng Decis Making 2007; 1: 148–168.
2. Rudolph J, Simon R, Raemer DB. Which reality matters? Questions on the path to high engagement in healthcare simulation. Simul Healthc 2007; 2: 161–163.
3. Ziv A, Wolpe PR, Small S. Simulation-based medical education: an ethical imperative. Acad Med 2003; 78: 783–788.
4. Milgram S. Behavioral study of obedience. J Abnorm Psychol 1963; 67: 371–378.
5. Milgram S. Obedience to Authority: An Experimental View. New York, NY: HarperCollins; 1974.
6. Janvier A. No time for death. Am J Hosp Palliat Care 2010; 27: 163–164.
© 2013 Society for Simulation in Healthcare