I never “worry” about action, but only about inaction
-Winston Churchill to General Dill, December 1940
This issue of the Journal sees the publication of 2 articles that focus on the ethical issues in conducting simulation involving different types of challenging and stressful scenarios. Corvetto and Taekman1 provide a review addressing very important questions, such as under what circumstances instructors should allow the simulated patient to “die” and, when this is allowed to happen, how it should be handled in debriefing. Calhoun et al2 describe the use of simulation to present cases in which clinicians need to “challenge the hierarchy” to seek the optimal care of the simulated patient, without the participants being aware that this is in fact the main intended learning objective of the scenario. Both articles describe simulations that can be very demanding on the psyche of the participants.
To put these articles in context and to flesh out further some of the thorny issues, they are accompanied by 2 editorials. The first is by an intensivist and ethicist, Robert Truog, MD, and a psychiatric nurse, clinical psychologist, and psychology researcher, Elaine Meyer, PhD, RN.3 Their editorial masterfully articulates a number of issues of great import that are raised in the published articles—most especially about the potential risks to participants in simulations that deal with psychologically challenging themes. In the present editorial, I have a few additional comments and perspectives. I am in complete agreement with Truog and Meyer on 3 points: first, that it is vital to appropriately consider the psychological effects on participants in simulation; second, that not enough discussion of these issues has taken place in our field and certainly not in the pages of any journal; and third, that we who conduct simulations have a duty to our participants and ourselves to act in the best interests of the participants and of patients and their families as we do this work.
Concerning “death scenarios,” Corvetto and Taekman as well as Truog and Meyer nicely lay out that a variety of factors may come into play when deciding whether and how to allow the simulated patient to die. All agree that “death” can be appropriate in some situations but less appropriate—even unwise—in others. All agree that prebriefing and debriefing of such occurrences are highly important. Yet how much advance warning to give that the simulated patient might “die” remains a matter of debate. Clearly, for those who are naïve to simulation and for early learners in health care—for whom a patient death may be a never-experienced but much-feared event—advance preparation seems warranted. On the other hand, I believe that for those with considerable experience in realistic clinical simulations and/or for those in high-acuity clinical domains who routinely wield invasive therapies or who deal with acutely decompensating patients, it may well be within their clinical sphere for a (simulated) patient to die, even quite unexpectedly or despite their best efforts at resuscitation. For such learners, I worry that providing advance warning does not replicate an important reality of actual clinical work. I worry that clinicians who might have to deal with such occurrences with real patients will not be prepared for the emotional shock that they will experience and will not act optimally either in the management of the “arresting” patient or in the disclosure and discussion of the tragic adverse events to the patient’s family. I would like to have them experience this shock in well-conducted simulations and debriefings so that they are better prepared when it happens for real. Thus, worrying about both the well-being of the simulation participant and that of patients and families seems appropriate. Corvetto and Taekman as well as Truog and Meyer articulate this balance well.
Even more questions are raised about the use of simulation for psychologically demanding situations involving challenging the hierarchy. Here, although Calhoun et al thoroughly articulate their concerns and their rationales for conducting such simulations, the issues are addressed even more critically by Truog and Meyer. Clearly, they worry about the effect of such scenarios on the psyche of the participants—especially when they are presented without advance warning. Among other things, they worry that the self-esteem and personhood of a participant might be damaged, conceivably irreparably, in scenarios in which a failure to effectively challenge the orders of a senior clinician leads to apparent catastrophe for a patient. I also see this as a valid worry, at least in principle. Simulation can be a very powerful tool. For years, I have quipped that in simulation “we are allowed to mess with people’s heads, for a good cause,” but perhaps the depth of this power is not fully recognized—even by those with considerable experience with the technique—and therein lies a valid caution.
Both Calhoun et al and Truog and Meyer reference or discuss the famous Milgram obedience experiments.4,5 (Modern reprints of Milgram’s book5 are available.) In the prototypical example, subjects were encouraged by a white-coat-clad experimenter to administer apparently real, painful, and dangerous electrical shocks as punishment to what they thought were other individuals who gave wrong answers to a test of learned material. Unbeknownst to the experimental subjects, this was a complete simulation because no individuals were actually being shocked. This situation is vastly different from health care simulation because subjects in the Milgram experiments really thought that they were harming a real human being, whereas in health care simulation, subjects know that no human patient is being harmed. In a fascinating virtual reprise of the Milgram experiments, a group of British researchers in 2006 described6 a similar experiment in which the “individual being shocked” was clearly a virtual reality avatar and not a person. Despite this fact, subjects showed various subjective and objective signs of psychological discomfort, even as most “administered” the full program of shocks to the error-prone avatar. The key lessons from the Milgram obedience experiments and the virtual reality update remain: human beings are highly vulnerable to manipulation by situational, organizational, and interpersonal factors that can cause them to do things that they might otherwise consider unthinkable. Even the transference into the virtual, without any deception, does not discharge the emotional content of such activities.
Perhaps an even more relevant “infamous experiment” to consider is the Stanford Prison Experiment, conducted approximately 41 years ago and only a few kilometers away from where I now write this editorial. The description and results of this experiment were published in 1973,7,8 but it is discussed in great detail in a fascinating recent book by the principal investigator, Stanford professor of psychology (now emeritus) Philip Zimbardo (for more information on this experiment, visit http://www.prisonexp.org).9 His work has had huge impacts in social psychology and most recently in coming to grips with the shocking revelations of, among other things, the prisoner abuse by American military and civilian personnel in the Abu Ghraib prison in Iraq. In a nutshell, the Stanford Prison Experiment looked at how volunteers who were randomly assigned to be either prisoners or guards would act based on these role assignments during a rather low-technology simulation of a medium-security prison. Within an astonishingly short period, people assigned to be guards assumed increasingly worrisome behaviors typically attributed to guards, including an evolving culture of control and humiliation. Conversely, those assigned to be prisoners developed different sorts of behaviors commonly seen in prisons, ranging from collaboration with the system to outright rebellion to serious withdrawal, depression, and likely suicidal ideation. From a modern perspective, it is clear that the experiment was flawed and risky in design and lacked many critical safeguards. Many believe that it should never have been conducted, even had there been better protections for subjects.
However, as with the Milgram experiments, the key lesson for health care simulation from the Stanford Prison Experiment is that human beings very rapidly respond to social cues in both real and simulated situations and quickly take on the mantle of the roles assigned to them. This makes every one of us vulnerable to having our behavior modified powerfully by such circumstances. Certain kinds of simulations “mess with subjects’ heads” more than others. For these types especially, we are quite right to be worried.
This raises the question: how much should we be worried? About what in particular? For all health care simulations? For all learner populations? For all kinds of scenarios? Perhaps most importantly, how do we balance our worries about the risks of discomforting participants or even harming them against our worries that inaction will fail to address the suboptimal clinical performance and behavior that we are typically trying to improve through simulation education and training?
Unlike the Stanford Prison Experiment, most clinical simulations are not conducted just to study the participants but rather to teach them something. Some simulations aim to teach skills needed in life-critical patient care situations. Calhoun et al, Truog and Meyer, and I all recognize that the “challenge the hierarchy” simulations target an important clinical issue within health care, just as it has been found to be a problem in other domains of high intrinsic hazard (eg, aviation, maritime, nuclear power). Several infamous and many more not-so-famous accidents have had this issue at their roots. In health care, we know that real patients have been and still are being harmed when team members do not challenge apparently mistaken decisions made by others, usually those higher in the hierarchy. This is not a conjecture or just an academic proposition—it is a serious problem that many of us have witnessed firsthand or in postcase reviews. Thus, scenarios that replicate such situations are thought to be appropriate to familiarize clinicians with such circumstances and to exercise their abilities to recognize them and take appropriate action to protect the patient by challenging the leader’s decisions if they seem to be dangerous. There seems little disagreement that it is appropriate to use simulation to probe or teach about these issues. However, there is some disagreement as to how this should be done and what lines should not be crossed.
Truog and Meyer argue that deception took place in the study of Calhoun et al because the senior clinician gave instructions that he would not typically give in that clinical situation. They worry that such deception can be perceived as a betrayal of trust by simulation participants. They suggest that we should never present situations in which, without previous warning to participants, a respected clinician—real or apparently so—purposefully yet surreptitiously insists that a clinician or team during a simulated case adopt a seriously mistaken diagnosis or course of action. They believe that the risk of harm to learners and of a breach of trust between teacher and pupil is too high and that such scenarios should be conducted only if the learners are briefed ahead of time that the scenario will involve such a situation.
I am not convinced that the study of Calhoun et al used deception—at least not beyond that of simulation in general, which uses the simulated “as if” to represent an analog to the real thing. Clearly, in the kinds of simulation scenarios discussed in the study of Calhoun et al, everyone understands that this is not a real patient care situation. I accept the worry about the learners’ psyches, but I worry more that a requirement to always prebrief participants about a scenario’s objectives could seriously undermine the learning, meaning, and impact of such scenarios. It is one thing to execute sound behaviors when one knows what is coming; it is another to do so, either in real life or in a simulation, when one is not forewarned. Of course, this balance can differ depending on many factors. For early learners (eg, students), the balance seems to me to favor prebriefing and even full disclosure that challenging authority is the main point of the scenario. For experienced personnel, including most house staff, I would argue that such forewarning might prevent them from experiencing the important reality that there are natural barriers to speaking up, even when the (simulated) consequences are grave. It is not so easy to recognize or counter such tendencies when they crop up unexpectedly.
One factor in the study of Calhoun et al was that the role of the respected attending physician giving inappropriate orders was played by an actual attending physician who is much respected and beloved. Would things have been less risky to the participants’ psyches if it had been a total stranger clearly playing a role? How much trouble do participants have in separating an actor-in-role from a real clinician in such settings? How much does it bother them when there is a discordance? How much value might there be in coming to grips with the fact that even beloved respected clinicians can occasionally actually be mistaken and insistent, as they too—like anyone—can be affected by stress, fatigue, illness, prescription medications, or other vagaries of life?
Another factor to consider is the particular combination used by Calhoun et al: a scenario about challenging authority coupled with an outcome of patient death. As discussed in detail by Corvetto and Taekman, patient death scenarios can carry huge emotional baggage. When coupled with a non-predisclosed “challenge the authority” scenario, is this just too much? If so, too much for which learner populations? Could the same point about challenging the hierarchy be made with the simulated patient left critically ill but not dead?
As with so many issues in this sphere, there exist very few, if any, data about participant opinion, emotion, learning, or transfer of skill to actual patient care. Clearly, more empirical work is needed on these issues so that we can judge the degree to which our worries are founded in participants’ actual perceptions and concerns. These data, along with continued thoughtful scholarly discussion of the issues, can guide our profession to find the optimal approaches to maximize learning and optimize care for patients and families while minimizing the risk to the psyche of participants. Some might argue that we already know enough about this topic to have “standards” promulgated by an organization like the Society for Simulation in Healthcare. I worry about taking such drastic steps. Given the extreme diversity of simulation applications, with many unique combinations of factors, I am skeptical that any meaningful and fair standards could be drafted. However, I suggest that there are general principles upon which our entire community can agree:
- Instructors must think hard about the ethical and psychological aspects of what they are doing both in advance and in the moment.
- Instructors should design and conduct simulations in ways that take into account the vulnerabilities of the learner population.
- During prebriefing, instructors should discuss and disclose relevant psychologically challenging components of simulation scenarios when it is possible to do so without adversely affecting the learning objectives.
- During debriefing, instructors should disclose any deception or scripting of scenario outcome or confederate behavior, so that learners may understand what transpired and why the scenario was conducted as it was.
- Instructors for simulations that are likely to evoke strong emotions and psychological response from participants should be highly experienced and fully prepared to deal with the issues raised.
- Instructors should consider routine follow-up with all participants after such simulations are conducted and should certainly follow up if there is any indication of a significant psychological impact from the experience. Instructors should consider establishing referral linkages with professionals who can evaluate and treat individuals who are troubled by the simulation.
REFERENCES
1. Corvetto M, Taekman J. To die or not to die? A review of simulated death. Simul Healthc 2013;8:8–12.
2. Calhoun A, Boone M, Miller K, Pian-Smith M. Case and commentary: using simulation to address hierarchy issues during medical crises. Simul Healthc 2013;8:13–19.
3. Truog R, Meyer E. Deception and death in medical simulation. Simul Healthc 2013;8:1–3.
4. Milgram S. Behavioral study of obedience. J Abnorm Soc Psychol 1963;67:371–378.
5. Milgram S. Obedience to Authority: An Experimental View. London, England: Tavistock Publications; 1974.
6. Slater M, Antley A, Davison A, et al. A virtual reprise of the Stanley Milgram obedience experiments. PLoS One 2006;1:e39.
7. Haney C, Banks C, Zimbardo P. A study of prisoners and guards in a simulated prison. Nav Res Rev 1973;9:1–17.
8. Haney C, Banks C, Zimbardo P. Interpersonal dynamics in a simulated prison. Int J Criminol Penology 1973;1:69–97.
9. Zimbardo P. The Lucifer Effect: Understanding How Good People Turn to Evil. New York, NY: Random House; 2007.