Medical outcomes and patient safety depend on good teamwork among healthcare practitioners.1 However, the systematic training of healthcare teams is a recent development in medical education.2 Formal team training in crisis management in medical, dental, and nursing schools has been offered sporadically, if at all. Informal instruction has been primarily by observation, face-to-face role-playing, and clerkship experience in emergency departments, operating rooms, intensive care units, and delivery rooms. Each has its shortcomings: observation is passive learning, and interactive learning by role-playing with standardized patients or with high-fidelity patient simulators is expensive. Clerkship experience, the highest level of interactive learning, provides random learning opportunities based on the population of patients presenting for care, resulting in “training by serendipity.” Alternative methods are needed.
Other industries already make use of formal team training methods. To better prepare pilots and their crews for potential airline disasters, the airline industry developed supplemental training in Crew Resource Management (CRM) using flight simulators. Flight crews are trained in teamwork and leadership skills by managing various crises, which can be presented on demand and in a safe environment.3
Simulation-Based Team Training in Medicine
Medicine has introduced simulation technology with high-fidelity (physiologically responsive) mannequins as an effective method for team training in operating rooms. Following the pioneering work of Gaba et al.,4 anesthesiologists were the first to advocate a simulation-based course, Anesthesia Crisis Resource Management (ACRM), modeled after the CRM training developed in the airline industry. The ACRM course, implemented in simulation centers equipped with a high-fidelity patient simulator (PS), has now become a standard part of the curriculum in many US residencies.5 Successful experience in this simulation environment has led to the study of simulation in emergency medical education6,7 and to the development of a simulation-based Emergency Medicine Crisis Resource Management (EMCRM) course.8 Other medical specialties, such as perinatal medicine (including obstetrics)9 and radiology,10 are following this lead.
A Virtual Environment for Team Training in Emergency Medicine
Few attempts to apply high-fidelity, mannequin-based team training methods to virtual, computer-based learning environments, in which users meet online and take the role of an avatar, or character, have been reported. Reznek et al.11 recommend the use of virtual simulation-based methods. Initial efforts in this direction include Argonaute 3D, a Virtual World for distance consultation in oncology,12 and MATADOR, a multiuser simulator that facilitates team training in emergency medicine.13 Nyssen et al.14 have also compared computer-based simulations with mannequin-based simulations in the training of anesthesiology residents.
Project Goals
The successful preliminary study of EMCRM using mannequin-based simulations prompted us to develop an alternative simulation environment: an Internet-based virtual emergency department (Virtual ED) with physiologically responsive patients. The goals of this project were 1) to create a virtual learning environment for training healthcare teams to apply the basic principles of team leadership and trauma management (Advanced Trauma Life Support), and 2) to evaluate the Virtual ED by comparing users’ experiences with both simulation training methods.
DEVELOPMENT
A 3D Virtual World representing one trauma bay of an ED was created using Atmosphere, a beta-version software program from Adobe Systems, Inc.,15 that supports multiuser, real-time interactions in a virtual learning environment. Learners log onto the program and take the role of a team member who is represented in the virtual environment by an avatar, or character, that can perform diagnostic and therapeutic actions on a virtual patient. Healthcare team member avatars, representing both male and female users, and 6 virtual trauma patients, or nonplayer characters, were designed using Poser from Curious Labs, Inc.,16 a software program for creating human figures for animation.
Team members enter the Virtual ED by clicking on the appropriate hyperlink on a standard HTML webpage (Fig. 1). Once inside the ED, learners use the mouse or keyboard directional keys to move their avatars in the virtual environment. They may initiate specific actions, such as using a stethoscope or performing a venipuncture, by selecting from a Controls Menu (Fig. 2). Over 30 common diagnostic and therapeutic actions were programmed into the Virtual ED using JavaScript.
Figure 1: The virtual emergency department (Virtual ED).
Figure 2: User controls for the virtual emergency department.
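The original Atmosphere scripts are not reproduced in this report. Purely as a hypothetical sketch of how a menu of diagnostic and therapeutic actions might be organized in plain JavaScript, the following registry maps each menu entry either to a finding (diagnostic) or to a set of vital-sign changes (therapeutic); the action names, fields, and performAction helper are illustrative assumptions, not the Virtual ED’s actual code.

```javascript
// Hypothetical sketch only: a registry of menu-selectable actions for a
// virtual trauma patient. Names and effect values are invented for illustration.
const actions = {
  checkAirway:     { type: "diagnostic",  reveals: "airwayStatus" },
  auscultateChest: { type: "diagnostic",  reveals: "breathSounds" },
  startIVFluids:   { type: "therapeutic", effect: { bloodPressure: +10, heartRate: -5 } },
  applyOxygenMask: { type: "therapeutic", effect: { oxygenSaturation: +4 } },
};

// Perform one action against the current patient state. Diagnostic actions
// return a finding; therapeutic actions return vital-sign changes to apply.
function performAction(actionName, patientState) {
  const action = actions[actionName];
  if (!action) throw new Error("Unknown action: " + actionName);
  if (action.type === "diagnostic") {
    return { finding: patientState.findings[action.reveals] };
  }
  return { vitalSignChanges: action.effect };
}

// Example: a team member listens to the chest of a patient with a chest injury.
const patientState = {
  findings: { airwayStatus: "clear", breathSounds: "absent on the left" },
};
console.log(performAction("auscultateChest", patientState));
```

In the Virtual ED, a selected action is also reflected visually to the other users (see Avatar Interactions below); that networking and animation layer is omitted from this sketch.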
Patient Responses
Virtual patients respond with appropriate physiological changes, such as an increase in heart rate or a decrease in blood pressure, as a result of team members’ actions. These changes in the patients’ vital signs are displayed on a monitor in the Virtual ED. Four physiological parameters (heart rate, blood pressure, respiratory rate, and oxygen saturation) have been programmed into the system, so virtual patients respond appropriately to team members’ clinical interventions. To heighten the user’s sense of engagement, the virtual patient’s condition, or state, was programmed to change in an accelerated time sequence (about 50% faster than real-time responses).
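The underlying physiological model is not described in detail in this report. Purely as a hedged illustration of the accelerated pacing described above, the sketch below advances the four programmed vital signs on a clock that runs roughly 1.5 times faster than real time; the drift rates and starting values are invented for the example and are not the Virtual ED’s actual model.

```javascript
// Hypothetical sketch: advance the four programmed vital signs on an
// accelerated clock. The 1.5x factor reflects the "about 50% faster" pacing;
// the per-minute drift values are invented for illustration.
const TIME_SCALE = 1.5;

const vitals = { heartRate: 118, bloodPressure: 92, respiratoryRate: 26, oxygenSaturation: 93 };

// Untreated deterioration per simulated minute (e.g., ongoing bleeding).
const driftPerMinute = { heartRate: +3, bloodPressure: -4, respiratoryRate: +1, oxygenSaturation: -1 };

// Advance the patient state by a given number of real-world seconds.
function advance(state, realSeconds) {
  const simulatedMinutes = (realSeconds / 60) * TIME_SCALE;
  for (const key of Object.keys(state)) {
    state[key] += driftPerMinute[key] * simulatedMinutes;
  }
  return state;
}

// After one real minute with no treatment, the displayed vital signs have
// worsened by 1.5 simulated minutes' worth of drift.
console.log(advance({ ...vitals }, 60));
```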
In the current version, the virtual patients are immobile on gurneys, but any position can be programmed, such as “log-rolling” for examination of the patient’s back. On command, the gurney can tilt into the Trendelenburg position.
Avatar Interactions
Team members communicate with each other and with the virtual patient in real time using live voice-over-Internet-protocol (VoIP) audio from their desktops. In this simulation, we implemented Talker from DigitalSpace,17 a multiuser VoIP audio system. Each team member uses a headset with headphones and a microphone to communicate with the patient and the other team members.
The learner’s avatar may perform a selected diagnostic or therapeutic intervention only if it is positioned appropriately in relation to the virtual patient. For example, an avatar standing at the head of the patient cannot check femoral pulses or examine the patient’s abdomen. As in real life, if 2 avatars are too close together, they may interfere with each other’s actions.
Avatar actions are visible to other team members, but the information generated by an action (for example, finding that the airway is clear after an airway examination) is displayed only to the team member who performed it. Therefore, team members must verbally report the results of their actions to their fellow team members to function well as a team. This feature was created intentionally to replicate the real-world situation in a medical emergency, where effective communication among team members is critically important.
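Neither the distance thresholds nor the display mechanism is specified in the paper. The following is a minimal sketch, under assumed coordinates and ranges, of how an examination could be gated on the avatar’s position and how its result could be shown only to the examining team member; all object names are hypothetical stand-ins.

```javascript
// Hypothetical sketch: an action succeeds only if the avatar is within reach
// of the relevant body region, and the clinical finding is shown only to the
// avatar that performed the action. Coordinates and ranges are invented.
function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

function examineAbdomen(avatar, patient, allAvatars) {
  if (distance(avatar.position, patient.regions.abdomen) > 1.0) {
    return { ok: false, message: "Move closer to the patient's abdomen." };
  }
  // The examination gesture is visible to everyone in the trauma bay...
  allAvatars.forEach(other => other.see(avatar.name + " examines the abdomen"));
  // ...but the finding is displayed only to the examining team member,
  // who must then report it verbally to the rest of the team.
  avatar.showPrivately("Abdomen: " + patient.findings.abdomen);
  return { ok: true };
}

// Minimal stand-ins for the avatar and patient objects used above.
const makeAvatar = (name, x, y) => ({
  name,
  position: { x, y },
  see: msg => console.log("[all] " + msg),
  showPrivately: msg => console.log("[" + name + " only] " + msg),
});
const patient = {
  regions: { abdomen: { x: 0, y: 0 } },
  findings: { abdomen: "tender left upper quadrant" },
};
const leader = makeAvatar("Leader", 0.5, 0.2);
const nurse = makeAvatar("Nurse", 3, 3);
console.log(examineAbdomen(leader, patient, [leader, nurse]));
console.log(examineAbdomen(nurse, patient, [leader, nurse]));
```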
Scenario Development
Ten clinical scenarios originally developed for the PS system at the Simulation Center at Huddinge University Hospital were adapted for use in the Virtual ED. Six of these clinical scenarios were selected, representing patients with distinct injuries or medical conditions, such as a ruptured spleen, a chest injury, or a fractured limb (Table 1). In each training method, the trauma team must evaluate the clinical status of the virtual patient or mannequin and initiate appropriate treatment. The patient’s vital signs are programmed to demonstrate a beneficial response if the appropriate clinical actions are selected. If clinical priorities are incorrect or attention is focused on a secondary problem, the vital signs deteriorate over approximately 10 minutes until the virtual patient or mannequin “dies.” In this situation, the faculty member may wish to intervene and “stop action” during a scenario to provide immediate corrective feedback to the team members. During the scenario, all interactions in the Virtual World, including audio communication, are recorded on videotape for subsequent review during a debriefing session.
Table 1: Clinical Scenarios Used in Both Arms of Study
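The exact scenario logic is not published here. As a simplified sketch under assumed names and numbers, the loop below lets the patient’s blood pressure fall toward zero over roughly 10 minutes unless the scenario’s key intervention (hypothetically, hemorrhage control) is performed in time; it is meant only to illustrate the deterioration behavior described above.

```javascript
// Hypothetical sketch of a scenario's deterioration timer. The key
// intervention name, starting pressure, and rate are invented for illustration.
function runScenario(keyIntervention, actionsByMinute, totalMinutes) {
  let bloodPressure = 100;
  const dropPerMinute = 10; // reaches zero in about 10 minutes if untreated

  for (let minute = 1; minute <= totalMinutes; minute++) {
    if (actionsByMinute[minute] === keyIntervention) {
      return { outcome: "stabilized", atMinute: minute, bloodPressure };
    }
    bloodPressure -= dropPerMinute;
    if (bloodPressure <= 0) {
      return { outcome: "patient died", atMinute: minute };
    }
  }
  return { outcome: "ongoing", bloodPressure };
}

// The team fixates on a secondary problem and never controls the bleeding:
console.log(runScenario("controlHemorrhage", { 3: "splintFracture" }, 12));
// The team performs the key intervention at minute 4:
console.log(runScenario("controlHemorrhage", { 4: "controlHemorrhage" }, 12));
```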
TRAINING METHOD
The training method used for this research is based on Kolb’s experiential learning,18 in which learners participate in a shared learning experience. Once the exercise is complete, they work with the instructor/facilitator to understand how they might improve their performance in the next iteration of the training scenario. In this style of training, the instructor’s role is to brief the trainees at the beginning of the exercise, introducing them to the “rules of the game,” and to observe and listen to the interactions that occur during the scenario. After the scenario has ended, the instructor leads a follow-up discussion with the learners to elicit from them what worked well and what they might have done differently.
Debriefing Session
The debriefing is the critical part of the educational process. Guided by the faculty member in a meeting after each scenario, students are asked to critique themselves and each other in a constructive and nonthreatening manner. The faculty member then uses the video data to demonstrate which behaviors were effective and which could be improved. If a face-to-face meeting following the scenarios is not practical, the debriefing can be conducted over the headphones, allowing students and faculty to participate in the full session from distant locations. Any number of scenarios may be used during an educational session, depending on the time available, faculty interest, and the specific goals of the session. Consecutive scenarios alternating with debriefings allow learners to immediately apply the lessons learned during the previous scenario and debriefing.
RESEARCH PROTOCOL
This prospective, pre- and postintervention study was approved by the Administrative Panels on Human Subjects at Stanford University, and written informed consent was obtained from all subjects.
The study design was a pretest-posttest comparison of mean scores of observed team leadership skills across the 2 types of simulation systems, the Virtual ED and the PS. Within each arm of the study, subjects’ scores on a pretest assessment case were compared with their scores on a different posttest assessment case that was matched for difficulty level. Comparison of gain scores across the 2 arms of the study would reveal whether the scores of trainees using 1 simulation method differed significantly from those of trainees who used the other simulation method.
Subjects
Thirty subjects, including 13 recent medical graduates and 17 students in their clinical years, volunteered to participate in this “end-of-June” study (Table 2). Fifteen were men and 15 were women; 25 were students from Stanford University School of Medicine, and 5 were from a nearby medical center. All subjects were provided online instructional materials, written by the emergency medicine (EM) faculty member, that summarized the principles of trauma care, ED team leadership, and EMCRM. Subjects were randomly assigned to either the Virtual ED simulator group (n = 16) or the PS simulator group (n = 14). Both groups followed the same research protocol, studied the same patient cases in the same sequence, and were rated by the same 3 experienced clinicians using the same behavioral rating scale (Fig. 3). After a user interface training session for their respective simulation systems, subjects performed a pretest trauma case, followed by 4 learning cases, and then a posttest case. The total time for training and assessing each pair of subjects was approximately 4 hours.
Table 2: Subjects' Demographics
Figure 3: EMCRM rating scale and scoring rubric.
Subjects worked through the scenarios as members of a 4-person trauma team. For learning cases, 2 subjects worked together with 2 members of the research team (an ED physician and ED nurse). However, when being assessed, each subject took the role of the team leader, and 3 members of the research team took the roles of his or her team members. The researchers played a “neutral” role (neither helping nor hindering the leader’s performance). Thus, for assessment cases, each subject was part of a 4-person team (3 physicians and 1 ED nurse), so there were a total of 30 teams performing both the pretest and posttest scenarios.
The team leader received the patient report from the ambulance crew, read it to the other team members, and began to direct their actions. The other team members performed the duties assigned to them by the leader and communicated their findings. With each new learning scenario, the 2 subjects alternated between acting as the team leader and acting as a team member. The scenario ended when the team had effectively stabilized the patient, as assessed by the faculty member, or when the patient died. Each learning case was followed by an instructor-facilitated debriefing session to discuss the actions taken and provide feedback.
Assessing Subjects’ Attitudes
On completion of the posttest case, subjects were asked to complete an exit questionnaire. The subjects rated each item on the questionnaire, using a 5-point Likert scale (1 = low; 5 = high). Questions were designed to gather demographic data and query subjects’ opinions about the learning experience they had just completed.
Assessing Performance
The EMCRM scale was used to assess each subject’s team leadership skills (Fig. 3). The scale was an adaptation of a scale previously developed by one of the authors for use in the Emergency Medicine Residency training program.8 Three instructor/raters independently observed each subject’s performance as the team leader on both the pretest and posttest cases. The raters used a 5-point Likert scale (1 = poor; 5 = excellent) to assign scores for each of the 10 items on the scale and for an overall item, yielding a maximum possible total score of 55.
Before the study, a detailed scoring rubric describing the range of possible behaviors for each item on the scale was developed by an educational evaluator in consultation with the EM faculty member. The scoring rubric was given to each rater in a briefing session before the study and discussed in detail to ensure that each rater understood which specific behaviors should be scored as average, below average, or above average in both the Virtual ED and the PS scenarios. Raters were told to use their judgment regarding the extent to which a subject’s performance was above or below average. For example, if the rater felt that a subject’s leadership behavior was above average, the rater was to judge whether the behavior demonstrated “somewhat above average” performance (a “4”) or “significantly above average” performance (a “5”). Similarly, if the rater felt that a subject’s leadership behavior was below average, the rater was to judge whether the behavior was “somewhat below average” (a “2”) or “significantly below average” (a “1”). One item of the detailed scoring rubric is shown in Figure 3.
Statistical Analysis
Statistical analysis was performed using SPSS for Windows, Version 12 (SPSS Inc., Chicago, IL). Interrater reliability among the 3 instructors was calculated with an intraclass correlation coefficient. The internal consistency of the EMCRM scale was measured using Cronbach’s α. The subjects’ summary scores on the pretest and posttest cases were compared using the Wilcoxon signed rank test, a nonparametric test of the statistical significance of differences between paired scores.
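The study’s analysis was performed in SPSS; as an illustration of the internal-consistency statistic only, the sketch below computes Cronbach’s α for a small, made-up matrix of ratings (rows are subjects, columns are scale items) using the standard formula α = (k / (k − 1)) × (1 − sum of item variances / variance of total scores). The sample data are invented and do not come from the study.

```javascript
// Illustration only: Cronbach's alpha for a matrix of ratings, where rows are
// subjects and columns are scale items. The sample data are made up.
function sampleVariance(values) {
  const mean = values.reduce((sum, v) => sum + v, 0) / values.length;
  return values.reduce((sum, v) => sum + (v - mean) ** 2, 0) / (values.length - 1);
}

function cronbachAlpha(ratings) {
  const k = ratings[0].length; // number of items on the scale
  let sumOfItemVariances = 0;
  for (let item = 0; item < k; item++) {
    sumOfItemVariances += sampleVariance(ratings.map(row => row[item]));
  }
  const totalScores = ratings.map(row => row.reduce((sum, v) => sum + v, 0));
  return (k / (k - 1)) * (1 - sumOfItemVariances / sampleVariance(totalScores));
}

// Four hypothetical subjects rated on three 5-point items.
console.log(cronbachAlpha([[4, 5, 4], [3, 3, 2], [5, 5, 5], [2, 3, 2]]));
```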
RESULTS
Summary of Exit Questionnaire Data
The majority of the volunteers were not gamers; 75% of the Virtual ED group and 57% of the PS group had not played video games in the past year. Seventy-five percent of the Virtual ED group and 86% of the PS group reported that the session changed their feelings/attitudes about working as a member or leader of an ED team. Their ratings on the exit questionnaire (5-point Likert scale: 1 = low; 5 = high) showed that 88% of the Virtual ED group and 93% of the PS group felt immersed “much” or “all of the time” (Fig. 4). Before the training experience, both groups rated their confidence in their “ability to lead an ER team in the management of a trauma patient” as low or “unsure” (mean 2.0 for the Virtual ED group and 2.2 for the PS group). Fifty-six percent of the Virtual ED group and 78% of the PS group rated themselves as “confident” or “very confident” in their ability to lead an ED team after the training session (mean 3.4 for the Virtual ED group and 3.9 for the PS group) (Fig. 5). All subjects in both groups felt that the simulation exercises would be either “useful” or “very useful” for learning to initially assess and manage trauma patients in the ED. Finally, 94% of the Virtual ED group and 100% of the PS group felt the simulation exercises would be either “useful” or “very useful” for learning to work as a member of an ED team (Fig. 4). Their comments indicated that they also perceived the patient physiology models and the virtual environment as realistic. A summary of the exit questionnaire data is presented in Figures 4 and 5.
Figure 4: Participants’ ratings of immersion and usefulness.
Figure 5: Participants’ confidence before and after simulation exercises.
Assessment of Team Leader Performance
Learners who used either the Virtual ED or the PS system showed significant improvement in performance between the pretest and posttest cases (P < 0.05). Mean scores for subjects’ performance on the pretest case compared with the posttest case in the Virtual ED group and the PS group are shown in Figure 6. Analysis of variance of pretest and posttest scores on the EMCRM rating scale by simulator group showed no significant differences between the Virtual ED group and the PS group on either the pretest or the posttest summary scores (Tables 3–5).
Figure 6: Subjects’ mean summary scores on pre- and posttest assessments of team leadership skills for the patient simulator and Virtual ED simulations.
Table 3: Descriptive Statistics for Virtual ED and Patient Simulator Groups
Table 4: ANOVA of Pretest Summary Scores
Table 5: ANOVA of Posttest Summary Scores
The EMCRM rating scale had an internal consistency of 0.96, as measured by Cronbach’s α. The interrater reliability of the rating scale, calculated as an intraclass correlation coefficient, was 0.71.
DISCUSSION
Subjects in this pilot study demonstrated statistically significant improvement in their team leadership skills after participating in either the half-day PS simulation training session or the half-day Virtual ED simulation training session. This demonstrates that both the mannequin-based simulation of ED cases and the Virtual World simulation of ED cases are valid training methods that produce improvements in EMCRM team leadership skills in this target group. In addition, no statistically significant differences were found when comparing the mean summary scores of those trained with the patient simulation and those trained with the Virtual ED simulation. Although the absence of statistical significance does not prove equivalence of the 2 simulation methods, it does suggest that the new approach, team training with online Virtual World simulation, yields results comparable to those achieved with the predominant method for simulation-based team training, instrumented mannequin-based simulation in a real ED environment.
There are, nevertheless, several limitations of this study. First, the small sample size reduces the power of the study and thus increases the likelihood of a type II error (failure to reject a null hypothesis that is actually false). Second, the reliance on a convenience sample of volunteer students and recent graduates limits the generalizability of the results, although the subjects were drawn from 2 major medical training centers in Northern California, which extends the generalizability somewhat. Other limitations include the fact that raters were not blinded to whether the assessment scenarios were administered for pretest or posttest purposes, and additional rater training with videotaped scenarios of both Virtual World and PS cases might have improved interrater reliability above 0.71. In addition, a crossover research design in which both groups received both training methods would have been desirable; this was not feasible because of limited funding and limited availability of subjects. Because this pilot study did not address the transfer of skills from the training environment to the clinical environment, that question remains unanswered and is an important focus for future studies of simulation-based training.
The results of this study are potentially important for several reasons. First, the use of Virtual Worlds for team training is likely more cost-effective than the PS method. Although there are initial development costs for creating both training environments, the online learning environment and the interactions of virtual hospital staff with virtual patients offer a number of advantages that would seem to offset these initial costs. These advantages include the ability to use the Virtual World with multiple trainees simultaneously and over time, and the reduced opportunity costs when trainees can attend training sessions at a variety of times and from geographically dispersed locations. In addition, a variety of patient conditions can be modeled to simulate individual trauma patients; more significantly, multiple victims from a mass casualty incident can be presented simultaneously in the online learning environment. Although individual patient scenarios are commonly presented in many simulation programs using either mannequins or standardized patients, a simultaneous mass casualty scenario is difficult to reproduce in a Simulation Center; this feature of Virtual Worlds provides a more challenging training opportunity for EM residents and more experienced hospital staff.
The Virtual World can be designed to replicate the exact layout and location of equipment, supplies, and other resources to mirror the real work setting. Once a new Virtual World is created, scenarios can be run repeatedly in a short period of time, allowing trainees to receive feedback quickly and thus learn from their mistakes. With the After Action Review feature, trainees’ performance during the simulation can be captured and recorded from any of the virtual team members’ perspectives for playback and assessment after the event.
In addition to the results from our exit questionnaire and performance ratings, students emphasized the emotional impact of the simulated experience in the Virtual World with comments such as “I will never forget to order blood glucose when a comatose patient is presented to me!”
CONCLUSION
Although instrumented mannequin-based simulators will continue to be useful for teaching healthcare teams to manage patients in acute care settings, this study demonstrates that both the PS and the Virtual World simulation learning environments improve learning outcomes in team training for trauma care. The greater flexibility, practicality, and scalability of the online Virtual World environment promise to make it a valuable complement to current simulation-based training methods.
Although this study reports the design, development, and evaluation of an early prototype Virtual ED, the findings are encouraging, demonstrating the potential value of training healthcare teams with Virtual World learning environments. Further studies are needed to replicate these findings and extend the research to the areas of retention and transfer of team leadership skills to real hospital settings.
As the software development platform for this learning environment is no longer available, future Virtual World developers will need to use newer platforms with improved graphics and functionality. Future work should include the design and development of improved virtual hospital learning environments in which 1) learners’ interactions with patients and staff are more realistic; 2) virtual patients exhibit more sophisticated physiological responses to clinical interventions; and 3) users can focus more on improving their clinical decision-making skills.
ACKNOWLEDGMENTS
The authors acknowledge the collaborators in this project, Li Fellander Tsai, MD, PhD, and Carl-Johan Wallin, MD, Huddinge Hospital, Karolinska Institutet, who contributed the 6 trauma scenarios used in both the HPS and Virtual ED exercises. The authors also acknowledge the funding source for this research—the Wallenberg Global Learning Network (WGLN), Stockholm, Sweden.
REFERENCES
1. Morey JC, Simon R, Jay GD, et al. Error reduction and performance improvement in the emergency department through formal teamwork training: evaluation results of the MedTeams project. Health Serv Res 2002;37:1553–1581.
2. Kohn L, Corrigan J, Donaldson M. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 1999.
3. Wiener EL, Kanki BG, Helmreich RL. Cockpit Resource Management. New York: Academic Press; 1993.
4. Gaba DM, Howard SK, Fish KJ, Yang G, Sarnquist FH. Anesthesia crisis resource management training: teaching anesthesiologists to handle critical incidents. Aviat Space Environ Med 1992;63:763–770.
5. Gaba DM, Howard SK, Fish KJ, Smith BE, Sowb YA. Simulation-based training in anesthesia crisis resource management (ACRM): a decade of experience. Simul Gaming 2001;32:175–193.
6. Small SD, Wuerz RC, Simon R, Shapiro N, Conn A, Setnik G. Demonstration of high-fidelity simulation team training for emergency medicine. Acad Emerg Med 1999;6:312–323.
7. McLaughlin SA, Doezema D, Sklar DP. Human simulation in emergency medicine training: a model curriculum. Acad Emerg Med 2002;9:1310–1318.
8. Reznek M, Smith-Coggins R, Howard S, et al. Emergency medicine crisis resource management (EMCRM): pilot study of a simulation-based crisis management course for emergency medicine. Acad Emerg Med 2003;10:386–389.
9. Halamek LP, Kaegi DM, Gaba DM, et al. Time for a new paradigm in pediatric medical education: teaching neonatal resuscitation in a simulated delivery room environment. Pediatrics 2000;106:E45.
10. Sica GT, Barron DM, Blum R, Frenna TH, Raemer DB. Computerized realistic simulation: a teaching module for crisis management in radiology. AJR Am J Roentgenol 1999;172:301–304.
11. Reznek M, Harter P, Krummel T. Virtual reality and simulation: training the future emergency physician. Acad Emerg Med 2002;9:78–87.
12. LeMer D, Soler L, Pavy D, et al. Argonaute 3D: a real-time cooperative medical planning software on DSL network. Medicine Meets Virtual Reality 12, 2004:203–209.
13. Halvorsrud R, Hagen S, Fagernes S, Mjelstad S, Romundstad L. Trauma team training in a distributed virtual emergency room. Medicine Meets Virtual Reality 11, 2003:100–102. Available at: http://www.telenor.no/fou/prosjekter/matador/.
14. Nyssen AS, Larbuisson R, Janssens M, Pendeville P, Mayne A. A comparison of the training value of two types of anesthesia simulators: computer screen-based and mannequin-based simulators. Anesth Analg 2002;94:1560–1565.
15. Atmosphere software, Adobe Systems Inc., San Jose, CA, USA. Available at: http://www.adobe.com.
16. Poser software, Curious Labs, Inc., Santa Cruz, CA, USA. Available at: http://www.curiouslabs.com.
17. Talker software, DigitalSpace, Inc., Santa Cruz, CA, USA. Available at: http://www.digitalspace.com.
18. Kolb DA. Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice Hall; 1984.