Empirical Investigations

Integrated In-Situ Simulation Using Redirected Faculty Educational Time to Minimize Costs

A Feasibility Study

Calhoun, Aaron W. MD; Boone, Megan C. RN, MSN; Peterson, Eleanor B. MD; Boland, Kimberly A. MD; Montgomery, Vicki L. MD

Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare: December 2011 - Volume 6 - Issue 6 - p 337-344
doi: 10.1097/SIH.0b013e318224bdde


Medical simulation was developed in response to the observation that centers with higher volumes of a given condition typically have better outcomes for that disorder (the so-called volume-outcome relationship).1–3 It was hoped that, by enabling the standardized reproduction of rare events, simulation could improve outcomes for those events. Numerous studies have since confirmed this, and the value of medical simulation is now widely recognized.4–10 The relative immobility of the first generation of simulation equipment required that sessions be held in centralized locations where the equipment needed to enact each scenario could be concentrated and coordinated. The technological complexity of these early simulators also required paid technical staff to deal with mechanical issues. Although this infrastructure was necessary, the widespread growth of the center-based approach has at times been limited by the often high cost of construction and ongoing staffing needs, placing simulation technology beyond the reach of some smaller institutions.5

A further limitation stems from the logistical issues involved with sending large numbers of trainees to a remote site.2 One means of addressing this was proposed by Weinstock et al,11 who theorized that an on-site location would allow higher numbers of participants to engage in frequent, repetitive simulation-based activities with the premise that the higher volume of training facilitated by the center's proximity to the clinical arena would allow for a greater improvement in participant skill. After a year of operation, this center was able to engage 100% of critical care fellows, 86% of nurses, 90% of respiratory therapists, and 70% of pediatric house staff in 1548 simulated teaching sessions during their normal daily workflow.11 Although this development solved the issue of proximity, it still incurred significant spatial and personnel costs; specifically, 436 square feet of unused space, an initial cost of $472,000, and an ongoing yearly cost of $67,875 (85% of which consisted of staff salaries).11

Progress in simulator technology has, however, opened up further options to address cost. Several of the newer high-fidelity simulators are fully portable, rendering true in situ simulation a viable option for many centers.12 Using this methodology, simulators are transported to actual clinical sites, allowing for enhanced multidisciplinary involvement and improved realism.7,13,14 It has further been suggested that with the development of in situ methodology, simulation has reached a status equivalent to that of computer science in the 1970s, with mannequin technology having become user-friendly enough to be accessible to the average motivated clinician.15 This methodology effectively enables a program to reduce the space requirements to, at minimum, those needed for mannequin storage, thus addressing the spatial costs of program initiation.

When considered in light of the space limitations faced by our institution, these developments inspired us to design a simulation program structure that uses minimal permanent space (storage only). Given concurrent institutional funding limitations, we chose to explore the use of a purely part-time faculty drawn from clinicians able to “redirect” a portion of their paid educational time toward simulation-based education and assume roles traditionally reserved for full-time simulation center staff. If this approach could be proven viable, it would open significant opportunities for institutions currently limited in their ability to use simulation. In July 2008, we implemented this program. The purpose of this article is to report on its development and growth as a means of discussing the feasibility of this approach.


Methods

This study was reviewed and approved by the University of Louisville Institutional Review Board.

Program Initiation

Our initial programmatic goal was to designate several patient bed spaces for use as primary “simulation nodes.” The initial node was located in the Kosair Children's Hospital “Just for Kids” Critical Care Center (CCC), a 26-bed multidisciplinary pediatric intensive care unit. Two bed spaces were designated as “primary” simulation nodes, and the CCC family conference room was designated as a secondary space. To facilitate video debriefing, an audiovisual system was purchased and installed in these locations, allowing for video recording and playback into the CCC conference room, our initial debriefing site. This approach was chosen to improve the quality of the video data derived from program sessions and to attain some of the benefits of the technology present in a free-standing simulation center. At the program's outset, we purchased two portable high-fidelity pediatric simulators (a 6-month-old and a 7-year-old model); mobile carts for transportation and storage of the mannequins and their associated air compressors; monitors and electrical equipment; and a code cart and defibrillator identical to those used in the CCC. All other ancillary medical equipment and drugs were taken from expired stock. No permanent space was designated for the program except the storage space necessary for the code cart and mannequins and the area occupied by the main audiovisual system.

As these bed spaces represent active patient care areas in our CCC, they are occupied the majority of the time. Because of this, we developed a strategy in which, on days during which simulations are planned, these bed spaces are designated for surgical patients likely to arrive after 12 pm and simulation sessions are correspondingly held in the morning. This approach was coupled with the designation of a “backup” location for simulations in the CCC family conference room to minimize cancellations.

Program expansion was conducted in a phased manner, with the staged deployment of Emergency Department, Ward, and Recovery Room satellite nodes using a similar process of primary and backup space designation. Given that the audiovisual system was limited to the CCC, we have not yet integrated video into these areas. Table 1 gives a breakdown of initial costs.

Table 1:
Startup and Operating Costs

Curriculum Development and Growth

Curriculum development was structured around the theme of multidisciplinary crisis management, further subdivided into medical and “relational” crises. Medical crisis management course content was initially developed from key principles found in the Crisis Resource Management (CRM) training literature and subsequently adapted to address issues present in our local institutional context.16–19 Relational crisis management course content focused on the acquisition of the interpersonal skills needed to deliver difficult information and resolve conflict within the doctor-patient relationship. This program, termed the Program for the Approach to Complex Encounters (PACE), was based on instructional methodology developed by the Institute for Professionalism and Ethical Practice and uses Standardized Patients (SPs) as surrogate family members with whom conversations take place.20–22 In addition to this, monodisciplinary nursing skills sessions were initiated by the program's nursing leadership with the goal of preparing CCC and Ward nurses for participation in the larger CRM program.

Faculty Acquisition and Training

Program faculty were drawn from an interested group of CCC and General Pediatrics Faculty and Nurses willing to redirect part of their previously scheduled educational time toward simulation in a cost-neutral manner. At our institution, physicians are university faculty, while nursing staff are employed by a not-for-profit, private healthcare organization. Physician faculty were asked to redirect a percentage of their educational time previously used for nonsimulation-based educational activities (such as didactics) toward simulation with the approval of their division chief, while simulation program nurse educators underwent a similar process in which a fraction of their educational time was redirected to simulation-based activities with the approval of their supervisor.

For each staff member, we attempted to work within the boundaries of their original job description, looking for ways that previous educational goals could be accomplished more effectively using simulation. By doing this, we attempted to assure that there would be no residual educational activities that would then require reassignment. Typical changes to job descriptions resulting from this process were minimal, largely consisting of the addition of “using simulation” or a similar phrase to existing statements of expected educational productivity. This process enabled us to easily secure supervisor and division chief buy-in once the educational benefits of program involvement were presented, because it was clear that no additional staff funding would be required to attain those benefits. SPs used during the PACE program were provided by the University of Louisville Standardized Patient program, which serves both undergraduate and graduate medical communities. Although an hour of SP time typically costs $20, we were able to obtain SP support for free because of the program's graduate focus, further reducing costs. No permanent simulation center staff were hired.

The program director initially received training at the Center for Medical Simulation in Cambridge, Massachusetts, before arriving at this institution and was subsequently trained in the programming, operation, and maintenance of the simulation equipment via manufacturer-based training sessions. This knowledge was then used to train the nursing director and to implement a training program for additional faculty members.

This program began with an instructional session regarding use and care of the simulation equipment, CRM principles, and basic debriefing techniques such as the advocacy/inquiry model.23 New faculty then progressed to a series of in situ training sessions where they assisted in the conduct of ongoing simulation-based education. This allowed them to gain greater familiarity with real-time use of the simulation equipment and offered an opportunity to practice the debriefing skills taught during the initial training session. Finally, new faculty conducted their own simulation sessions under the supervision of more experienced staff. Faculty were evaluated during each of these sessions with regard to equipment competency and debriefing skill, and feedback was given immediately after each program. When new faculty were deemed competent (typically after two to three sessions), direct supervision was discontinued. PACE program staff received additional training on the teaching and debriefing of communication skills. The cost of this training is included in Table 1.

Program Structure and Role Distribution

Traditional simulation centers are typically staffed by an educational coordinator, who handles the administrative needs of the program; an SP trainer, who ensures that the SPs are prepared to play their roles adequately; and one or more technicians responsible for mannequin programming and maintenance. Given our staffing model, it was necessary to distribute these roles among the faculty. We assigned the coordinator role, as it pertains to the overall program schedule, to the program director and nursing director, with other faculty members participating in the aspects of the coordination process that fall within their respective clinical domains. The program director and nursing director also assumed the programmer role, as the prospect of learning this skill was viewed with trepidation by many other faculty members. Maintenance presented more of a challenge: even with the improvements in mannequin technology that have occurred over the past several years, we anticipated facing issues that we could not resolve. We addressed this by securing ongoing institutional funding for the highest levels of manufacturer warranty coverage, which enables on-site repair when possible and free shipping for larger malfunctions and for preventative maintenance. The associate director of communication skills shouldered the burden of SP training, as this pertained primarily to the communication skills program. Table 2 gives a schematic of the program staffing structure. Table 3 lists, for each faculty member, the percent full-time equivalent (FTE) devoted to the program and the approximate cost that would be attributable were their salaries borne by the program.

Table 2:
Program Faculty Roles
Table 3:
Allotted Staff Time and Attributable Program Staffing Costs

Assessment of Program Quality and Growth

Before program implementation, resident and CCC nursing educational records were assessed to determine the number of hours spent per person per year in simulation-based training activities. After 2 years of operation, data regarding the number of simulation-based encounters and the total number of hours spent per person in simulation-based training were collected from resident and nurse educational records. Only CCC nursing statistics were used to track growth, because of the currently sporadic involvement of other nursing groups and because critical care was the initial focus of the program and hence would be expected to benefit the most initially. Overall program growth was quantified in terms of hours of simulation time offered per month. Demographic and attendance data were also collected during each session.

Each session of the CRM courses was evaluated by participants with respect to usefulness and quality using a quality assessment tool. This tool consisted of a series of 1 to 5 Likert scale questions asking raters to evaluate session organization, content, physical materials, mannequin, debriefing, session length, and use of video, followed by free-text questions asking for a description of programmatic strengths and weaknesses. Although presently unvalidated, this tool was benchmarked by using it to assess a series of CRM simulations involving the pediatric residents that were held at our medical school simulation center and comparing these ratings with sessions simultaneously conducted as part of our program. These data are depicted in Figure 1 and show a concordance between scores in these two locations. Although the subject groups differed in constitution (off-site simulation groups included only pediatric residents, while groups within our in situ program consisted of pediatric residents, nurses, respiratory therapists, and pharmacists), this comparison preliminarily supports the utility of the tool.

Figure 1.:
Benchmark comparison of quality assessment tool results in both traditional and in situ environments. This figure depicts a comparison between quality assessment tool results as applied in both a longstanding traditional simulation center and in the in situ program described in this article. Likert scale anchors were as follows: 1 = poor, 2 = fair, 3 = acceptable, 4 = good, and 5 = excellent. None of the differences above were significant by Mann–Whitney U test.

To augment this session-specific data collection, yearly online surveys were distributed assessing similar aspects of the CRM program. A year-end assessment of the PACE course was also conducted during the most recent annual survey. Finally, program faculty were surveyed 2 years after program initiation and asked to quantify the percentage of time spent in simulation-based activities.

Statistical Analysis

Demographic, program growth, faculty involvement, and cost data are presented descriptively. Program growth trends were analyzed using linear regression. Data regarding time spent in simulation training by subjects before and after the implementation of the program were compared using the Mann–Whitney U and Kruskal-Wallis tests. All statistical tests were performed using the PASW (formerly SPSS) Statistics 18 program.
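As an illustration of the rank-based comparison described above, the sketch below implements the Mann–Whitney U statistic using only the Python standard library. The study's actual analyses were performed in PASW; the data here are synthetic, not study records.

```python
# Illustrative, stdlib-only sketch of the Mann-Whitney U statistic used to
# compare pre- and post-implementation training hours. Synthetic data only.

def mann_whitney_u(x, y):
    """Return U for sample x via rank sums; tied values receive average ranks."""
    combined = sorted((value, index) for index, value in enumerate(list(x) + list(y)))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        average_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[combined[k][1]] = average_rank
        i = j + 1
    rank_sum_x = sum(ranks[: len(x)])
    return rank_sum_x - len(x) * (len(x) + 1) / 2

# Synthetic hours of simulation training per learner per year.
pre_implementation = [1.5, 2.0, 1.0, 1.7, 1.2]
post_implementation = [3.1, 2.8, 3.5, 2.9, 3.2]

print(mann_whitney_u(pre_implementation, post_implementation))  # 0.0: samples do not overlap
```

Assessing significance would additionally require the exact tables or normal approximation for U; that step is omitted here.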


Results

Program Demographics

Between July 2008 and July 2010, the program held a total of 166 sessions and supported a total of 786 learner encounters defined as a learner experiencing a single simulation-based learning experience. The program trained an average of 148 nurses, 77 residents, 10 respiratory therapists, and 9 pharmacists per year. Twenty percent (34) of sessions could not be held in the area in which they were originally scheduled due to bed occupancy issues and had to be moved to a designated backup site. No sessions were rescheduled due to faculty time issues. One session was cancelled. Each mannequin was returned to the manufacturer once due to malfunction, and each received one preventative maintenance evaluation.

Program Growth

Over the past 2 years, the program has grown from offering 2.7 educational hours per month (averaged quarterly) to 16.8, a greater than sixfold increase in educational time. Program growth was significant by linear regression (P < 0.001). Figure 2 depicts monthly program growth over the past 2 years.

Figure 2.:
Breakdown of monthly simulation time by program and overall 2-year programmatic growth. This graph depicts programmatic growth as broken down into different courses. Courses were implemented in a phased approach, beginning with CRM in July 2008, followed by Nursing Skills in February 2009 and PACE in July 2009. In addition to this, we supported the Society for Pediatric Sedation's Pediatric Procedural Sedation Provider Course in May 2010, resulting in an unusually high amount of simulation time during that month. The overall growth trend is significant by linear regression (P < 0.001, R² value 0.66).

Program Equipment Costs

Startup costs for the program were $128,920.89, including all equipment purchases and installation and communication skills training for PACE faculty. Ongoing costs per year are currently $11,695 and include the warranty and maintenance costs of the mannequins. The breakdown of initial and ongoing costs is shown in Table 1. Ongoing costs per learner encounter averaged over the past year were $14.87.

Effect of Program Implementation on Learner Education

Before curriculum implementation, categorical pediatrics and combined medicine-pediatrics residents attended an average of 1.7 hours of simulation-based training per year and CCC nurses attended an average of 1.1 hours of simulation-based training per year. The latter encounters often did not use high-fidelity simulation equipment. Simulation-based educational hours significantly increased over the first year, with residents attending an average of 3.0 hours of simulation-based training (P < 0.001 by Mann–Whitney U test) and CCC nurses attending an average of 2.9 hours of simulation-based training (P < 0.001 by Mann–Whitney U test). For the residents, this trend continued into the second year, with an average of 4.3 hours of simulation training per year (P = 0.043 by Mann–Whitney U test). The increase in nursing attendance leveled off, with an average attendance of 3.3 hours of simulation training per year (not statistically significant). Overall changes in hours of simulation time per learner are graphically depicted in Figure 3.

Figure 3.:
Increases in simulation-based educational time per learner over the initial 2 years after program implementation. This chart depicts the increase in hours of simulation time per learner per year effected by the program over the first 2 years. Categorical Pediatrics residents, Combined Medicine-Pediatrics residents, and CCC nurses were used as a reference group. Change in hours from 2007 to 2008 (preimplementation) was significant for both groups (P < 0.001 by Mann–Whitney U test for both groups). Change in hours from 2008–2009 to 2009–2010 was significant (P = 0.043 by Mann–Whitney U test) for residents but did not attain statistical significance for the nursing group. Overall trends were significant for both groups (P < 0.001 by Kruskal-Wallis test).

Program Perception by Learners

Participants ranked individual CRM course sessions highly, with an overall rating of 4.6/5 on a 5-point Likert scale. Session components were also rated highly, with average scores of 4.8/5 for session organization, 4.8/5 for session content, 4.7/5 for session educational material, 4.4/5 for mannequin realism, 4.5/5 for session length, 4.9/5 for session debriefing, and 4.1/5 for quality and usefulness of video in debriefing. Year-end data from both the CRM and PACE courses were similar, with overall ratings of 4.5 and 4.4 on a 5-point Likert scale for the CRM course over the 2008–2009 and 2009–2010 academic years, respectively, and an overall rating of 4.4 on a 5-point Likert scale for PACE over the 2009–2010 academic year. Figure 4 depicts longitudinal ratings of the CRM program over the study period.

Figure 4.:
Longitudinal CRM Program quality assessment data. These charts depict the overall and domain-specific quality improvement scores for the CRM program as averaged on a quarterly basis over the study period. Likert scale anchors were as follows: 1 = poor, 2 = fair, 3 = acceptable, 4 = good, and 5 = excellent. The scale of the y-axis on the graph depicting domain-specific scores has been expanded to enable easier visualization of individual components.

Faculty Percentage Effort

Five faculty were involved with the program at its inception. Two years after implementation, this number had grown to nine. In addition, we have cultivated a group of SPs for use in the PACE program. Table 2 depicts the roles performed by each faculty member. Faculty, on average, redirected between 3% and 32% of their total employed time toward simulation-based activities. Table 3 depicts the time allotments for each individual faculty member in terms of approximate percent FTE committed to the simulation program and the yearly cost attributable to each were their salaries paid by the program. Overall redirected faculty costs are $115,320 per year.


Discussion

This study supports the feasibility of operating a simulation program using minimal permanent space and redirected faculty educational time only. Despite the limitations created by the lack of full-time staff, the program was able to grow substantially over the study period. Given that the time devoted by faculty to program-related activities did not exceed 32%, this growth is strong evidence of the overall viability of the model.

We were able to successfully mitigate many of the potential drawbacks of this staffing model by distributing the responsibility for traditional simulation center roles among the program directors and consistently maintaining the most advanced warranties offered by the manufacturers. Given that the pattern of traditional role distribution adopted by our program was determined somewhat arbitrarily, based on the amount of time that individual faculty members could devote to the program, there is no reason why further redistribution could not be performed successfully at other institutions where staff may not have as much assigned educational time. One possibility is to assign each staff member a mannequin or other piece of equipment for which they are responsible. Other options include the use of online scheduling systems, several of which are commercially available, to enable individual faculty to administer their own programs in a more decentralized fashion while assuring that equipment usage conflicts remain minimal.24,25 Good communication between faculty is paramount if a more decentralized model is adopted.

Another question is the generalizability of this model. Would this approach only work in an academic medical center? Given the hybrid nature of our clinical environment and the resulting presence of program staff from both university and private environments, we believe that this model would also be viable in smaller, private hospitals. During the staff recruitment process, we encountered no difficulty in redirecting nursing staff time in a manner similar to the physician staff. Indeed, the process has been so successful that hospital administrators are now actively assisting the program in finding additional staff and launching new program nodes.

This approach generated clear benefits with regard to cost. Startup cost for this particular in situ program ($128,920.89) was less than one third of the cost identified for construction of a permanent on-site center in one representative report ($472,000). This savings was realized in three primary ways: the absence of significant construction costs ($190,789 difference), differences in cost between stand-alone and portable mannequins ($128,360 difference), and our redirected staffing model ($57,500 difference).11 Although not strictly comparable, these considerations illustrate the relative cost effectiveness of each approach. Startup cost could have been further limited to $57,220 by purchasing only one mannequin and using a portable video camera instead of installing a permanent system.13 Although the majority of savings in startup cost can be attributed to the program's in situ design, the savings enabled by our staffing model become much more apparent when ongoing costs are examined. Compared with the representative program above,11 our program achieved a cost reduction of $28.82 per learner encounter per year, the bulk of which is attributable to staffing differences. Given that programmatic costs are tied only to yearly equipment maintenance and not to staff salaries, further increases in simulation involvement by learners will further drive down our cost per learner, increasing this differential. This makes our approach especially suitable for smaller centers with fewer resources: given a motivated staff and the initial investment needed to purchase simulation equipment, programs can be sustained thereafter with little additional funding.
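The claim that cost per learner falls as volume grows follows directly from the program's cost structure: ongoing costs are a fixed yearly sum (equipment warranty and maintenance only), with no staff salaries attached. A minimal sketch using the reported ongoing cost; the encounter volumes below are illustrative assumptions, not figures from the study:

```python
# Fixed ongoing cost reported for the program (warranty and maintenance, Table 1).
FIXED_YEARLY_COST = 11_695.00

def cost_per_encounter(encounters_per_year: int) -> float:
    """Unit cost when all ongoing costs are fixed and no staff are salaried."""
    return FIXED_YEARLY_COST / encounters_per_year

# Doubling volume halves the per-learner cost (illustrative volumes).
for volume in (400, 800, 1600):
    print(f"{volume} encounters/yr -> ${cost_per_encounter(volume):.2f} per encounter")
# Output: $29.24, $14.62, $7.31
```

A center-based program carrying staff salaries has a far larger fixed term, which is why the per-encounter differential widens as learner volume increases.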

Despite this low cost, the program generated significant benefits for individual learners, as we were able to offer a much higher volume of simulation-based educational time than we could previously, with equivalent perceived session quality. These benefits were heightened by the use of actual clinical spaces, which improved environmental fidelity. Furthermore, survey data demonstrated that we were able to maintain a consistent level of quality as the program grew, offering further evidence that this model did not sacrifice quality for volume and decreased cost.

Our approach, however, does possess significant potential limitations. One concern is the potential growth “ceiling” imposed by our staffing model. Given a limited number of potential faculty, each having other duties to carry out in addition to the simulation program, it is clear that, at some point, a program using this methodology will experience demand higher than can be accommodated. Another concern is the applicability of this strategy for institutions with large numbers of learners (such as medical or nursing schools) that must be cycled through simulations multiple times a day. Such needs require permanent space in which to conduct those sessions. Considering this, it seems that the most appropriate context for this model is an institution with financial and spatial constraints that desires to use simulation primarily as a means of ongoing graduate medical education and professional development. Given the growing appreciation of the value of simulation as a teaching methodology, the relative inaccessibility of freestanding simulation centers for many institutions, and the financial constraints faced with regard to education by many hospitals in the current economic climate, many centers likely fit the description above and hence could use this strategy to facilitate program initiation.5,26,27

An additional limitation arises from the use of a “pure” in situ approach. Given this methodology's use of actual patient bed spaces and conference areas for educational space, some programmatic disruption due to unplanned patient volume surges is unavoidable. Other programs using pure in situ methodology have reported a 15% to 17% cancellation rate due to patient volume and acuity.14 As discussed above, we were able to counteract this by designating several areas as “backup” sites, a strategy that was largely successful in avoiding cancellation.

Finally, the lack of quality assessment tool validation data constrains the strength of the conclusions we are able to draw regarding program quality. Given the results of the benchmarking process used, however, and the overall simplicity of the tool, it seems likely that the persistent high scores across the study period meaningfully reflect both program quality and stability.


Conclusions

This report demonstrates the feasibility of an in situ simulation program design that uses minimal space and “redirected” faculty for staffing. Participant impressions and growth data indicate that such a program can maintain quality while achieving sustained growth. Startup costs of this approach represent a savings compared with the construction costs inherent in the creation of a permanent on-site center and are substantially offset by the low ongoing costs made possible by the lack of traditional staff salaries. Such a program structure may be useful for institutions with financial and spatial constraints that desire to implement simulation-based education.


References

1. Luft HS, Bunker JP, Enthoven AC. Should operations be regionalized? The empirical relation between surgical volume and mortality. N Engl J Med 1979;301:1364–1369.
2. Hayes CW, Rhee A, Detsky ME, Leblanc VR, Wax RS. Residents feel unprepared and unsupervised as leaders of cardiac arrest teams in teaching hospitals: a survey of internal medicine residents. Crit Care Med 2007;35:1668–1672.
3. Rogers PL, Grenvik A, Willenkin RL. Teaching medical students complex cognitive skills in the intensive care unit. Crit Care Med 1995;23:575–581.
4. Mayo PH, Hackney JE, Mueck JT, Ribaudo V, Schneider RF. Achieving house staff competence in emergency airway management: results of a teaching program using a computerized patient simulator. Crit Care Med 2004;32:2422–2427.
5. Morgan PJ, Cleave-Hogg D. A worldwide survey of the use of simulation in anesthesia. Can J Anaesth 2002;49:659–662.
6. Blum RH, Raemer DB, Carroll JS, Sunder N, Felstein DM, Cooper JB. Crisis resource management training for an anaesthesia faculty: a new approach to continuing education. Med Educ 2004;38:45–55.
7. Nunnink L, Welsh AM, Abbey M, Buschel C. In situ simulation-based team training for post-cardiac surgical emergency chest reopen in the intensive care unit. Anaesth Intensive Care 2009;37:74–78.
8. Reznek M, Smith-Coggins R, Howard S, et al. Emergency medicine crisis resource management (EMCRM): pilot study of a simulation-based crisis management course for emergency medicine. Acad Emerg Med 2003;10:386–389.
9. Weller JM. Simulation in undergraduate medical education: bridging the gap between theory and practice. Med Educ 2004;38:32–38.
10. Gettman MT, Pereira CW, Lipsky K, et al. Use of high fidelity operating room simulation to assess and teach communication, teamwork and laparoscopic skills: initial experience. J Urol 2009;181:1289–1296.
11. Weinstock PH, Kappus LJ, Kleinman ME, Grenier B, Hickey P, Burns JP. Toward a new paradigm in hospital-based pediatric education: the development of an onsite simulator program. Pediatr Crit Care Med 2005;6:635–641.
12. Kobayashi L, Shapiro MJ, Gutman DC, Jay G. Multiple encounter simulation for high-acuity multipatient environment training. Acad Emerg Med 2007;14:1141–1148.
13. Weinstock PH, Kappus LJ, Garden A, Burns JP. Simulation at the point-of-care: reduced-cost, in situ training via a mobile cart. Pediatr Crit Care Med 2009;10:176–181.
14. Patterson M, Blike G, Nadkarni V. In-situ simulation, challenges and results. AHRQ 2008; Available at: Accessed March 1, 2009.
15. Armstrong E. Simulation as a disruptive innovation. Presented at: International Meeting on Simulation in Healthcare; January 2009; Orlando, FL.
16. Cooper S, Wakelam A. Leadership of resuscitation teams: “Lighthouse Leadership.” Resuscitation 1999;42:27–45.
17. DeVita MA, Schaefer J, Lutz J, Dongilli T, Wang H. Improving medical crisis team performance. Crit Care Med 2004;32(suppl 2):S61–S65.
18. Howard SK, Gaba DM, Fish KJ, Yang G, Sarnquist FH. Anesthesia crisis resource management training: teaching anesthesiologists to handle critical incidents. Aviat Space Environ Med 1992;63:763–770.
19. Pittman J, Turner B, Gabbott DA. Communication between members of the cardiac arrest team—a postal survey. Resuscitation 2001;49:175–177.
20. Browning DM, Meyer EC, Truog RD, Solomon MZ. Difficult conversations in health care: cultivating relational learning to address the hidden curriculum. Acad Med 2007;82:905–913.
21. Calhoun AW, Rider EA, Meyer EC, Lamiani G, Truog RD. Assessment of communication skills and self-appraisal in the simulated environment: feasibility of multirater feedback with gap analysis. Simul Healthc 2009;4:22–29.
22. Meyer EC, Ritholz MD, Burns JP, Truog RD. Improving the quality of end-of-life care in the pediatric intensive care unit: parents' priorities and recommendations. Pediatrics 2006;117:649–657.
23. Rudolph JW, Simon R, Rivard P, Dufresne RL, Raemer DB. Debriefing with good judgment: combining rigorous feedback with genuine inquiry. Anesthesiol Clin 2007;25:361–376.
24. SimBridge; B-line Medical. Available at: Accessed February 15, 2011.
25. Arcadia Suite; Education Management Solutions. Available at: Accessed February 15, 2011.
26. Cheng A, Duff J, Grant E, Kissoon N, Grant VJ. Simulation in paediatrics: an educational revolution. Paediatr Child Health 2007;12:465–468.
27. Ziv A, Wolpe PR, Small SD, Glick S. Simulation-based medical education: an ethical imperative. Acad Med 2003;78:783–788.

Keywords: Pediatrics; Simulation; Education; In situ; Critical care; Cost; Donor staffing; Program organization; Cost neutral; Staffing; Program structure; Simulation context; Physical setting; Faculty staffing; Low cost; Program initiation

© 2011 Society for Simulation in Healthcare