Empirical Investigations

Cost-Utility Analysis of Virtual and Mannequin-Based Simulation

Haerling, Katie A. PhD

doi: 10.1097/SIH.0000000000000280

INTRODUCTION

The Institute of Medicine report, The Future of Nursing: Leading Change, Advancing Health, proposed research priorities for improving nursing education. These priorities included identifying teaching strategies that “most cost-effectively expand nursing education capacity.”1(p276) The follow-up to the Institute of Medicine report, Assessing Progress on the Institute of Medicine Report The Future of Nursing,2 recommended further investigation of strategies used to expand nursing education capacity. Given the limited availability of resources, it is important to make decisions about teaching and learning activities that optimize the cost-effectiveness equation to expand nursing education capacity.

Simulation is one teaching strategy educators and policy makers suggest as an option for increasing nursing education capacity. Mounting evidence supports the use of simulation in healthcare education as a viable alternative to traditional clinical experiences.3–5 Advances in technology and challenges such as increasing enrollments and competition for limited clinical placement sites are intensifying interest in novel applications of simulation. While mannequin-based simulation laboratories are now common components of nursing programs, other forms of simulation are gaining popularity. Specifically, virtual simulation has been described as “increasingly more accessible and affordable to students, faculty, and researchers.”6

Previous studies suggest virtual simulation as a cost-effective strategy in healthcare education.3,7 However, most research investigating the comparative effectiveness and cost-effectiveness of virtual simulation has compared virtual simulation only with no intervention or with control-like interventions such as lecture.4,8 To make informed decisions about which types of simulation are most effective for addressing specific learning objectives, research comparing the effectiveness and costs of specific types of simulation activities is necessary.

To fill this void, the current study compared learning outcomes between students who participated in one of two types of simulation activities, as well as the costs associated with each. The purposes of this study were to (1) compare cognitive, affective, and psychomotor learning outcomes between students using mannequin-based simulations and students using virtual simulations and (2) describe a cost-utility analysis (CU) comparing the two types of simulation activities.

Theoretical Framework and Definitions

This study used the National League for Nursing (NLN)/Jeffries Simulation Theory as a guiding framework, along with definitions from the Society for Simulation in Healthcare's Healthcare Simulation Dictionary, to describe two types of simulation: mannequin-based simulation and virtual simulation.9,10

The NLN/Jeffries Simulation Theory provides a graphical representation of how the context, background, and design of a simulation affect the simulation experience and outcomes. This study looked at how manipulating specific design features impacted the simulation experience and learning outcomes. These variables are described in further detail in the methods section.

The Healthcare Simulation Dictionary10(p21) describes mannequin-based simulation as “The use of mannequins to represent a patient using heart and lung sounds, palpable pulses, voice interaction, movement (eg, seizures, eye blinking), bleeding, and other human capabilities that may be controlled by a simulationist using computers and software.” In contrast, the Healthcare Simulation Dictionary references McGovern11 to describe virtual simulation as “The recreation of reality depicted on a computer screen.”

METHODS

This study used a quasi-experimental, nonequivalent comparison group design to test differences between mannequin-based and virtual simulation learning activities.

Human Subject Review Approvals, Consent Procedures, and Study Population

After review and approval by the university human subjects division and the institutional review boards of the two contributing colleges, the investigator recruited fifth and sixth quarter associate degree in nursing students to participate in the study. Recruitment strategies included posting flyers around campus and providing a short presentation during class sessions inviting students to participate. Students who expressed interest were sent follow-up information about participation via e-mail. Students who consented to participate were randomized to either the mannequin-based or virtual simulation activities. Each participating student received a signed thank you letter from the investigator and a US $30 Visa gift card. Participants and potential participants were informed of the opportunity to receive a thank you letter and gift card before signing the consent forms.

Study Interventions

To isolate specific concepts from the NLN/Jeffries Simulation Theory, the investigator used the same simulation scenario for both the mannequin-based and virtual simulation activities. Table 1 provides a comparison of the concepts from the NLN/Jeffries Simulation Theory as they apply to each of the study groups (mannequin-based and virtual).

TABLE 1: Comparison of the Concepts From the NLN Jeffries Simulation Theory as They Apply to Each Study Group

The simulation design included time for participants in both the mannequin-based and virtual simulation groups to prepare for their simulation experiences by completing an independent, computer-based learning module. The module provided a review of pathophysiology, pharmacology, and nursing actions relevant to the care of patients with chronic obstructive pulmonary disease (COPD). Next, participants in the mannequin-based simulation group completed the live, professionally facilitated, mannequin-based simulation in groups of two to four participants. Participants in the virtual simulation group independently completed the web-based, commercially available virtual simulation. A link to the vSim product can be found at http://www.laerdal.com/us/vsim.

Laerdal learning technology was used for both the mannequin-based and virtual simulations, providing a standardized simulation experience across groups. To mitigate any potential conflict of interest, the study investigators did not seek or accept any donated resources from Laerdal. The “Chronic Obstructive Pulmonary Disease-Spontaneous Pneumothorax Complex Case,” which is available as both a mannequin-based simulation and a virtual scenario, was used for both the mannequin-based and virtual simulation activities. In this scenario, the students are expected to assess a patient who has been admitted to the medical unit for an exacerbation of COPD. Shortly into the scenario, the patient experiences a pneumothorax. The students are expected to recognize the pneumothorax, notify the physician, and initiate appropriate nursing interventions.

Both the mannequin-based and virtual simulation experiences lasted approximately 30 minutes and included an opportunity to debrief. Participants in the mannequin-based simulation groups interacted with the simulated patient, their facilitator, and peers; these interactions allowed for collaborative learning. Participants in the virtual simulation groups interacted independently with the computer-based simulated patient. Educational strategies also varied between the mannequin-based and virtual simulation groups. Participants in the mannequin-based simulation groups completed a facilitated debriefing session based on the plus/delta model. Participants in the virtual simulation group received computer-generated feedback from the vSim program, including opportunities for improvement, a detailed log of their actions during the scenario, and a numerical score (0%–100%). Both groups completed a written reflection that also served as an opportunity to debrief the experience. This written reflection is described further in the assessment measures section.

Assessment Measures

Outcomes from the simulation were measured using a variety of written and performance assessment measures.

Written Assessments

Figure 1 depicts the sequence of activities participants engaged in as part of the study. Before the simulation experiences, all participants completed a demographic questionnaire, a preintervention knowledge examination covering content related to the nursing care of patients with COPD, and a modified version of the NLN Student Satisfaction and Self-Confidence in Learning survey. The knowledge examination consisted of published, National Council Licensure Examination-style multiple-choice questions from Wolters Kluwer Health, Inc. Findings from a review of the literature and psychometric analyses of the NLN Student Satisfaction and Self-Confidence in Learning survey indicate that the tool has been used extensively in simulation research and has demonstrated high internal consistency (Cronbach α) for its two subscales: satisfaction (α = 0.92) and self-confidence (α = 0.83).12

FIGURE 1: Sequence of study activities.

After the simulation experience, all participants completed a postintervention knowledge examination and the modified version of the NLN Student Satisfaction and Self-Confidence in Learning survey. At the end of the postintervention knowledge examination, participants were also asked to document their patient assessment, write a patient hand-off report, and respond to two reflection-type questions about how the simulation made them feel and what they would do differently if they were to repeat the scenario in the future. This preintervention/postintervention assessment design provided data about learning in the cognitive and affective domains.

Performance Assessments

To assess performance across three learning domains (cognitive, affective, and psychomotor), a random sample of participants engaged in a standardized patient encounter after their mannequin-based or virtual simulation experience. During this encounter, individual participants interacted with a professional standardized patient who portrayed a person being admitted to the hospital for an exacerbation of COPD. This patient was similar, but not identical, to the patient the participants encountered in the mannequin-based or virtual simulation activity. These interactions were video recorded and scored using the Lasater Clinical Judgment Rubric (LCJR) and Creighton Simulation Evaluation Instrument (C-SEI). The LCJR is an 11-item developmental rubric based on Tanner's Model of Clinical Judgment.13 Reliability and validity assessments support the use of the LCJR for assessing student performance in simulation activities, including interrater reliability (intraclass correlation coefficient [ICC(2,1)] = 0.889) and percent agreement ranging from 92% to 96%.14,15 The C-SEI is a 22-item instrument for assessing student performance in simulated learning experiences based on the following four components: assessment, communication, critical thinking, and technical skills.16 Interrater reliability of the C-SEI when used for simulation participant performance assessment was ICC(2,1) = 0.952.17

Quantitative Analyses

All statistical analyses were completed using IBM SPSS Statistics 19. Descriptive statistics were calculated as means and standard deviations for continuous variables and frequencies for categorical variables. Independent samples t tests and χ2 tests were used to assess for differences in demographic characteristics between the two comparison groups and between the group who participated in the standardized patient encounter and those who did not. To assess for differences between preintervention and postintervention assessment scores, dependent samples t tests were used. Changes in assessment scores from preintervention to postintervention were calculated by subtracting each participant's preintervention assessment score from the corresponding postintervention assessment score; these are labeled “change scores.” To assess for differences in change scores between the group that participated in the mannequin-based simulations and the group that participated in the virtual simulations, independent samples t tests were used. An α level of 0.05 was used to determine statistical significance. A priori power calculations informed by pilot research (n = 14) indicated that 35 participants in each of the two groups (n = 70) were required to achieve 80% power (α = 0.05) to test for significant group differences.18
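A minimal sketch of this analysis pipeline, assuming hypothetical score arrays (the study used IBM SPSS Statistics 19, and its raw data are not reproduced here); the final lines back-solve the per-group sample size from an assumed effect size of roughly d = 0.68, which is consistent with the reported requirement of 35 participants per group:

```python
# Sketch of the change-score analysis with hypothetical data.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

# Hypothetical preintervention/postintervention knowledge scores (0-100)
pre_mannequin = np.array([65.0, 70, 72, 68, 75])
post_mannequin = np.array([78.0, 80, 85, 74, 86])
pre_virtual = np.array([66.0, 69, 74, 71, 70])
post_virtual = np.array([79.0, 82, 83, 80, 81])

# Within-group improvement: dependent (paired) samples t test
t_within, p_within = stats.ttest_rel(post_mannequin, pre_mannequin)

# Change scores: postintervention minus preintervention, per participant
change_mannequin = post_mannequin - pre_mannequin
change_virtual = post_virtual - pre_virtual

# Between-group comparison of change scores: independent samples t test
t_between, p_between = stats.ttest_ind(change_mannequin, change_virtual)
print(f"within-group p = {p_within:.3f}, between-group p = {p_between:.3f}")

# A priori power: participants per group for 80% power at alpha = 0.05.
# An effect size of d ~ 0.68 (an assumption) reproduces the reported n = 35.
n_per_group = TTestIndPower().solve_power(effect_size=0.68, alpha=0.05, power=0.80)
print(f"required n per group: {np.ceil(n_per_group):.0f}")
```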

Cost-effectiveness is an essential consideration when making decisions about the use of simulation. However, a recent systematic review of simulation research found that less than 1% of articles provided adequate cost data for comparative analyses.19 These investigators identified the lack of a standardized framework for calculating and reporting the costs associated with simulation activities as an underlying gap in the literature. For the purposes of this study, the CU approach was used to compare mannequin-based and virtual simulation. This type of analysis allows the investigator to compare multiple alternatives (mannequin-based and virtual simulation activities) in terms of both their costs and their effectiveness. Costs are measured in monetary terms (dollars), and utility is measured as a composite of multiple measures of effectiveness. This capacity to consider multiple measures of “effectiveness” (knowledge as well as satisfaction and self-confidence) made the cost-utility approach especially appropriate for this study. Cost-utility analysis has been used extensively in healthcare and educational research and is helpful for making decisions when two interventions have demonstrated similar effectiveness.20,21
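One compact way to express this approach in symbols (an illustration consistent with how the composite utility is computed in the Results, not notation taken from the source literature):

\[
\mathrm{CU\ ratio} = \frac{C}{U}, \qquad U = \frac{1}{m}\sum_{i=1}^{m} \Delta E_i
\]

where C is the per-participant cost of a simulation activity, ΔE_i is the mean preintervention-to-postintervention gain on effectiveness measure i, and m is the number of effectiveness measures (here, m = 2: knowledge, and satisfaction and self-confidence).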

The “ingredient approach” described by Levin and McEwan20 and White et al22 for calculating the costs of an educational intervention was used to calculate the per-student costs associated with the mannequin-based and virtual simulations. The ingredients considered for this calculation were personnel (both faculty and simulation laboratory staff), facilities (physical space), durable equipment (mannequin-based patient simulators and computer stations), consumable supplies, scenario purchase, and student inputs (tuition). The line-item totals represent the costs for 30 students to complete the simulations. The overall total was then divided by 30 to get the per-student cost for each of the mannequin-based and virtual simulations.

Personnel costs were calculated using publicly available data from the Washington State Public Salaries Web site (www.fiscal.wa.gov/salaries). Faculty time for the mannequin-based simulation included 1 hour for preparation and 4 hours to facilitate eight 30-minute simulation sessions with three to four students in each. Simulation laboratory staff time for the mannequin-based simulation included 2 hours of preparation and 4 hours to help run the eight 30-minute sessions. Faculty time for the virtual simulation included 1/2 hour to monitor the computer laboratory where students were completing the virtual simulation. No simulation laboratory staff time was required for the virtual simulation.

Equipment for the mannequin-based simulation included the annualized cost of ownership for the Laerdal SimMan 3G (US $23,500). Equipment for the virtual simulation included the annualized cost of ownership for 30 personal computers (US $8,590). These annualized costs were divided by the annual usage (540 hours for the simulator and 783 hours for the computer laboratory) and multiplied by the number of hours required for all 30 students to complete the simulation activity. These hours of usage represent the actual use at a specific college; an institution that used one of the resources more would lower its cost per hour of use and improve the return on investment. Consumable supplies for the mannequin-based simulation included moulage, gloves, and simulated medications; reusable patient equipment was not included in this calculation. No supplies were required for the virtual simulation. Facility (physical space) and student input (tuition) costs were consistent across both the mannequin-based and virtual simulations and were excluded from the calculation. The total costs for 30 students to complete the mannequin-based and virtual simulations were US $1,096.47 and US $326.81, respectively. The calculations of these costs are shown in Table 2.

TABLE 2: Ingredients for Cost Estimate
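As a worked illustration of the ingredient approach, the sketch below combines the equipment figures and usage hours reported above with placeholder personnel rates, equipment-hours, and supply costs (marked “?”). Those placeholders are assumptions for illustration only and do not reproduce the study's exact line-item totals:

```python
# Simplified sketch of the "ingredient approach" per-student cost calculation.
N_STUDENTS = 30  # the study's line-item totals cover 30 students

def hourly_rate(annualized_cost: float, annual_usage_hours: float) -> float:
    """Spread an annualized cost of ownership over actual annual usage."""
    return annualized_cost / annual_usage_hours

def per_student(personnel: float, equipment: float, supplies: float) -> float:
    """Total ingredient cost for all students, divided per student."""
    return (personnel + equipment + supplies) / N_STUDENTS

# Mannequin-based simulation
simman = hourly_rate(23_500, 540)        # SimMan 3G, 540 h/y of use (reported)
mannequin_cost = per_student(
    personnel=5 * 40.0 + 6 * 25.0,       # faculty 5 h, lab staff 6 h (? rates)
    equipment=simman * 4,                # ? 4 h of simulator time (8 x 30 min)
    supplies=50.0,                       # ? moulage, gloves, simulated meds
)

# Virtual simulation
pc_lab = hourly_rate(8_590, 783)         # 30 PCs, 783 h/y of lab use (reported)
virtual_cost = per_student(
    personnel=0.5 * 40.0,                # faculty 0.5 h of monitoring (? rate)
    equipment=pc_lab * 0.5,              # ? one 30-min computer laboratory block
    supplies=0.0,                        # no consumables required (reported)
)

print(f"per-student, mannequin: ${mannequin_cost:.2f}")  # study's figure: US $36.55
print(f"per-student, virtual:   ${virtual_cost:.2f}")    # study's figure: US $10.89
```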

Qualitative Analyses

Qualitative data from participants' written postintervention responses were transcribed using Microsoft Excel and then transferred to Microsoft Word. These responses included each participant's written patient assessment, patient hand-off report, and answers to two reflective questions about how the simulation made the participant feel and what the participant would do differently if the scenario were repeated in the future. Using qualitative content analysis,23 two investigators used open coding to identify common themes within the data. Through this iterative process, the investigators were able to summarize and compare findings based on themes that emerged from the data. While some level of interpretation is inherent in all qualitative description, the intent of these analyses was to summarize participants' written responses as objectively as possible.

RESULTS

Eighty-four associate degree in nursing students were enrolled in the study and completed the study activities. However, only 81 students (96%) completed all of the written quantitative assessments. Therefore, 81 participants are reflected in the analyses of the written quantitative data and 84 in the written qualitative analyses. This sample of 81 students exceeds the number needed to achieve 80% power (α = 0.05). Twenty-eight participants (13 from the mannequin-based simulation group and 15 from the virtual simulation group) were randomly selected to participate in the video-recorded performance assessment, in which they interacted one on one with a professional standardized patient. Because of resource limitations, only this subset of participants was able to complete the standardized patient encounter. Therefore, while the comparisons of written assessments were adequately powered, the performance assessment was underpowered. These data are provided for descriptive purposes and are not included in the CU.

Demographics

Table 3 shows the demographic characteristics of the study participants. The average age was 32.9 years. Most were female (89%), and most had previous experience working in healthcare (59%). About one fifth (19.75%) were certified in Advanced Cardiac Life Support. Nearly one quarter (23.5%) reported speaking a language other than English at home. There were no significant differences in demographic characteristics between the comparison groups. The group who participated in the standardized patient encounter and the group who did not varied significantly on one demographic variable: quarter in program. While both fifth and sixth quarter students participated in the study, there was a significantly larger (P < 0.05) percentage of fifth quarter students in the subset who participated in the standardized patient encounter than among those who did not. Fifth quarter students were evenly distributed across both the mannequin-based and virtual simulation conditions.

TABLE 3: Participant Demographic Characteristics

Effect of Simulation on Knowledge

The overall mean (SD) preintervention knowledge score across groups was 69.54 (14.17). Preintervention knowledge scores were not significantly different between groups (P = 0.189). These findings help demonstrate the similarity between groups on a measure of knowledge before participants were exposed to the interventions (either mannequin-based or virtual simulation).

The overall mean (SD) postintervention knowledge score across groups was 80.89 (15.19). Postintervention knowledge scores were not significantly different between groups (P = 0.476). The overall mean (SD) change in knowledge score, the difference between preintervention and postintervention knowledge scores, across groups was 11.34 (18.50). Participants in both the mannequin-based and virtual simulation groups showed significant improvement (P < 0.05) in knowledge from the preintervention to the postintervention assessment. Change scores were not significantly different between groups (P = 0.659).

Effect of Simulation on Satisfaction and Self-Confidence

The overall mean (SD) preintervention score on the Satisfaction and Self-Confidence in Learning Survey (SSC) across groups was 72.05 (12.98). Preintervention SSC scores were not significantly different between groups (P = 0.657). These findings help demonstrate the similarity between groups on a measure of satisfaction and self-confidence in learning before participants were exposed to the interventions.

The overall mean (SD) postintervention SSC score across groups was 80.93 (10.37). Postintervention SSC scores were not significantly different between groups (P = 0.126). The overall mean (SD) change in SSC scores across groups from preintervention to postintervention was 8.88 (14.04). The difference in change scores between the mannequin-based and virtual simulation groups was not significant (P = 0.112). Participants in both the mannequin-based and virtual simulation groups demonstrated significantly higher scores on the SSC after the simulation activities than before.

Effect of Simulation on Performance

Performance in the standardized patient encounter was used as a proxy for measuring how the type of simulation (mannequin-based or virtual) may affect the care participants provide to actual patients. Each video-recorded encounter was evaluated and assigned two scores. The mean (SD) score across groups on the LCJR was 80.26% (13.11%). The mean (SD) score across groups on the C-SEI was 83.19% (15.49%). There were no significant differences between the mannequin-based and virtual simulation groups in mean scores on either the LCJR or the C-SEI (Table 4). It should be noted that because only 28 participants completed the standardized patient encounter, these analyses were less likely to detect a difference between groups.

TABLE 4: Comparison of Preintervention and Postintervention Assessment and Performance Scores

Comparative Effectiveness of Simulation Activities

Statistical analyses comparing postintervention assessments did not detect any significant differences between the two comparison groups. The qualitative responses to postintervention questions yielded additional data for comparing learning between the two educational approaches (mannequin-based vs virtual). Although these findings may not be generalizable, they provide important insights about differences in learning between two specific groups of learners. When asked to document a focused respiratory assessment of the patient, 14 (37%) of 38 participants in the virtual group and 14 (30%) of 46 participants in the mannequin-based group provided a complete assessment. When asked what key elements they would include in a “hand-off” report, such as Situation, Background, Assessment, Recommendation (SBAR), 9 (24%) of 38 participants in the virtual group and 14 (26%) of 46 participants in the mannequin-based group provided a complete report.

The analysis of participants' reflections about what they would do differently if they were to repeat the simulation revealed the following three themes: (1) safety, (2) communication, and (3) prioritization/time management. Although these themes were present in the responses from both groups, there were notable differences in their frequency between groups. When asked what they would do differently if they were to repeat the simulation scenario, 23 (61%) of 38 participants in the virtual group, compared with only 15% of participants in the mannequin group, indicated that they would focus more on safety. Examples of this type of response about safety include “be more mindful of safety measures” and “I would read over orders more thoroughly….” On the other hand, although 12 (26%) of 46 participants in the mannequin group reported that they would focus more on communication, only 4 (11%) of 38 participants in the virtual group expressed this intention. Examples of responses about communication include “I would try to listen to my patient more…” and “better communication with team.” The same percentage of participants from each group (26%) indicated that they would focus on prioritization/time management if they were to repeat the scenario. Examples of responses about prioritization/time management include “move more efficiently” and “organize nursing interventions more effectively.”

The final reflective question asked participants how the scenario made them feel. Responses to this question revealed several themes about participants' affective experience. Responses such as “I felt like I was actually taking care of the patient,” “motivated!” and “afterwards I felt like I learned a lot and I'd love to do it again” were categorized as “positive.” In the virtual group, 25 (66%) of 38 participants wrote “positive” responses compared with only 22 (48%) of 46 participants in the mannequin-based group. Responses such as “It made me question myself,” “not prepared for the floor,” and “At times I felt lost” were categorized as “negative.” In the virtual group, 14 (37%) of 38 participants wrote “negative” responses compared with only 11 (24%) of 46 participants in the mannequin-based group. A separate set of responses that specifically used the words “anxious” or “nervous” were categorized as “stress.” In the virtual group, 4 (11%) of 38 participants wrote “stress” responses compared with 15 (33%) of 46 participants in the mannequin-based group.

Finally, there was a unique theme of responses to the question about how the scenario made participants feel that reflected participants' discomfort with the virtual or mannequin-based simulation technology. In the virtual group, these “technology trouble” responses included “I knew what to do, but I couldn't find it on the computer” and “I got frustrated because I couldn't find what I was looking for.” In the mannequin-based group, these responses included “I dislike pretending stuff” and “finding the materials was stressful.” Although 17 (45%) of 38 participants in the virtual group reported “technology trouble,” only 2 (4%) of 46 participants in the mannequin-based group expressed similar frustration.

Cost-Utility Analysis

Table 5 illustrates a comparison of the costs and the average improvement in knowledge and in satisfaction and self-confidence for the mannequin-based and virtual simulation activities. Recall from the methods section that the per-participant costs of the mannequin-based and virtual simulations were estimated at US $36.55 and US $10.89, respectively. Because there were no statistically significant differences between the groups in improvement in knowledge or in satisfaction and self-confidence scores, the overall utility of the mannequin-based simulation activity and the virtual simulation activity was equivalent (10.11). However, the costs associated with the mannequin-based simulations were more than three times those of the virtual simulations. The cost per point of improvement in knowledge was US $3.22 for the mannequin-based simulations and US $0.96 for the virtual simulations. The cost per point of improvement in satisfaction and self-confidence was US $4.12 for the mannequin-based simulations and US $1.23 for the virtual simulations. The overall cost/utility ratio was US $3.62 for the mannequin-based simulations versus US $1.08 for the virtual simulation activities.

TABLE 5: Cost-Utility Analysis
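The reported ratios can be reproduced directly from the values above; the short sketch below uses the paper's reported per-participant costs and mean change scores, with the composite utility taken as the average of the two change scores:

```python
# Reproducing the cost-utility arithmetic from the reported values.
costs = {"mannequin-based": 36.55, "virtual": 10.89}  # US $ per participant
knowledge_gain = 11.34  # mean change in knowledge score across groups
ssc_gain = 8.88         # mean change in satisfaction/self-confidence score

# Composite utility: average of the two mean change scores
utility = (knowledge_gain + ssc_gain) / 2  # = 10.11

for group, cost in costs.items():
    print(f"{group}: ${cost / knowledge_gain:.2f} per knowledge point, "
          f"${cost / ssc_gain:.2f} per SSC point, "
          f"${cost / utility:.2f} cost/utility")
# mannequin-based: $3.22 per knowledge point, $4.12 per SSC point, $3.62 cost/utility
# virtual: $0.96 per knowledge point, $1.23 per SSC point, $1.08 cost/utility
```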

DISCUSSION

Guided by the NLN Jeffries Simulation Theory, this study varied the simulation design and simulation experience by exposing one group of students to mannequin-based simulation and another group to virtual simulation, and compared learning outcomes between the two simulation types. Although there were no significant differences between groups on any of the outcome measures, both groups showed significant improvement from preintervention to postintervention on the knowledge and the satisfaction and self-confidence in learning assessments.

In the CU, the equal overall utility of the two types of simulation is offset by the larger per-participant cost associated with the mannequin-based activities. When the average change scores from the knowledge and the satisfaction and self-confidence in learning assessments are used to calculate the utility of each simulation activity, the virtual simulations appear to be the more economical option, with a cost-utility ratio of US $1.08 versus the mannequin-based simulations' US $3.62.

The qualitative findings provide further description of participants' simulation experiences. At the conclusion of the simulation activities, participants in the virtual simulation group more frequently expressed an intention to focus on safety if they were to repeat the scenario. Participants in the mannequin-based group more frequently expressed an intention to focus on communication in the future, which is logical given the group interactions that were unique to the design of the mannequin-based simulation activity. Another interesting finding from the qualitative data can be seen in participants' responses to a question about how the simulation activities made them feel. A theme in responses from participants in the virtual simulation group was feeling frustrated by the technology associated with the simulation environment. Finally, participants in both groups provided examples of negative and positive feelings that resulted from the simulation activities, but participants in the mannequin group focused more on feelings of stress and anxiety. These findings warrant additional investigation to help target simulation design features for specific learning outcomes, improve the simulation participant experience, and mitigate any negative effects of stress on learning.

Limitations of this study may affect the generalizability of the findings. While the purposes of the research were met, the data were limited to one specific simulation scenario in one local context. Simulation participants in other programs may have higher- or lower-quality mannequin-based simulation experiences, and the virtual simulation program used in this study is not representative of all options for virtual simulation. In this study, participants completed the mannequin-based simulation as part of an interactive group, while participants who completed the virtual simulation interacted only with the computer-based program. Additional limitations include the differences in debriefing methods between the groups: the mannequin-based simulation groups experienced facilitated debriefing sessions, whereas the virtual simulation groups received computer-generated feedback from the vSim program. These differences in debriefing methods were not controlled for and likely affected learning. Similarly, no data were collected about participants' previous experiences with simulation, which may have affected their experiences and scores on the assessments.

With regard to the cost comparison, other educational institutions may have higher or lower costs associated with delivering the simulation activities, which could dramatically affect the CU. Furthermore, no suitable cost comparison model was identified in the literature, so an author-developed model was created for this study. Another limitation of the study was the small sample size for the comparison of participants' standardized patient performances. Because of resource restrictions, only a small subset (n = 28) of participants completed the standardized patient encounter. Therefore, the nonsignificant results from this comparison may be due to a type II error.

Healthcare educators striving to make evidence-based decisions about how to best employ simulation pedagogy may consider these findings about the effectiveness and cost-effectiveness of various simulation modalities. However, additional research is needed. Instead of looking at a single simulation scenario, future research should compare outcomes between students who are exposed to mannequin-based or virtual simulation over the course of an entire semester or program. In addition, to facilitate meta-analyses from this work, researchers should adopt a more robust framework for cost-effectiveness research.19 This work will help develop simulation best practices to optimize the cost-effectiveness equation and potentially expand nursing education capacity.

REFERENCES

1. Institute of Medicine. The Future of Nursing: Leading Change, Advancing Health. Washington, DC: The National Academies Press; 2011.
2. National Academies of Sciences, Engineering, and Medicine. Assessing Progress on the Institute of Medicine Report The Future of Nursing. Washington, DC: The National Academies Press; 2016.
3. Cook D, Triola M. Virtual patients: a critical literature review and proposed next steps. Med Educ 2009;43(4):303–311.
4. Cook D, Erwin P, Triola M. Computerized virtual patients in health professions education: a systematic review and meta-analysis. Acad Med 2010;85(10):1589–1602.
5. Hayden JK, Smiley RA, Alexander M, et al. The NCSBN National Simulation Study: a longitudinal, randomized, controlled study replacing clinical hours with simulation in prelicensure nursing education. J Nurs Regul 2014;5(2S):S1–S63.
6. Bauman EB. Games, virtual environments, mobile applications and a futurist's crystal ball. Clin Simul Nurs 2016;12(4):109–114.
7. Foronda C, Godsall L, Trybulski J. Virtual clinical simulation: the state of the science. Clin Simul Nurs 2013;9(8):e279–e286.
8. LeFlore JL, Anderson M, Zielke MA, et al. Can a virtual patient trainer teach student nurses how to save lives-teaching nursing students about pediatric respiratory diseases. Simul Healthc 2012;7(1):10–17.
9. Jeffries P. The NLN Jeffries Simulation Theory. Philadelphia: Wolters Kluwer; 2016.
10. Lopreiato JO, Downing D, Gammon W, et al; Terminology & Concepts Working Group. Healthcare Simulation Dictionary. 2016. Available at: http://www.ssih.org/dictionary. Accessed August 27, 2016.
11. McGovern KT. Applications of virtual reality to surgery: still in the prototype stage. BMJ 1994;308(6936):1054–1055.
12. Franklin AE, Burns P, Lee CS. Psychometric testing on the NLN Student Satisfaction and Self Confidence in Learning, Simulation Design Scale, and Educational Practices Questionnaire using a sample of pre-licensure novice nurses. Nurse Educ Today 2014;34(10):1298–1304.
13. Tanner CA. Thinking like a nurse: a research-based model of clinical judgment in nursing. J Nurs Educ 2006;45(6):204–211.
14. Adamson KA, Gubrud P, Sideras S, et al. Assessing the reliability, validity, and use of the Lasater Clinical Judgment Rubric: three approaches. J Nurs Educ 2012;51(2):66–73.
15. Lasater K. Clinical judgment using simulation to create an assessment rubric. J Nurs Educ 2007;46(11):496–503.
16. Todd M, Manz JA, Hawkins K, et al. The development of a quantitative evaluation tool for simulation in nursing education. Int J Nurs Educ Scholarsh 2008;5(1):Article 41.
17. Adamson KA, Parsons ME, Hawkins K, et al. Reliability and internal consistency findings from the C-SEI. J Nurs Educ 2011;50(10):583–586.
18. Adamson KA. Piloting a method for comparing two experiential teaching strategies. Clin Simul Nurs 2012;8(8):e375–e382.
19. Zendejas B, Wang AT, Brydges R, et al. Cost: the missing outcome in simulation-based medical education research: a systematic review. Surgery 2013;153(2):160–176.
20. Levin HM, McEwan PJ. Cost-effectiveness Analysis. Thousand Oaks, CA: Sage; 2001.
21. Lapkin S, Levett-Jones T. A cost-utility analysis of medium vs. high-fidelity human patient simulation manikins in nursing education. J Clin Nurs 2011;20(23–24):3543–3552.
22. White JL, Albers CA, DiPerna JC, et al. Cost Analysis in Educational Decision Making: Approaches, Procedures, and Case Examples. Madison: University of Wisconsin, Madison, Wisconsin Center for Education Research Coordination, Consultation & Evaluation Center; 2005.
23. Sandelowski M. Whatever happened to qualitative description? Res Nurs Health 2000;23(4):334–340.
Keywords:

Virtual simulation; nursing; manikin; mannequin; cost-utility; comparative effectiveness

© 2018 Society for Simulation in Healthcare