Novice nurses — those in their first few years of independent practice — are often overwhelmed in clinical settings and unable to care for multiple patients (Monagle, Lasater, Stoyles, & Dieckmann, 2018). Although novice nurses have theoretical knowledge, they have difficulty aligning pieces of clinical data and applying their knowledge in a timely manner (Kavanagh & Szweda, 2017); they find themselves unable to use what they know in actual situations of patient care. This significant lack of practice readiness is exacerbated by increasing patient acuity and decreasing lengths of stay in acute care settings (Spector et al., 2015).
Most often noted is the inability of novice nurses to manage multiple responsibilities, anticipate changes in patients, recognize the urgency of situations, communicate with other providers, and delegate tasks (Berkow et al., 2009; Monagle et al., 2018). Novice nurses' ability to respond is further undermined by low self-efficacy for providing nursing care to multiple patients (Monagle et al., 2018). Nurse preceptors, practice educators, and administrators recognize gaps in higher order thinking skills as contributors to the lack of practice readiness among novice nurses (Burgess, Murphy Buc, & Brennan, 2018; Haddeland, Slettebo, Carstens, & Fossum, 2018). Knowledge development in clinical practice requires experiential teaching and learning through facilitated, situated cognition with reflection (Kavanagh & Szweda, 2017).
Simulation offers active learning opportunities that mimic modern health care settings and help nursing students gain clinical experience efficiently while fostering the development of higher order thinking skills. Simulation is an exemplar of a strong clinical learning activity that involves facilitation and reflection (Lin, Viscardi, & McHugh, 2017). However, because there is a gap in the literature on best practices for preparing learners for multiple-patient simulation (MPS), the goal of this study was to examine how simulation preparation and repeated doses of MPS influence novice nurses’ competence and self-efficacy. The study tested the effectiveness of three simulation preparation methods — expert modeling (EM), voice-over PowerPoint (VOPP), and reading assignments — as training methods for novice nurses.
REVIEW OF LITERATURE
To enable nursing students to practice skills that hospital administrators ranked low in terms of novice nurses’ readiness for practice (Parker, Giles, Lantry, & McMillan, 2014), there has been a surge of interest in implementing MPS in recent years (Blodgett, Blodgett, & Bleza, 2016). Skills examined in MPS include decision-making and follow-up; recognizing changes in patient status and interpreting assessment data; taking initiative, working independently, and completing tasks within a set time frame; anticipating risk and delegating tasks; and keeping track of multiple responsibilities and prioritizing.
Several conceptual articles in the literature detail strategies for implementing MPS in nursing (Chunta & Edwards, 2013; Horsley, Bensfield, Sojka, & Schmitt, 2014; Nowell, 2016). However, there is wide variability in the types of MPS research reported, with 10 research studies representing qualitative, descriptive, and experimental designs. Research teams frequently evaluated learner performance with senior, undergraduate, prelicensure nursing learners in a physical simulation lab with manikins, although one used a virtual MPS learning activity (Josephson & Butt, 2014). Most studies used simulation as the intervention and then measured the impact of MPS on behavioral performance. Only two studies had a comparison group (Franklin, Sideras, Gubrud-Howe, & Lee, 2014; Radhakrishnan, Roche, & Cunningham, 2007). In these 10 studies, MPS experiences lasted between 20 and 60 minutes and involved care of between two and four patients; in addition, researchers used a variety of tools and evaluation methods to assess outcomes of MPS.
Three studies required participants to complete the simulation individually (Franklin, Sideras et al., 2014; Frontiero & Glynn, 2012; Radhakrishnan et al., 2007), whereas the remaining studies allowed group participation and evaluated team performance (Beroz, 2016; Davies, Nathan, & Clarke, 2012; Ironside & Jeffries, 2010; Ironside et al., 2009; Josephson & Butt, 2014; Kaplan & Ura, 2010; Prince, Winmill, Wing, & Kahoush, 2016; Sharpnack, Goliat, & Rogers, 2013). The majority of MPS researchers used a performance checklist: three teams used researcher-developed tools without published psychometrics, whereas two teams used the Creighton Simulation Evaluation Instrument (CSEI), which has acceptable reliability and validity (Franklin, Sideras et al., 2014; Frontiero & Glynn, 2012). One research team used qualitative methods.
The body of simulation research is missing a rigorous examination of how to best utilize MPS. Extant MPS research is affected by methodological weaknesses common to simulation research, including lack of a comparison group and/or randomization (Cantrell, Franklin, Leighton, & Carlson, 2017). Furthermore, use of outcome measures that are either researcher-developed or published without a description of psychometric performance limits trustworthiness of findings (Blodgett, Blodgett, & Bleza, 2016).
Competence and Self-Efficacy
Research has shown that simulation can advance competence and clinical judgment in novice nurses (Ball & Kilger, 2016). Many researchers have adopted Benner’s (1982) definition of competence, that is, the ability to perform to an expected standard with desirable outcomes. However, competent performance in simulation differs from the competent stage of development for nursing practice, in which nurses are able to differentiate the important aspects of a clinical situation (Ball & Kilger, 2016). Simulation researchers operationalize competence in a laboratory by quantifying behavioral performance against a valid and reliable measurement tool. MPS may help prepare novice nurses to provide competent care in ways that single-patient simulation may not (Horsley et al., 2014), especially because MPS requires novice nurses to use global thinking skills.
Self-efficacy refers to a perception that an individual is capable of performing in a certain manner to achieve specific goals (Franklin, Gubrud-Howe, Sideras, & Lee, 2015). A sense of self-efficacy is important for novice nurses’ advancement during school and in the first 6 months of practice (Haman, 2014). The simulation literature is replete with studies evaluating self-efficacy as a stand-alone outcome (Franklin & Lee, 2014). To advance the science of simulation, the call for research now focuses on the relationship between competence and self-efficacy and how the relationship evolves over time (Kardong-Edgren, 2013).
Nursing faculty historically have used reading assignments to orient learners before simulation (Franklin, Sideras et al., 2014). Research from nursing and medical education suggests that the use of VOPP lecture might be an effective way to prepare learners for simulation (Beman, 2017; Fernandez et al., 2013). Presimulation EM videos may further enhance novice nurses’ competence (Coram, 2016; Franklin, Sideras et al., 2014; Jarvill, Kelly, & Krebs, 2018). Both VOPP and EM increase learner engagement because auditory and visual cues make learning more active. VOPP and EM minimize variance in what learners understand and increase accountability for learning (Ancy & Nagar, 2016). EM videos provide examples of clinical judgment and critical thinking processes in the context of a patient case (Coram, 2016).
Health care educators have found success using EM videos, especially because the expert model became a standard of reference for learners (Aronson, Glynn, & Squires, 2013; Johnson et al., 2012). One drawback to filming EM videos for use as simulation preparation is the time and attention to detail needed to script and record videos in the simulation laboratory (Franklin, Sideras et al., 2014), but researchers can use well-produced EM videos for several years to recoup the financial and time investment.
The purpose of this parallel, single-blinded, randomized trial was to evaluate the effect of three simulation preparation methods on novice nurses’ competence and self-efficacy for providing care to multiple patients. The study was guided by Bandura’s social cognitive theory (Bandura, 1977), which provides an explanation of the theoretical relationship between EM, self-efficacy, and competence, thereby guiding both intervention and analysis. Institutional review board approval was obtained.
Researchers used a random numbers table to group participants into three equal-sized groups (EM, VOPP, reading assignments); assignments were given to participants in sealed envelopes. Data from a pilot study revealed small effect sizes for all three simulation preparation interventions, though EM (d = 0.413) was found to be more effective than VOPP (d = 0.226). A power analysis revealed that 72 participants would be needed to detect a statistically significant difference between study groups, with an alpha of .1 and 80 percent power. The researchers used convenience sampling to recruit 74 students enrolled in capstone clinical courses in traditional baccalaureate programs at two schools of nursing (one private university, southern United States; one public university, western United States). There were no exclusion criteria. Seventy-three participants completed the study.
A complete description of the MPS is available in previous publications (Franklin, Dodd, Sideras, & Hutson, 2018; Franklin, Sideras et al., 2014). Briefly, the scenario involved care of three simulated manikin patients at the beginning of the shift. Participants were given a scripted, 15-minute bedside nursing report and completed the scenario independently while acting as an RN. They completed a 45-minute scenario during which they prioritized the care of the three patients, delegated tasks to an unlicensed nursing assistant, performed technical skills, and communicated with a provider via telephone. The simulation patients had diagnoses of respiratory distress, diabetic complications, and cardiovascular disease; one required rescue interventions, such as pro re nata medication administration and oxygen titration. Given the inherent time constraints, participants were required to establish and reorganize priorities in action.
Details pertaining to the logistics of managing data collection at multiple sites are available in a previous publication (Franklin et al., 2018). The research team used Box.com (a secure file-sharing workspace) to share data files. Multiple team meetings using Adobe Connect and a dry run of the scenario at each data collection site ensured successful implementation. The research team maintained fidelity to the protocol by scripting all dialogue and providing cues at specific intervals. Two raters and one manikin operator traveled to each data collection site, and pre- and posttest simulation assessments took place on each campus.
The study was designed to investigate the dose response to repeated MPS and, at the same time, implement three different simulation preparation interventions in three groups to capture the influence of each delivery method. Materials were delivered via the learning management system (LMS) and were double password-protected to protect against diffusion of intervention.
The study investigators instructed all participants to view group-specific materials at least four times during the four-week study period and sent weekly email reminders. As a baseline, all groups had access to articles, policies, and procedures pertinent to medical-surgical patient care; these represented usual care in terms of simulation preparation.
PRESIMULATION EM VIDEO/INTERVENTION
Participants had access to 45 minutes of EM videos that were embedded in VOPP lectures, which lasted 25 minutes. Materials were available over four weeks on the LMS. One study investigator was the expert model in the videos, and videos were filmed in the simulation theater at each campus. EM videos addressed content related to seven concepts: taking report with a graphic organizer worksheet, prioritizing patient care, delegating to unlicensed assistive personnel, safety checks in patient rooms, focused physical assessments, safe medication administration, and using a standardized health care provider communication tool on the telephone (Lasbon, 2013). There were additional handouts related to performing safety checks and a focused physical assessment.
Participants in the VOPP group had access to 45 minutes of VOPP lecture, available over four weeks on the LMS; exposure time and content were deliberately matched to the EM video group. The script for the PowerPoint contained content related to the aforementioned seven concepts. There were additional handouts related to performing safety checks and a focused physical assessment.
TRADITIONAL SIMULATION PREPARATION/PASSIVE CONTROL
Participants had access to articles, policies, and procedures on the LMS. The estimated time required to review these materials was 45 minutes.
Data Collection and Instrumentation
Evaluation involved rater-observer measures of competence during pre- and posttest MPS assessments. Two consistent raters were used (one from each campus), and both raters were blinded to participants’ group assignments. Self-efficacy surveys were administered at baseline and after the four-week intervention.
The study used instruments that are well established in simulation research with novice nurses. The CSEI (Todd, Manz, Hawkins, Parsons, & Hercinger, 2008), a 22-item rater-observation measure with dichotomous rating, was used to measure competence. CSEI content validity was established by an expert panel of simulation faculty. The CSEI has been used recently in two studies with MPS (Franklin, Sideras et al., 2014; Frontiero & Glynn, 2012) and had good interrater reliability in pilot work (kappa statistic, 0.811).
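Interrater agreement on a dichotomous instrument such as the CSEI is conventionally quantified with Cohen’s kappa, which corrects raw percent agreement for the agreement two raters would reach by chance. The sketch below is a minimal illustration in Python with hypothetical ratings; it is not the study’s analysis code.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is agreement expected by chance from each rater's marginal totals.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items on which the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of marginal frequencies, summed over categories.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical CSEI-style dichotomous ratings (1 = behavior demonstrated).
rater_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_2 = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
print(round(cohens_kappa(rater_1, rater_2), 3))
```

Values near 1 indicate agreement well beyond chance; benchmarks commonly treat values above 0.80 as near-perfect agreement, consistent with the kappa statistics reported here.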
Immediately following the simulation, participants completed online debriefing questions to promote reaction to the simulation experience (Eppich & Cheng, 2015). Neither debriefing nor feedback was included as part of the study protocol because the intent was to evaluate the effect of the interventions directly. Following the online debriefing questions, participants completed a modified National League for Nursing Self-Confidence for Learning in Simulation (Jeffries & Rizzolo, 2006) survey. The Self-Confidence for Learning in Simulation is a seven-item Likert survey for participant self-report; modifications were made with permission from the National League for Nursing based on previous psychometric work that established validity and reliability (comparative fit index = 0.979, Tucker-Lewis index = 0.975, Cronbach’s alpha = .83; Franklin, Burns, & Lee, 2014). Scores are calculated by summing responses with higher responses representing greater self-efficacy.
Commercially available statistical software (Stata®/MP 13) was used with standard descriptive statistics to describe the sample; study personnel conducted all analyses under the direction of a consulting statistician. Analysis centered on comparison of novice nurses’ competence and self-efficacy among the three groups. Comparisons of baseline characteristics among the intervention and control groups were made using Pearson’s χ2 analysis. A mixed analysis of variance (Group × Time) was used on the raw change scores from pre- to posttest simulation for competent behavioral performance, and a second mixed analysis of variance (Group × Time) was used on the raw change scores for self-efficacy.
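With only two time points, testing the group effect on raw change scores is closely related to a one-way ANOVA across the three preparation groups. The following Python sketch shows how the F statistic is assembled from between-group and within-group variability; the change scores are hypothetical, not study data.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across k independent groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group variability: how far each group mean sits from the grand mean.
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # Within-group variability: how far individual scores sit from their group mean.
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical raw change scores (posttest minus pretest) for three groups.
em_change = [5, 7, 6, 4, 8]
vopp_change = [4, 6, 5, 5, 7]
reading_change = [3, 5, 4, 6, 5]
print(one_way_anova_f([em_change, vopp_change, reading_change]))
```

A small F (relative to the F distribution with k − 1 and n − k degrees of freedom) indicates that between-group differences in change are no larger than ordinary within-group variation, the pattern reported in this trial.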
Demographic characteristics of the 73 participants who completed the study protocol are available in Table 1. The three groups were similar in age, previous work experience, ethnicity, and previous degree. The randomized groups were not similar in the number of hours worked per week; the EM group worked more hours per week on average than the reading group, which in turn worked more hours than the VOPP group. Irrespective of the study group to which they were randomized, participants viewed intervention materials a similar number of times. The kappa statistic for interrater reliability on the CSEI was 0.9044, which represents near-perfect agreement between the two raters across both data collection sites.
There was no statistically significant difference among the three groups in raw change in competent behavioral performance, F(2, 70) = 0.32, p = .727, η2 = .009. Raw change in competence scores was greater in the EM group (d = 0.151) and the VOPP group (d = 0.102) than in the reading group. The EM group started with the highest competence score and showed some gain, but the increase was not statistically significant. On average, most participants started with a score of about 15 and finished with a score of about 20, out of a maximum of 57. When the statistician collapsed study groups to evaluate the change between pre- and posttest simulations, there was a statistically significant increase.
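The between-group effect sizes reported above (e.g., d = 0.151) are Cohen’s d values: the difference between two group means divided by their pooled standard deviation. A minimal Python sketch, using hypothetical change scores rather than study data:

```python
from statistics import mean, stdev

def cohens_d(x, y):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled_var = ((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2)
    return (mean(x) - mean(y)) / pooled_var ** 0.5

# Hypothetical change scores: an intervention group versus a reading group.
print(round(cohens_d([6, 8, 7, 9, 5], [5, 7, 6, 8, 4]), 2))
```

By common convention, d ≈ 0.2 is a small effect and d ≈ 0.5 a medium effect, which is why the differences favoring EM and VOPP here are described as small.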
There was no statistical difference between the three groups in terms of raw or relative change in self-efficacy score. There was no correlation between relative change in self-efficacy and raw change in competence score.
MPS helps novice nurses develop higher order thinking skills, such as managing multiple tasks, anticipating changes in patient status, and communicating with other providers. This is the first multisite trial to test how three simulation preparation methods and repeated doses of MPS influence novice nurses’ competence and self-efficacy. Findings from this trial of 73 prelicensure novice nurses enrolled in capstone clinical courses at two schools of nursing indicate that novice nurses need repeated MPS experiences to increase competent behavioral performance and that there is no relationship between competent behavioral performance and self-efficacy.
Interestingly, results from this multisite, fully powered randomized controlled trial differ from those of the single-site randomized pilot study. In the pilot, with a smaller group of participants from one nursing school, effect size estimates favored EM videos for increasing competent behavioral performance (Franklin, Sideras et al., 2014). However, there was no statistical difference among the three study groups representing EM, VOPP, and traditional reading assignments as simulation preparation in this larger, more diverse sample. On the one hand, researchers and educators now have concrete evidence from a rigorous study that differences in simulation preparation did not predict behavioral performance in MPS. On the other hand, educators may wish to consider potential causes of the mismatch between the pilot study and full-scale trial results.
One potential confounding variable is the difference in previous simulation experience between the two data collection sites. At the campus where the majority of participants were recruited, nursing students were much more simulation savvy and had completed some individual benchmark simulations as part of their curriculum. Participants from this campus in the reading assignments control group had much higher pretest scores than the simulation-naïve participants from the second data collection site; this could have biased study findings against the effectiveness of the EM and VOPP interventions.
Even though researchers did not detect a significant difference in behavior change among the three preparation groups, the results of this study are meaningful for how educators plan and implement MPS learning activities. It is important for educators to remember that after the statistician collapsed study groups and compared behavioral performance at pretest versus posttest, the results were statistically significant. This effect of time means there was significant (statistical and clinical) overall improvement in performance over time: participants benefited from experiencing the MPS learning activity more than once, regardless of which simulation preparation method they used. These results are consistent with other pre- and posttest design simulation studies with practicing nurses (Abe, Kawahara, Yamashina, & Tsuboi, 2013; Abelsson, Lindwall, Suserud, & Rystedt, 2017), as well as with novice nurses in a group MPS in which clinical judgment was the variable of interest (Ironside & Jeffries, 2010).
From a measurement standpoint, it is important to acknowledge the strengths of, and opportunities for improvement in, the CSEI (Todd et al., 2008). Also known as the Creighton Competency Evaluation Instrument, the tool has been used in several simulation studies, including the landmark National Simulation Study (Hayden, Smiley, Alexander, Kardong-Edgren, & Jeffries, 2014). In this study, researchers used total CSEI scores to represent competent behavioral performance. The researchers captured an indirect measure of novice nurses’ specific ability because the rating scale is dichotomous and behavioral descriptors frequently did not differentiate between “good performance” and mere “performance.”
Measurements of competence using behavioral performance measures are only as good as the instrument itself. The research team appreciated that the CSEI rewarded participants for making good choices in their nursing judgment and did not penalize nurses for making the “wrong” choice. Nonetheless, on occasion, it was impossible to determine whether learners who scored 10 on the CSEI did so because (a) they made poor choices and were essentially frozen in action, unsure of their next move in managing multiple patients simultaneously, or (b) they spent all of their time with one patient and never examined a patient who represented a lesser priority.
There are several important implications for using MPS as a learning activity. First, the MPS experience provides invaluable opportunities for learners to demonstrate priority setting and delegation to unlicensed assistive personnel, which helps ease the transition from new graduate nurse to professional practice. One of the benefits of using multiple patients in terms of clinical reasoning is examining how learners juggle competing priorities within and between patients. Second, there are many logistics and challenges to consider, such as manikin resources, space, actors, expected behaviors, and scenario complexity; the research team addressed multisite research logistics in a separate publication (Franklin et al., 2018). Third, it is important for educators to decide the level of complexity that is appropriate for prelicensure MPS participants. In this study, three manikin patients represented the acuity of patients on a telemetry unit. Salient elements of complexity include urgency, time, and the skill mix required. Nurse educators will want to balance patient acuity with learner level and account for learner variance in which patient is seen first and how that choice affects the total time spent with one patient before moving on to the next. Finally, educators need to consider the implications of using MPS with groups or individual learners. From a research standpoint, having one participant complete MPS at a time helped mitigate threats to validity by preventing a stronger novice nurse from “covering for” a peer whose clinical judgment and higher order thinking skills were not as strong. However, conducting MPS with individual learners requires more time and simulation space than conducting it with groups.
MPS provides simulation researchers and educators with opportunities to evaluate novice nurses’ time management. On the surface, researchers can evaluate time management based on how much time novice nurses spend in each patient room and on each phase of a task (e.g., safety checks, focused physical assessment, or medication preparation). At a deeper level, researchers gain a picture of novice nurses’ clinical judgment and psychomotor skills because a nurse may fixate on a particular task for too long. In this study, researchers observed that participants who had trouble with the psychomotor skill of insulin preparation frequently could not manage their time effectively enough to deliver care to all three of their patients. In another MPS study, researchers found that novice nurses frequently missed cues related to deteriorating patients, which influenced how they managed their time and the clinical situation (Beroz, 2016). It is important for researchers and educators to match the complexity of a scenario to novice nurses’ abilities: MPS should be challenging enough to promote learning but not so challenging that time management seems impossible.
Even though the results indicate that EM videos are not superior to other simulation preparation methods, this study has important implications for nursing education. Participants who watched EM videos improved in priority setting, delegation, safe medication administration, communication using the Situation, Background, Assessment, and Recommendation framework, and time management, although these gains were not significantly greater than those of the other groups. Though educators introduce some of these concepts early in the nursing curriculum, EM videos may help learners understand these concepts more effectively than other teaching methods, such as didactic lecture (Jarvill et al., 2018). EM videos can have a powerful impact on learners, particularly because they are case-based, embedded in contextual knowledge, and available for repeated access (Coram, 2016; Kardong-Edgren et al., 2015). The upfront resources required for filming EM videos are small compared with their application to several cohorts of nursing learners in future semesters. When carefully crafted to exemplify current nursing practice, EM videos are effective and durable.
This study is rigorous in terms of research design, especially because researchers recruited the sample from two nursing programs in different parts of the United States. Multisite research strengthens the generalizability of findings. Furthermore, researchers from each campus recognized missed opportunities, especially related to medication administration safety with pro re nata nitroglycerin. Multisite research thus allows nurse scientists to recognize big-picture curricular gaps. Researchers who implement multisite educational research studies can inform nursing education by recommending curricular improvements that accelerate novice nurses’ practice readiness.
STRENGTHS AND LIMITATIONS
One of the main strengths of the current study is the randomized controlled design, which minimized threats to internal validity and supported causal inference. Even though researchers recruited participants from two schools of nursing in diverse regions of the United States, the sample may underrepresent some groups based on age, gender, or race. One demographic characteristic that differed among study groups was previous work experience in health care; the EM and reading groups generally had more work experience, which could have contributed to their competence. Findings could differ if the three study groups’ demographic characteristics were more similar. Despite this limitation, the analysis generated statistically significant results when comparing participants’ competent behavioral performance at pre- and posttest simulation assessments.
IMPLICATIONS FOR NURSING EDUCATION
This randomized controlled trial makes a significant contribution to nursing education because it represents clear empirical evidence that novice nurses benefit from experiencing MPS more than once during their nursing curriculum. MPS appears to have a positive impact on higher order thinking skills, like managing multiple tasks, anticipating changes in patient status, and communicating with other providers. Future research is needed to investigate the dose-added benefit when novice nurses participate in MPS on more than two occasions.
Abe Y., Kawahara C., Yamashina A., & Tsuboi R. (2013). Repeated scenario simulation to improve competency in critical care: A new approach for nursing education. American Journal of Critical Care
, 22, 33–40. doi:
Abelsson A., Lindwall L., Suserud B., & Rystedt I. (2017). Effect of repeated simulation on the quality of trauma care. Clinical Simulation in Nursing
, 13, 601–608. doi:
Ancy A. R., & Nagar C. (2016). Powerpoint presentation on computer simulation, blended learning, and educational podcasts. Retrieved from https://www.slideshare.net/rado001/powerpoint-presentation-on-computer-simulationblended-learning-and-educational-podcasts
Aronson B., Glynn B., & Squires T. (2013). Effectiveness of a role-modeling intervention on student nurse simulation competency. Clinical Simulation in Nursing
, 9, e121–e126. doi:
Ball L. S., & Kilger L. (2016). Analyzing nursing student learning over time in simulation. Nursing Education Perspectives
, 37, 328–330. doi:
Bandura A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review
, 84, 191–215.
Benner P. (1982). From novice to expert. American Journal of Nursing
, 82, 402–407.
Beman S. B. (2017). Evaluation of student competence in simulation following a prebriefing activity: A pilot study. University of Wisconsin-Milwaukee. Theses and Dissertations
. 1585. Retrieved from https://dc.uwm.edu/etd/1585
Berkow J. D., Virkstis K., Steward J., & Conway L. (2009). Assessing new graduate nurse performance. Nurse Educator
, 34, 17–22.
Beroz S. (2016). Explaoring the performance outcomes of senior-level nursing students in a multiple-pateint simulation. Nursing Education Perspectives
, 37(8), 333–334. doi:
Blodgett T. J., Blodgett N. P., & Bleza S. (2016). Simultaneous multiple patient simulation in undergraduate nursing education: A focused literature review. Clinical Simulation in Nursing
, 12(8), 346–355. doi:
Burgess A., Murphy Buc H., & Brennan J. (2018). Using a complex patient management scenario to help bridge the education-practice gap. Nursing Education Perspectives
, 39, 116–118. doi:
Cantrell M. A., Franklin A. E., Leighton K., & Carlson A. (2017). The evidence in simulation-based learning experiences in nursing education and practice: An umbrella review. Clinical Simulation in Nursing
, 13, 634–667. doi:
Chunta K., & Edwards T. (2013, November). Multiple-patient simulation
to transition students to clinical practice. Clinical Simulation in Nursing
, 9, e491–e496. doi:
Coram C. (2016). Expert role modeling effect on novice nursing students’ clinical judgment. Clinical Simulation in Nursing
, 12, 385–391. doi:
Davies J., Nathan M., & Clarke D. (2012, April). An evaluation of a complex simulated scenario with final year undergraduate children’s nursing students. Collegian
, 19, 131–138. doi:
Eppich W., & Cheng A. (2015). Promoting excellence and reflective learning in simulation (PEARLS): Development and rationale for a blended approach to health care simulation debriefing. Simulation in Healthcare
, 10, 106–115. doi:
Fernandez R., Pearce M., Grand J. A., Ranch T. A., Jones K. A., Chao G. T., & Kozlowski S. W. (2013). Evaluation of a computer-based educational intervention to improve medical teamwork and performance during simulated patient resuscitations. Critical Care Medicine, 41, 2441–2562.
Franklin A. E., Burns P., & Lee C. S. (2014). Psychometric testing on the NLN Student Satisfaction and Self-Confidence in Learning, Simulation Design Scale, and Educational Practices Questionnaire using a sample of pre-licensure novice nurses. Nurse Education Today, 34, 1298–1304.
Franklin A. E., Dodd C., Sideras S., & Hutson J. (2018). A toolbox to make multisite simulation research successful. Clinical Simulation in Nursing, 21, 16–22.
Franklin A. E., Gubrud-Howe P., Sideras S., & Lee C. S. (2015). Effectiveness of simulation preparation on novice nurses’ competence and self-efficacy in a multiple-patient simulation. Nursing Education Perspectives, 36, 324–325.
Franklin A. E., & Lee C. S. (2014). Effectiveness of simulation for improvement in self-efficacy among novice nurses: A meta-analysis. Journal of Nursing Education, 53, 607–614.
Franklin A. E., Sideras S., Gubrud-Howe P., & Lee C. S. (2014). Comparison of expert modeling versus voice-over PowerPoint lecture and pre-simulation readings on novice nurses’ competence of providing care to multiple patients. Journal of Nursing Education, 53, 615–622.
Frontiero L. A., & Glynn P. (2012). Evaluation of senior nursing students' performance with high fidelity simulation. Online Journal of Nursing Informatics, 16. Retrieved from ojni.org/issues/?p=2037
Haddeland K., Slettebo A., Carstens P., & Fossum M. (2018). Nursing students managing deteriorating patients: A systematic review and meta-analysis. Clinical Simulation in Nursing, 21, 1–15.
Haman R. P. (2014). Creating transition to professional practice model for new nurses (Doctoral dissertation, Walden University). Retrieved from https://search-proquest-com.ezproxy.tcu.edu/docview/1619572251?pq-origsite=summon
Hayden J., Smiley R. A., Alexander M., Kardong-Edgren S., & Jeffries P. R. (2014). The NCSBN National Simulation Study: A longitudinal, randomized controlled study replacing clinical hours with simulation in pre-licensure nursing education. Journal of Nursing Regulation, 5(2, supplement). Retrieved from https://www.ncsbn.org/JNR_Simulation_Supplement.pdf
Horsley T. L., Bensfield L. A., Sojka S., & Schmitt A. (2014). Multiple-patient simulations: Guidelines and examples. Nurse Educator, 39(6), 311–315.
Ironside P. M., & Jeffries P. R. (2010). Using multiple-patient simulation experiences to foster clinical judgment. Journal of Nursing Regulation, 1, 38–41.
Ironside P. M., Jeffries P. R., & Martin A. (2009). Fostering patient safety competencies using multiple-patient simulation experiences. Nursing Outlook, 57, 332–337.
Jarvill M., Kelly S., & Krebs H. (2018). Effect of expert role modeling on skill performance in simulation. Clinical Simulation in Nursing, 24, 25–29.
Jeffries P. R., & Rizzolo M. A. (2006). Designing and implementing models for the innovative use of simulation to teach nursing care of ill adults and children: A national, multi-site, multi-method study. New York, NY: National League for Nursing.
Johnson E. A., Lasater K., Hodson-Carlton K., Siktberg L., Sideras S., & Dillard N. (2012). Geriatrics in simulation: Role modeling and clinical judgment effect. Nursing Education Perspectives, 33, 176–180.
Josephson J., & Butt A. (2014). Virtual multipatient simulation: A case study. Clinical Simulation in Nursing, 10(5), e235–e240.
Kaplan B., & Ura D. (2010). Use of multiple patient simulators to enhance prioritizing and delegating skills for senior nursing students. Journal of Nursing Education, 49, 371–377.
Kardong-Edgren S. (2013). Bandura’s self-efficacy theory…Something is missing. Clinical Simulation in Nursing, 9, e327–e328.
Kardong-Edgren S., Butt A., Macy R., Harding S., Roberts C. J., McPherson S., … Erickson A. (2015). Expert modeling, expert/self-modeling versus lecture: A comparison of learning, retention, and transfer of rescue skills in health professions students. Journal of Nursing Education, 54, 185–191.
Kavanagh J. M., & Szweda C. (2017). A crisis in competency: The strategic and ethical imperative to assessing new graduate nurses’ clinical reasoning. Nursing Education Perspectives, 38, 57–62.
Lin P. S., Viscardi M. K., & McHugh M. D. (2017). Factors influencing job satisfaction of new graduate nurses participating in nurse residency programs: A systematic review. Journal of Continuing Education in Nursing, 45, 439–452.
Monagle J. L., Lasater K., Stoyles S., & Dieckmann N. (2018). New graduate nurse experiences in clinical judgment: What academic and practice educators need to know. Nursing Education Perspectives, 39, 201–207.
Nowell L. S. (2016). Delegate, collaborate, or consult? A capstone simulation for senior nursing students. Nursing Education Perspectives, 37(1), 54–55.
Parker V., Giles M., Lantry G., & McMillan M. (2014). New graduate nurses’ experiences in their first year of practice. Nurse Education Today, 34, 150–156.
Prince W. L., Winmill D., Wing D., & Kahoush A. (2016). Nursing students’ perceptions of a multiple-patient simulation experience. Nursing Education Perspectives, 37, 331–332.
Radhakrishnan K., Roche J., & Cunningham H. (2007). Measuring clinical practice parameters with human patient simulation: A pilot study. International Journal of Nursing Education Scholarship, 4(1), 1–8.
Sharpnack P. A., Goliat L., & Rogers K. (2013). Using standardized patients to teach leadership competencies. Clinical Simulation in Nursing, 9, e95–e102.
Spector N., Blegen M. A., Silvestre J., Barnsteiner J., Lynn M. R., Ulrich B., … Alexander M. (2015). Transition to practice study in hospital settings. Journal of Nursing Regulation, 5, 24–38.
Todd M., Manz J. A., Hawkins K. S., Parsons M. E., & Hercinger M. (2008). The development of a quantitative evaluation tool for simulations in nursing education. International Journal of Nursing Education Scholarship, 5(1), Article 41, 1–17.