

Feature Articles

Retention of Pediatric Resuscitation Performance After a Simulation-Based Mastery Learning Session

A Multicenter Randomized Trial

Braun, LoRanée MD1; Sawyer, Taylor DO, MEd2; Smith, Kathleen MD1; Hsu, Angela MD2; Behrens, Melinda MD1; Chan, Debora PharmD2; Hutchinson, Jeffrey MD3; Lu, Downing MD, MPH3; Singh, Raman DO4; Reyes, Joel DO4; Lopreiato, Joseph MD, MPH3

Pediatric Critical Care Medicine 16(2):131–138, February 2015. DOI: 10.1097/PCC.0000000000000315

Abstract

Objectives: 

Using simulation-based mastery learning, residents can be trained to achieve a predefined performance standard in resuscitation. After mastery is achieved, performance degradation occurs over time. Prior investigations have shown performance retention of 12–14 months following intensive simulation-based mastery learning sessions. We sought to investigate the duration of mastery-level resuscitation performance retention after a single 1- to 2-hour simulation-based mastery learning session.

Design: 

Randomized, prospective trial.

Setting: 

Medical simulation laboratory.

Subjects: 

Convenience sample of 42 pediatric residents.

Interventions: 

Baseline resuscitation performance was determined on four standardized simulation scenarios. After determination of baseline performance, each resident repeated each scenario, as needed, until mastery-level performance was achieved. Residents were then randomized and retested 2, 4, or 6 months later. Statistical analyses of scores at baseline and retesting were used to determine performance changes from baseline and performance retention over time.

Measurements and Main Results: 

Forty-two residents participated in the study (12 in 2 mo group, 14 in 4 mo group, and 16 in 6 mo group). At baseline, postgraduate year-3 residents performed better than postgraduate year-1 residents (p = 0.003). Overall performance on each of the four scenarios improved at retesting. The percent of residents maintaining mastery-level performance showed a significant linear decline (p = 0.039), with a drop at each retesting interval; 92% retained mastery at 2 months, 71% at 4 months, and 56% at 6 months. There was no difference in retention between postgraduate year-1, postgraduate year-2, and postgraduate year-3 residents (p = 0.14).

Conclusions: 

Residents displayed significant improvements in resuscitation performance after a single simulation-based mastery learning session, but performance declined over time, with less than 60% retaining mastery-level performance at 6 months. Our results suggest that relatively frequent refresher training is needed after a single simulation-based mastery learning session. Additional research is needed to determine the duration of performance retention following any specific simulation-based mastery learning intervention.

In clinical practice, there is limited opportunity for pediatric residents to achieve competency in pediatric and neonatal resuscitation due to the infrequency of acute life-threatening illnesses and cardiopulmonary arrests in young children and neonates. Life support courses, such as Pediatric Advanced Life Support (PALS) and the Neonatal Resuscitation Program (NRP), may not provide sufficient training for pediatric residents to become competent in pediatric and neonatal resuscitation (1, 2). Therefore, simulation training has been advocated as a method for pediatric residents to achieve competency in pediatric and neonatal resuscitation (3–6).

Using a rigorous form of competency-based education, known as “simulation-based mastery learning (SBML),” residents can be trained to achieve a predefined performance standard in cardiopulmonary resuscitation (7). SBML involves an initial evaluation of baseline performance, clear performance objectives with a mastery-level performance standard, engagement in simulation-based deliberate practice focused on reaching the standard, formative feedback based on performance, and continued practice until the mastery-level standard is reached (8). SBML has been shown to improve the quality of care provided by residents and decrease procedural complication rates in several areas of medicine and surgery (9–12). After performance at a set standard is achieved through SBML, it is expected that some degree of performance degradation will occur over time. Prior studies of performance retention after an intensive SBML intervention for Internal Medicine residents have shown retention of performance for up to 12–14 months (13, 14). These intensive SBML sessions ranged from 8 hours to 3 days. Despite their clearly beneficial effect, such intensive SBML sessions may be difficult to conduct in some training environments, and a single, shorter duration SBML session may be more feasible. Currently, it is unknown how long residents retain mastery-level resuscitation performance after a single SBML session, and the impact of the intensity and duration of the SBML session on performance retention remains to be defined.

The purpose of this study was to examine the retention of mastery-level resuscitation performance in pediatric residents after a single brief SBML session. Our hypotheses were as follows: 1) resident performance in pediatric and neonatal resuscitation would decline as a function of time after initial mastery-level performance was achieved, 2) an optimal retraining interval after a single SBML session could be defined, and 3) the results could be compared against those of prior studies of more intensive SBML interventions to gain insight into the duration of mastery-level performance retention following different initial SBML interventions.

METHODS

Overview

The study followed a multicenter, prospective randomized design. Pediatric residents in four different military pediatric residency programs (Madigan Army Medical Center, Tripler Army Medical Center, Walter Reed National Military Medical Center, and San Antonio Uniformed Services Health Education Consortium) were asked to participate in the study. Subjects who volunteered to participate completed a single SBML session and were randomized to return to the simulation center at a predetermined interval of 2, 4, or 6 months to complete a single retesting session. Subjects were randomized at each training site using a random number generator. Statistical analyses of scores at baseline and retesting were used to determine performance changes and performance retention over time. All subjects completed preinvestigational and postinvestigational questionnaires that included demographic data and information on participation in real and simulated resuscitations. Written informed consent was obtained from all participants. The study protocol was approved by the institutional review board at each site, and investigators adhered to the policies for protection of human subjects as prescribed in Code of Federal Regulations, title 45, part 46.
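The paper specifies only that a random number generator was used at each site. As a rough sketch (the resident identifiers, function name, and unblocked design are assumptions, not study details), per-site simple randomization into the three retesting groups could look like this:

```python
import random

# Illustrative only: the study reports per-site randomization with a random
# number generator but does not specify the algorithm. Resident IDs and the
# function name here are hypothetical.
RETEST_GROUPS_MONTHS = [2, 4, 6]

def randomize_site(resident_ids, seed=None):
    """Simple (unblocked) randomization of one site's volunteers."""
    rng = random.Random(seed)
    return {rid: rng.choice(RETEST_GROUPS_MONTHS) for rid in resident_ids}

assignments = randomize_site(["R01", "R02", "R03", "R04"])  # e.g., {'R01': 4, ...}
```

Simple (rather than block) randomization would be consistent with the unequal group sizes reported in the Results (12, 14, and 16 residents).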

Simulation Scenarios

Four standardized simulation scenarios were developed by the investigators specifically for the study. Scenario design was based on previously published pediatric simulation research (15). The four scenarios included hypovolemic shock, asystole, respiratory arrest, and meconium delivery (Table 1). Each scenario was completed by a single resident. At each center, a single assistant “nurse” was available to assist the resident during the baseline and retest simulation scenarios. The nurse was able to help with procedures such as starting a peripheral IV and providing chest compressions. The nurse could not perform physician-level procedures, such as intubation and intraosseous placement. Detailed instructions were provided to all participating sites in order to standardize the environmental aspects of the simulation scenarios, including equipment lists, equipment location, and the assistant’s responses/actions for each simulation. A frequently asked questions (FAQ) document was also created and distributed to investigators. Simulations were conducted in the local simulation center at each research site. The Laerdal SimBaby (Laerdal Medical, Wappingers Falls, NY) mannequin was used for all scenarios. A brief description of each scenario is provided in Table 1.

Table 1:
Simulation Scenarios Utilized for Simulation-Based Mastery Learning Sessions

Performance Evaluation

Performance on each scenario was based on scores derived from specially developed scoring matrices that accompanied each scenario. Each scoring matrix included key steps in the resuscitation scenario and observable performance metrics, many of which were time-based. Performance matrices were based on published PALS and NRP guidelines. Easily observable and discrete actions, and specific time metrics, were used to produce more objective and reliable scores. Scores of 0 (not done), 1 (performed with difficulty, out of order, or in longer than the target time), or 2 points (performed without difficulty and/or within the allotted time) were assigned for each performance metric, based on the level of performance observed. An example of a scenario with its scoring matrix is provided in Figure 1. Iterative review and β-testing were used to ensure that each scenario worked well and that the performance objectives in the scoring matrix were reasonable and feasible.
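As a concrete illustration of how such a matrix converts observed actions into a scenario score, the sketch below sums 0/1/2 ratings over a set of metrics. The metric names and rating labels are invented for illustration and are not the study's actual instrument (see Fig. 1 and Table 1).

```python
# Illustrative sketch of a scoring matrix; the metric names and structure
# below are invented, not the study's actual instrument.
RATING_POINTS = {
    "not_done": 0,
    "with_difficulty_out_of_order_or_late": 1,
    "correct_and_within_target_time": 2,
}

HYPOVOLEMIC_SHOCK_METRICS = [
    "assesses_airway_breathing_circulation",
    "recognizes_hypovolemic_shock",
    "obtains_vascular_access",
    "orders_isotonic_fluid_bolus",
    "reassesses_perfusion_after_bolus",
    "orders_repeat_bolus_as_indicated",
]

def score_scenario(observed_ratings):
    """Sum the 0/1/2 rating for every metric (max = 2 x number of metrics)."""
    return sum(RATING_POINTS[observed_ratings[m]] for m in HYPOVOLEMIC_SHOCK_METRICS)
```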

Figure 1:
Example scenario, including brief clinical history, flow diagram, and performance evaluation matrix with predefined performance standards and scores. A&B = airway and breathing, BP = blood pressure, HR = heart rate, IO = intraosseous, IVF = IV fluid, RR = respiratory rate, SBP = systolic blood pressure.

The validity of the data derived from the scoring matrices was determined using several lines of evidence including content, response process, relationship to other variables, internal structure, and consequence (16). Content validity was ensured by basing the performance metrics on established PALS and NRP treatment protocols. Response process validity, for example, protecting data integrity by controlling for potential sources of error associated with test administration, was provided by standardizing the simulation environments and scenarios. The relationship of the scoring matrices to other variables was evident in that the matrices were based on accepted critical care management guidelines and modeled after previously published pediatric simulation research instruments. Internal structure validity was evident in that the matrices were able to reliably differentiate postgraduate year (PGY)-3 resident performance from PGY-1 resident performance at baseline (p = 0.003). Consequence validity refers to the impact on examinees from the assessment. A critical component of consequential validity is that no harm comes from the assessment or, at the very least, that the benefits of the assessment outweigh the harm. Performance of subjects during the research study was not shared with their program director and thus did not carry negative consequences. Subjects in the study likely derived significant positive benefits from participation, as witnessed by their performance score improvements.

Performance evaluation was conducted by a local investigator at each center (2–3 per center). Rater training was provided, and detailed instructions on scenario scoring were developed and distributed to investigators at each study site as part of the FAQ document. The reliability of the scoring matrices was evaluated by way of interrater reliability (IRR) testing using Cronbach’s α. Three example videos showing “poor,” “moderate,” and “excellent” performances were developed for each of the four simulation cases (12 videos total). The example videos were reviewed and scored by the local investigators at each site. The IRR of the scoring matrices was excellent, with a mean Cronbach’s α of greater than 0.9.
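For reference, Cronbach's α over raters (treating each rater as an "item" and each example video as a case) can be computed with the standard formula below; this is a generic sketch, not the authors' analysis code.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha with raters treated as 'items'.

    ratings: 2-D array, one row per scored example video, one column per rater.
    """
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                          # number of raters
    rater_var = ratings.var(axis=0, ddof=1).sum() # sum of per-rater variances
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of video totals
    return (k / (k - 1)) * (1.0 - rater_var / total_var)

# e.g., alpha = cronbach_alpha(scores_for_12_videos)  # study reports mean alpha > 0.9
```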

A modified Angoff method was used to establish a defensible score representing “mastery” on each of the four scenarios (17, 18). Using this method, a panel of expert pediatric and neonatal intensive care physicians, who were PALS and NRP instructors, reviewed each of the four scoring matrices and were asked to indicate the level of performance a “competent trainee” would demonstrate on each matrix. A competent trainee was defined as one who was competent to perform the resuscitation scenario independently without direct supervision. Each of the four scenarios included six to seven discrete performance metrics scored from 0 to 2, resulting in a maximum score of 12–14. The performance metrics and the mastery-level score for each scenario are provided in Table 1.
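The paper does not state the exact aggregation rule, but in a common modified Angoff variant the cut score is the judges' mean expected item score summed across metrics. A minimal sketch under that assumption, with hypothetical panel ratings:

```python
import numpy as np

def angoff_cut_score(judge_expectations):
    """Modified Angoff sketch (one common variant, assumed here): each judge
    estimates the 0-2 score a borderline 'competent trainee' would earn on
    every metric; the cut score is the mean expectation summed over metrics.

    judge_expectations: 2-D array, one row per judge, one column per metric.
    """
    return float(np.asarray(judge_expectations, dtype=float).mean(axis=0).sum())

# Hypothetical four-judge panel for a six-metric scenario (max score 12):
cut = angoff_cut_score([
    [2, 1, 2, 2, 1, 2],
    [2, 2, 2, 1, 1, 2],
    [2, 1, 2, 2, 2, 2],
    [1, 2, 2, 2, 1, 2],
])  # -> 10.25; in practice this would be rounded to an attainable score
```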

Mastery-Learning Session

To begin the study, each subject participated in an SBML session in his/her own hospital’s simulation center. Each resident in the study reported to the simulation center at a predetermined date and time established by the local investigators at that site. The environment and equipment layout for each resident followed the standards established for the study. At the start of the SBML session, baseline performance on each of the four scenarios was determined. Each scenario was conducted in a preset sequence (1, hypovolemic shock; 2, asystole; 3, respiratory arrest; and 4, meconium delivery) to ensure standardization of the simulation intervention across the four study sites. A short break (< 5 min) for resetting of equipment was provided between each simulation scenario. During each scenario, the single resident was assisted by a single assistant who performed only in a scripted manner, providing help with technical skills (gathering equipment, chest compressions, IV access, etc.) but not providing directions or leading the resuscitation. During the baseline performance evaluation session, no feedback was provided between the four sequential simulations in order to avoid positively impacting performance on subsequent scenarios. Baseline performance was scored by a local investigator at the time of the session using the scoring matrices.

After baseline performance was determined, directed feedback was provided on performance during each of the four scenarios. Each subject was then required to repeat each of the four simulation scenarios, as needed, until a mastery-level score was achieved on all four scenarios. For each scenario in which mastery-level performance was not observed, directed feedback was again provided and the subject practiced the scenario until mastery-level performance was reached. This design ensured that mastery-level performance was observed in all subjects for all four scenarios prior to leaving their initial SBML session, thereby ensuring mastery-level baseline performance in all subjects. Total time for the SBML session was 1–2 hours, depending on the number of times a resident needed to repeat each of the four simulation scenarios to achieve mastery-level performance.
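The session structure described above (baseline attempts without feedback, then directed feedback and repeated practice on each scenario until its mastery cut score is reached) can be summarized in a conceptual sketch; the function names and cutoff mapping are placeholders, not study artifacts.

```python
SCENARIOS = ["hypovolemic_shock", "asystole", "respiratory_arrest", "meconium_delivery"]

def mastery_session(resident, mastery_cutoff, run_scenario, give_feedback):
    """Conceptual sketch of the SBML session; run_scenario and give_feedback
    stand in for the live simulation and the instructor's directed feedback."""
    # Baseline: all four scenarios in fixed order, no feedback in between.
    baseline = {s: run_scenario(resident, s) for s in SCENARIOS}
    # Directed feedback, then deliberate practice until mastery on every scenario.
    for scenario in SCENARIOS:
        score = baseline[scenario]
        give_feedback(resident, scenario, score)
        while score < mastery_cutoff[scenario]:
            score = run_scenario(resident, scenario)
            give_feedback(resident, scenario, score)
    return baseline  # every resident leaves the session at mastery on all four scenarios
```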

Retesting Session

In order to evaluate performance changes from baseline, and mastery-level performance retention over time, subjects were randomized into one of three study groups: 2-month retesting, 4-month retesting, or 6-month retesting. According to their group assignment, each subject returned to the simulation center 2, 4, or 6 months after the initial SBML session and repeated each of the same four simulation scenarios. The environment and conduct of the retesting session mirrored those of the initial SBML session, with each resident working in a standardized environment with a single assistant nurse. Retesting was conducted on the same four scenarios as the baseline assessment. Scenarios were conducted in the same order in an attempt to emulate the context of the simulations as closely as possible between the baseline and retesting sessions. A short break (< 5 min) for resetting of equipment was provided between each scenario. Performance was scored by a nonblinded local investigator using the same scoring tools. Focused feedback based on performance was provided to subjects after they completed all four retesting scenarios. However, no repeated attempts or deliberate practice were allowed during the retesting session.

Statistical Analysis

An a priori sample size estimate, based on an assumed 90% retention of mastery-level performance at the 2-month retest, 80% mastery retention at 4 months, and 70% mastery retention at 6 months, required 26 residents in each study group to provide a power of 80% with an α level of 0.05. Demographic data were analyzed using one-way analysis of variance (ANOVA) or Kruskal-Wallis one-way ANOVA on ranks. Changes in performance scores from baseline to retesting for the hypovolemic shock and asystole scenarios were evaluated using a paired Student t test, as the data were parametrically distributed. Changes in performance scores from baseline to retesting for the respiratory arrest and meconium delivery scenarios were evaluated using a Wilcoxon signed rank test, as the data were nonparametrically distributed. Performance retention was analyzed based on both resident percentage scores and the percentage of residents who retained mastery-level performance at the time of retesting. Differences in percentage scores between study groups were determined by logistic regression with the follow-up group assignment as a continuous variable. Logistic regression and Wald chi-square analyses were used to determine differences in mastery-level performance retention between study groups. Results were adjusted for baseline performance scores (analysis of covariance [ANCOVA]) using a logistic regression model with the possible confounding effects as covariates. The odds ratio (OR) for a one-unit change was based on exponentiating the variable estimate. Differences in performance between PGY groups at baseline were based on ANOVA. Differences between PGY groups at retest were based on ANCOVA, adjusted for baseline scores. In order to determine whether any one of the 25 key performance metrics drove the differences in performance among the three study groups at the time of retesting, a Kruskal-Wallis one-way ANOVA on ranks was performed on each of the 25 key performance metrics across the three study groups. A p value of less than 0.05 was considered significant. Data were analyzed using SAS 9.3 (SAS Institute, Cary, NC).
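To make the retention analysis concrete, the sketch below fits a logistic regression of mastery retention on the retesting interval treated as a continuous predictor and exponentiates the coefficient to obtain an OR, as described above. The data are reconstructed from the group-level retention counts reported in the Results; the baseline-score adjustment used by the authors is omitted, and this is not their SAS analysis code.

```python
import numpy as np
import statsmodels.api as sm

# Reconstructed group-level data (11/12 retained mastery at 2 mo, 10/14 at
# 4 mo, 9/16 at 6 mo); individual-level covariates are not available here.
months = np.repeat([2.0, 4.0, 6.0], [12, 14, 16])
retained = np.concatenate([
    np.r_[np.ones(11), np.zeros(1)],
    np.r_[np.ones(10), np.zeros(4)],
    np.r_[np.ones(9),  np.zeros(7)],
])

X = sm.add_constant(months)                   # intercept + retesting interval (mo)
fit = sm.Logit(retained, X).fit(disp=False)
or_per_month = np.exp(fit.params[1])          # OR for a one-unit (1 mo) change
```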

RESULTS

Forty-two of 101 total residents (42%) from the four pediatric residency programs volunteered to participate in the study. Of these, 12 subjects were randomized to the 2-month retesting group, 14 to the 4-month group, and 16 to the 6-month group. There were no differences in age, gender, PGY, or experience with real or simulated pediatric or neonatal resuscitation at baseline, or at the time of retesting, between the three study groups (Table 2). No subjects were lost to follow-up. All 42 residents who participated in the baseline SBML session returned at their assigned time for retesting.

Table 2:
Demographic Data Obtained From Pediatric Residents

At baseline, mean performance scores among PGY-3 residents were significantly higher than among PGY-1 residents (PGY-3 baseline score mean, 79.7% [SD, 7.4]; median, 83% [range, 29–100] vs PGY-1 baseline score mean, 69.3% [SD, 10.9]; median, 73% [range, 17–100]; p = 0.003). Resident performance scores were higher at the time of retesting compared with baseline for each of the four scenarios (Table 3).

Table 3:
Overall Performance Scores at Baseline and Retestinga

The overall mean performance score across the four scenarios dropped by an average of 1.6% for every additional 2 months of elapsed time from the initial SBML session, after adjustment for baseline performance scores. However, there was no significant difference in the overall mean performance scores at the time of retesting between the three study groups (2-month group retesting score: mean, 83.6% [SD, 7.7]; median, 83% [range, 33–100]; 4-month group retesting score: mean, 82.1% [SD, 5.8]; median, 83% [range, 43–100]; and 6-month group retesting score: mean, 80.5% [SD, 7.8]; median, 81% [range, 33–100]; p = 0.21). There was also no significant difference in skill retention on any of the 25 individual performance metrics across the three study groups at the time of retesting (p = 0.11–0.98).

Retention of mastery-level performance at the time of retesting was significantly different between the three study groups (Fig. 2). In the 2-month group, 92% of subjects (11/12) maintained mastery-level performance; in the 4-month group, 71% (10/14) maintained mastery-level performance; and in the 6-month group, 56% (9/16) maintained mastery-level performance (p = 0.04). The OR for failure to achieve mastery in the 4-month group versus the 2-month group was 2.8 (95% CI, 1.1–7.7). The OR for failure to achieve mastery in the 6-month group versus the 2-month group was 8.1 (95% CI, 1.1–12.3). There was no difference in mastery-level performance retention between PGY-1, PGY-2, and PGY-3 groups at the time of retesting (p = 0.14).

Figure 2:
Total percent of residents maintaining mastery-level performance at each of the three retesting intervals. The percentage of residents retaining mastery-level performance showed a significant decline across the three retesting intervals (p = 0.04).

DISCUSSION

After participation in a single 1- to 2-hour SBML session, residents in our study displayed significant improvements in resuscitation performance compared with baseline testing. However, retention of mastery-level performance declined as a function of time, with greater than 90% performing at mastery level at 2 months but less than 60% performing at mastery level at 6 months. The OR for failure to achieve mastery-level performance increased as the length of time between retesting intervals increased. There was no difference in performance retention between different PGY groups. The lack of differences in skill retention on any individual performance metric suggests that no single key performance metric drove the differences in mastery-level performance retention among the groups.

Our results of improved resident resuscitation performance after simulation-based training are consistent with prior reports and further confirm the strength of simulation-based learning interventions. In a study by Nadel et al (4), residents who participated in a structured resuscitation curriculum that included simulated code scenarios performed better compared with residents who did not receive simulation training. Sawyer et al (5) showed that participation in simulation-based deliberate practice in neonatal resuscitation was effective at improving pediatric resident NRP performance. Donoghue et al (6) reported that simulation training improved performance by pediatric residents in PALS and recommended studies to investigate performance and knowledge decays over time.

Retention of resuscitation performance after a simulation-based educational session is an area of limited investigation. In a study by Kaczorowski et al (19), residents’ retention of knowledge and skill was evaluated 6–8 months after participation in a simulation-based NRP training course and was found to have deteriorated significantly despite “booster” training at 3–5 months. Patel et al (20) also showed that pediatric resident neonatal resuscitation knowledge and skills deteriorated shortly after NRP training. In that study, performance on a knowledge-based test was retained longer than performance in a simulation scenario. The authors concluded that discrepancies between knowledge and skill retention indicate that proficiency in one does not necessarily indicate proficiency in the other. An important point to consider is that these prior reports did not use an SBML strategy, and retention of performance may be influenced by the type of simulation-based intervention provided.

In the current study, residents underwent a 1- to 2-hour SBML session. After this single session, retention of mastery-level performance declined significantly within 6 months. The significant decline indicates that residents failed to demonstrate performance to the standard they had previously achieved and that retraining may be needed. Our results suggest that a retraining session is indicated before 6 months after a single 1- to 2-hour SBML session in order to maintain mastery-level performance in a large percentage of residents. In a prior study by Wayne et al (13), Internal Medicine resident skills in Advanced Cardiac Life Support (ACLS) were evaluated after four 2-hour SBML sessions (8 total hours). In that study, prospective follow-up out to 14 months showed that resident performance in ACLS did not significantly decay. In a more recent study by Moazed et al (14), Internal Medicine residents who participated in an intensive 3-day SBML “boot camp” were also found to retain performance in ICU skills for up to 12 months. Comparing our results with those of Wayne et al (13) and Moazed et al (14), one could conclude that retraining sessions may be required less frequently if the initial SBML intervention is more intense and of longer duration. The differences in the duration of retention between prior SBML studies and the current study may suggest a dose-response relationship between the SBML intervention and the duration of the learning effect, such that the duration of performance retention is directly related to the intensity and/or duration of the initial SBML intervention. However, additional research on performance retention after varying intensities or durations of SBML is needed to test that hypothesis. Currently, there are insufficient data to accurately predict how long mastery-level performance retention can be anticipated after a specific intensity or duration of SBML instruction. Additionally, individual differences in baseline performance and experience may make it difficult to establish a universal retraining interval.

Our study has some limitations. Recruitment for the study was lower than desired. The suboptimal recruitment may have been an artifact of the voluntary nature of the investigation. This may introduce selection bias and decrease the generalizability of our results. Prior investigations in which residents were required to participate in SBML have shown much higher recruitment (13, 14). As a result of suboptimal enrollment, we did not enroll the required number of subjects to meet our a priori sample size. A major limitation is the use of nonblinded performance evaluations. We chose this strategy for the SBML session because performance assessment was required in real time during the mastery-learning session and could not be assessed by blinded video review. Ideally, blinded video review could have been used to score the retest sessions. However, we were unable to reliably obtain video at all study sites and thus could not employ that strategy. We attempted to limit bias in scoring by the nonblinded evaluators as much as possible by using scoring criteria based on easily observed and time-driven markers, which are at less risk for scoring bias. An additional limitation is that the team involved in the resuscitations was very small (two people), which may not reflect clinical practice in the majority of resuscitations. We also did not specifically train or measure teamwork skills, which are critical in resuscitating sick newborns and children. Finally, this is a T1-level study (results achieved in the educational laboratory) (21). No data are presented regarding the impact on patient care practices (T2) or patient and public health (T3). Further research is required to investigate the effect of this SBML session on T2 and T3 outcomes.

CONCLUSIONS

After participation in a single 1- to 2-hour SBML session, residents in our study displayed significant improvements in resuscitation performance. Retention of mastery-level performance declined as a function of time, with greater than 90% performing at mastery level at 2 months but less than 60% performing at mastery level at 6 months. There was no difference in performance retention between different PGY groups. Our results indicate that the greatest number of residents maintained mastery-level performance at the shortest retesting interval (2 mo) and suggest that a retraining interval of less than 6 months may be needed for residents to maintain mastery-level performance after a single 1- to 2-hour SBML intervention, as described here. However, based on results from prior investigations in this area, a longer retraining interval may be expected after a more intensive and longer duration SBML session. Additional investigation is required in this area to relate the duration of performance retention to the amount and/or intensity of initial SBML instruction.

REFERENCES

1. Grant EC, Marczinski CA, Menon K. Using pediatric advanced life support in pediatric residency training: Does the curriculum need resuscitation? Pediatr Crit Care Med. 2007;8:433–439
2. Nadel FM, Lavelle JM, Fein JA, et al. Assessing pediatric senior residents’ training in resuscitation: Fund of knowledge, technical skills, and perception of confidence. Pediatr Emerg Care. 2000;16:73–76
3. Nguyen H, Daniel-Underwood L, Ginkel C, et al. An educational course including medical simulation for early goal-directed therapy and the severe sepsis resuscitation bundle: An evaluation for medical student training. Resuscitation. 2009;80:674–679
4. Nadel FM, Lavelle JM, Fein JA, et al. Teaching resuscitation to pediatric residents: The effects of an intervention. Arch Pediatr Adolesc Med. 2000;154:1049–1054
5. Sawyer T, Sierocka-Castaneda A, Chan D, et al. Deliberate practice using simulation improves neonatal resuscitation performance. Simul Healthc. 2011;6:327–336
6. Donoghue AJ, Durbin DR, Nadel FM, et al. Effect of high-fidelity simulation on Pediatric Advanced Life Support training in pediatric house staff: A randomized trial. Pediatr Emerg Care. 2009;25:139–144
7. Wayne DB, Butter J, Siddall VJ, et al. Mastery learning of advanced cardiac life support skills by internal medicine residents using simulation technology and deliberate practice. J Gen Intern Med. 2006;21:251–256
8. McGaghie WC. Research opportunities in simulation-based medical education using deliberate practice. Acad Emerg Med. 2008;15:995–1001
9. Barsuk JH, McGaghie WC, Cohen ER, et al. Simulation-based mastery learning reduces complications during central venous catheter insertion in a medical intensive care unit. Crit Care Med. 2009;37:2697–2701
10. Barsuk JH, McGaghie WC, Cohen ER, et al. Use of simulation-based mastery learning to improve the quality of central venous catheter placement in a medical intensive care unit. J Hosp Med. 2009;4:397–403
11. Barsuk JH, Ahya SN, Cohen ER, et al. Mastery learning of temporary hemodialysis catheter insertion by nephrology fellows using simulation technology and deliberate practice. Am J Kidney Dis. 2009;54:70–76
12. Zendejas B, Cook DA, Bingener J, et al. Simulation-based mastery learning improves patient outcomes in laparoscopic inguinal hernia repair: A randomized controlled trial. Ann Surg. 2011;254:502–509; discussion 509–511
13. Wayne DB, Siddall VJ, Butter J, et al. A longitudinal study of internal medicine residents’ retention of advanced cardiac life support skills. Acad Med. 2006;81(10 Suppl):S9–S12
14. Moazed F, Cohen ER, Furiasse N, et al. Retention of critical care skills after simulation-based mastery learning. J Grad Med Educ. 2013;5:458–463
15. Donoghue A, Nishisaki A, Sutton R, et al. Reliability and validity of a scoring instrument for clinical performance during Pediatric Advanced Life Support simulation scenarios. Resuscitation. 2010;81:331–336
16. Downing SM. Validity: On meaningful interpretation of assessment data. Med Educ. 2003;37:830–837
17. Angoff WH. Scales, norms, and equivalent scores. In: Thorndike RL, ed. Educational Measurement. Second Edition. Washington, DC: American Council on Education; 1971:508–600
18. Downing SM, Tekian A, Yudkowsky R. Procedures for establishing defensible absolute passing scores on performance examinations in health professions education. Teach Learn Med. 2006;18:50–57
19. Kaczorowski J, Levitt C, Hammond M, et al. Retention of neonatal resuscitation skills and knowledge: A randomized controlled trial. Fam Med. 1998;30:705–711
20. Patel J, Posencheg M, Ades A. Proficiency and retention of neonatal resuscitation skills by pediatric residents. Pediatrics. 2012;130:515–521
21. McGaghie WC, Draycott TJ, Dunn WF, et al. Evaluating the impact of simulation on translational patient outcomes. Simul Healthc. 2011;6(Suppl):S42–S47
Keywords:

degradation; mastery learning; resuscitation; retention; simulation

© 2015 The Society of Critical Care Medicine and the World Federation of Pediatric Intensive and Critical Care Societies