The Accreditation Council for Graduate Medical Education mandates that pediatric residency programs provide “sufficient training” for trainees to develop competency in 16 procedures.1 This mandate does not specify how to conduct procedural skills training or assess competency. Pediatric skills training has traditionally employed the apprenticeship model of “see one, do one, teach one” at the patient's bedside.2 This paradigm of skill development is neither educationally sound nor safe for patients. Parents are not comfortable with trainees learning a procedure on their child; however, they would allow novice trainees to perform procedures after achieving mastery on a simulator.3 A growing number of training programs are using simulation-based education for procedural training. Simulation-based interventions are well received by trainees and faculty, but they have not been systematically evaluated for their effect on pediatric patient outcomes.4,5
Deliberate practice is a key feature of simulation-based interventions for learning complex procedures.4 Conroy6 recently reported that interns who participated in deliberate practice on adult lumbar puncture (LP) task trainers with focused feedback developed high levels of skill and retained these skills on the simulator at 3 to 6 months. Deliberate practice involves setting specific goals, providing immediate feedback, and concentrating as much on technique as on outcomes. During deliberate practice on a simulator, the motivated learner tries with a clearly defined goal in mind, fails, and adapts with each repeated cycle of practice. Learners evaluate their own performance and receive formative feedback on their specific deficits from a more experienced coach.7,8 Training is tailored to the learner's needs in this iterative process of practice, feedback, and correction, which is necessary to achieve expert skill performance in the clinical arena.7,8 Deliberate practice is a key feature of the mastery learning model, in which training continues until a minimal passing score is achieved.9
Today's medical school graduates are inexperienced and lack confidence in common procedures such as LP.2,10–12 At the start of training, less than one-third of new emergency medicine interns successfully obtained fluid when tested on an adult LP bench top model.13 In two recent studies of simulation-based training interventions, approximately 60% of new pediatric interns successfully obtained fluid when tested on an infant LP bench top model at the start of the year.14,15 Both studies demonstrated improvements in the number of correct steps and improved success rates when participants were retested on the simulator at 6 months. Unfortunately, neither of these interventions resulted in improvements in real patient performance compared with controls who did not participate.14,15
One possible reason for the lack of clinical difference seen in the latter studies was the absence of a coached deliberate practice session for the learners. In addition, neither study set a standard of mastery for learners to achieve in order to “pass” the training portion, so it is possible that learning objectives were not met in all cases. We hypothesized that mastery training with coached deliberate practice on an infant LP simulator, combined with audiovisual (AV) training, would improve clinical success rates with the procedure.
This was a randomized trial of a simulation-based deliberate practice intervention for teaching infant LP skills to pediatric residents in an urban tertiary care Department of Pediatrics. The local institutional review board approved this study. Following informed consent at the start of the year, residents were randomized within class year using a random number generator to participate in either intervention or control groups (see Fig. 1).
At the start of the study, questionnaires were completed to assess (1) prior training in and (2) experience with the infant LP procedure, (3) knowledge (using a 6-item quiz), and (4) confidence with the procedure (using a 4-point Likert scale). Individuals from both groups were then videotaped performing an LP on a Laerdal Baby Stap neonatal task trainer (Laerdal, Wappingers Falls, NY). The LP simulator is a plastic model that simulates the feel of an infant back, with palpable spinous processes and interspaces, and allows fluid collection if a spinal needle is inserted correctly. Performance on this objective structured clinical examination (OSCE) was scored by the instructors (M.A., D.K., and J.F.), live, on the day of training, and later by video review by an author blinded to group assignment (M.T.) using a 15-item checklist of critical steps developed for this study.
The infant LP checklist used in this study was adapted from the validated 26-item adult LP checklist published by Lammers et al as well as published texts.13,16,17 The original checklist was reviewed by a group of 14 board-certified practicing pediatric emergency medicine physicians using a modified Delphi technique, which resulted in the 15-step dichotomous checklist of critical steps used for this study. Because of limitations of the infant LP model and differences in the infant LP procedure, we removed 14 steps from the Lammers checklist, related to positioning (2), cleaning technique (1), local anesthetic injection (2), feeling a pop (1), bevel orientation (1), correcting for obstruction (1), stylet maneuvers (1), and manometry (5).12 Items on sterility (2) and sharps disposal (1) were added to the list, along with the option of early stylet removal as a needle entry technique. The checklist was successfully pilot tested on four pediatric emergency medicine fellows, with no modifications necessary.
Participants in both groups viewed an audiovisual presentation on the LP procedure. This 16-minute presentation included the NEJM LP Procedure in Clinical Medicine video18 and an additional segment developed by us to demonstrate infant-specific features of the procedure. The content of the video included indications, contraindications, equipment, positioning, landmarks, preparation, key steps (including the 15 steps on the checklist, the early stylet removal technique, and use of local anesthesia and analgesia), and complications.
Participants in the intervention group then took part in a hands-on, simulation-based coached deliberate practice session in addition to the AV presentation. This one-on-one session consisted of deliberate practice on the infant LP simulator while a faculty member coached them through the 15 critical steps of the LP procedure. Three faculty coaches were trained to ensure consistency of teaching and assessment methods. Action-oriented coaching was the primary training technique, emphasizing the time on task that trainees spent practicing the steps of the procedure. Trainees were required to master each step before moving on to the next, so each session was tailored to the individual learner's needs. Trainees continued with repetitive deliberate practice until they could demonstrate all critical steps flawlessly and independently from start to finish on the simulator. Simulator sessions lasted between 5 and 20 minutes depending on the learner. Postintervention questionnaires were administered to both intervention and control groups to measure knowledge and confidence at the end of the training session.
After the initial training, all participants were instructed to provide data at the time of their first LP performed on an infant (less than 12 months of age) via an online data entry tool. Monthly email reminders were sent to help ensure capture of all clinical encounters. Data collected included whether cerebrospinal fluid (CSF) was obtained, cell counts, fluid appearance, number of attempts, age of the patient, clinical environment where the procedure was performed, and whether a prior attempt had been made by another physician. Participants also answered questions measuring confidence and attitudes about their LP training experience after they had performed a clinical patient LP. Success was defined as obtaining an adequate CSF sample that was analyzed by the laboratory. The number of traumatic LPs (red blood cells >1000) was also reported. Underreporting of clinical events was assessed at the 6-month follow-up by asking all participants how many infant LPs they attempted during the year.
Simulator Outcome (OSCE)
After 6 to 8 months, both groups returned for a final OSCE using the simulator. Assessments of knowledge and confidence were repeated at this time. The same skills checklist used at study entry was used to assess retention of skills at 6 months. After the data were collected, the AV was repeated in both groups and the control group received the coached experiential simulator training.
We describe the participant characteristics using descriptive statistics. Binary clinical outcomes were compared using the Fisher exact test. Skills on the simulator were compared at 6 to 8 months using the Mann-Whitney U test for the overall score on the 15-item checklist and the Fisher exact test for comparison of individual items. Bonferroni correction was applied to all subanalyses. Knowledge and confidence levels were compared between groups using the Mann-Whitney U test and χ2 analysis. Changes from baseline to 6 to 8 months were compared between groups using analysis of covariance.
We powered the study to detect a 25% difference in success rates at obtaining fluid between groups. We estimated that a sample size of 56 residents in each group would provide 80% power to detect this difference at an alpha level of 5% (G*Power for Mac, version 3.0.10). A planned interim analysis revealed a significant difference at a P value of less than 0.01. The study was therefore ended early, and our residency education leadership determined that all interns should be exposed to the simulation-based intervention in the coming year rather than continuing to randomize incoming interns (and thereby withholding the intervention from 50% of them at the start of the year).
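The reported sample size is consistent with the standard normal-approximation formula for comparing two proportions. As a rough check (a minimal sketch; the baseline and intervention success rates below, 50% vs. 75%, are our illustrative assumptions, since the study specifies only the 25% absolute difference):

```python
from math import ceil

def n_per_group(p1, p2):
    """Sample size per group for a two-sided test of two proportions
    at alpha = 0.05 with 80% power (normal approximation)."""
    z_alpha, z_beta = 1.95996, 0.84162  # z for alpha/2 = 0.025 and power = 0.80
    pooled_var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * pooled_var / (p2 - p1) ** 2)

# Assumed proportions: 50% baseline success vs. 75% with training
print(n_per_group(0.50, 0.75))  # ~55 per group
```

The approximation yields about 55 residents per group, close to the 56 reported; G*Power's exact routines can differ by a subject or two from the normal approximation.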
Fifty-six pediatric residents were randomized within postgraduate year to control or intervention groups, and 51 residents completed the study (Fig. 1). Thirty-two residents [15 (58%) control and 17 (65%) intervention] completed an LP on an infant after the initial training and were analyzed for the clinical outcome. Data were collected only for the first LP performed after training. Demographics and baseline characteristics were similar in both groups (Table 1).
Sixteen of 17 subjects (94%) in the intervention group who performed a clinical infant LP successfully obtained CSF compared with 7 of 15 subjects (47%) in the control group (absolute risk reduction = 47%; 95% CI, 16%–70%; P = 0.005). The number needed to teach is two to achieve one additional successful LP (95% CI, 1–6). Twenty-five percent of LPs in the intervention group were traumatic (>1000 red blood cells) compared with 43% in the control group, a difference that was not statistically significant. Groups were similar with regard to training level, hospital location where the procedure was done, number of attempts taken, and whether another provider had made previous attempts at an LP in the last 24 hours (Table 2). The timing and distribution of clinical events were similar between groups. The intervention group had a median delay of 56 days between initial training and first clinical encounter compared with 52 days in the control group.
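The primary comparison can be reproduced directly from the 2x2 table above. A minimal sketch of the two-sided Fisher exact test, the absolute risk reduction, and the number needed to teach:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables as or more extreme than observed."""
    n, row1, col1 = a + b + c + d, a + b, a + c

    def hyper(x):  # hypergeometric P(x of the col-1 outcomes fall in row 1)
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    p_obs = hyper(a)
    return sum(p for p in (hyper(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Observed table: intervention 16 successes / 1 failure; control 7 / 8
arr = 16 / 17 - 7 / 15          # absolute difference in success rates
nnt = 1 / arr                   # "number needed to teach"
p_value = fisher_exact_two_sided(16, 1, 7, 8)
print(round(arr, 2), round(nnt, 1), round(p_value, 3))  # 0.47 2.1 0.005
```

The p-value matches the reported 0.005, and the absolute difference of 47% gives a number needed to teach of 2.1, which rounds to the two residents per additional successful LP reported above.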
Those who did not report a clinical outcome (n = 20) were asked at the 6-month follow-up whether they had performed an infant LP during the year. Four subjects in the intervention group and five subjects in the control group acknowledged a clinical event they had not reported, bringing the true event rate from 65% to 81% in the intervention group and from 58% to 77% in the control group. Sensitivity analysis shows that if the unreported LPs are counted as failures for both groups, then 16/21 (76%) in the intervention group versus 7/20 (35%) in the control group were successful in obtaining CSF (difference = 41%; 95% CI, 11%–63%). If unreported LP attempts in the control group are instead counted as successful, the success rate in the control group would increase to 12/20 (60%).
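The worst-case arm of this sensitivity analysis is simple arithmetic on the counts above, sketched here for clarity:

```python
# Worst-case sensitivity analysis: count every unreported LP as a failure.
# Reported: intervention 16/17 success, control 7/15; unreported: 4 and 5.
int_succ, int_total = 16, 17 + 4
ctl_succ, ctl_total = 7, 15 + 5
diff = int_succ / int_total - ctl_succ / ctl_total
print(f"{int_succ}/{int_total} = {int_succ/int_total:.0%} vs "
      f"{ctl_succ}/{ctl_total} = {ctl_succ/ctl_total:.0%}; diff = {diff:.0%}")
# 16/21 = 76% vs 7/20 = 35%; diff = 41%
```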
There was no significant difference between the intervention and control groups in the initial OSCE performance or the 6 to 8 month OSCE performance (Table 3). Both groups improved over time, but this improvement did not differ statistically between the groups (analysis of covariance). The timing and distribution of both the initial and follow-up training and assessment were similar between groups. During the year, both groups had similar exposure to didactics, observations of LPs, and total number of LPs performed on infants.
The 15-item critical skills checklist scored live at the time of the OSCE assessment demonstrated good internal consistency (Cronbach alpha 0.83). Sixty percent of the initial performances were reviewed on videotape by a blinded author (M.T.). Inter-rater reliability could not be calculated for all subitems on the checklist because certain subitems could not be evaluated on video review. Inter-rater reliability was high for overall success (kappa 1.0).
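For readers unfamiliar with the internal-consistency statistic, Cronbach's alpha compares the sum of the individual item variances with the variance of the total score. A minimal sketch with hypothetical data (the study's actual checklist had 15 items; the scores below are invented for illustration):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for dichotomous checklist items.
    `items` is a list of item columns: items[i][j] = score of subject j on item i."""
    k = len(items)

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[j] for col in items) for j in range(len(items[0]))]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Hypothetical pass/fail scores for 4 subjects on 3 checklist items
print(round(cronbach_alpha([[1, 1, 0, 1], [1, 0, 0, 1], [1, 1, 0, 0]]), 2))
```

Values near 1 indicate that the checklist items rise and fall together across subjects; 0.83 on the 15-item checklist is conventionally considered good internal consistency.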
Scores on the 6-item knowledge quiz were high in both groups (5 control vs. 4.9 intervention) at baseline and improved after training (5.5 control vs. 5.5 intervention). After 6 to 8 months, knowledge was retained in both groups (5.4 control vs. 5.2 intervention).
Baseline confidence (measured as agreement with the statement “I feel confident at infant lumbar punctures” on a 4-point Likert scale: 1 = strongly disagree, 2 = disagree, 3 = agree, 4 = strongly agree) was fair in both groups (2.8 control vs. 2.7 intervention) and improved immediately after training (3.2 control vs. 3.4 intervention). After 6 to 8 months, confidence remained high (3.2 control vs. 3.4 intervention). There were no statistical differences between groups for knowledge or confidence at any point at which they were measured.
Participants in our simulation-based deliberate practice training reported higher rates of clinical success on their first infant LP compared with controls. For every two residents taught, one additional LP was done successfully. This difference was independent of training level and of the setting where the procedure took place. These results support our hypothesis that simulation-based deliberate practice training improves LP procedural skills among pediatric residents. Trainees in this study, including senior residents, had a low level of experience and skill with infant LP. This suggests that the traditional apprenticeship model is inadequate for building expertise in the infant LP procedure by the end of pediatric residency.
This is the first study we are aware of in pediatrics to demonstrate the translation of simulation-based skills training into improvements in providers' clinical performance. Our findings are consistent with a growing body of literature demonstrating that simulation training can improve patient care. Surgical simulation skills training has been associated with improved performance by subjects in the operating room in multiple studies.19 Internal medicine trainees who received targeted simulation skills training have had decreased errors and improved clinical performance with central line placement and advanced cardiac life support.9,20
In previous studies of infant LP educational interventions, a clinical difference has not been found. Kilbane et al15 compared a group of residents that received a 30-minute lecture to historical controls but did not detect any impact on clinical success rates. Gaies et al conducted a randomized trial of their intervention for pediatric interns that consisted of a simulation module given at the start of their emergency department rotation and did not find a difference in clinical success rates. The clinical end point of Gaies et al was based on multiple LPs per participant, whereas we only looked at the first LP. The module was designed to place an emphasis on repeated practice; however, it is not clear what the faculty to student ratio was and participants were not coached until they achieved a common minimal standard.14
In contrast to these studies, which relied largely on didactic methods, our intervention combined three distinct educational strategies: deliberate practice, action-oriented coaching, and mastery training on the simulator. In the mastery training model, some residents may enter the training with a high level of skill and require minimal practice to achieve the end point of mastery performance on the simulator. Novices, by contrast, may require multiple cycles of repetitive practice on the simulator before they are able to perform at the predefined level of mastery. Some participants in our study required five or six cycles of deliberate practice through some or all of the steps in the procedure before they were able to perform every step independently and successfully. In an article by Carraccio et al,21 the principles of the Dreyfus and Dreyfus model of skill development are applied to the development of physician competence. As learners progress through the stages (novice, advanced beginner, competent, proficient, expert, and master), they have different abilities and different needs. Formative assessment during simulation can help diagnose the learner's stage and allow the intervention to be tailored to their needs. This learner-centered approach optimizes the practice session and helps the learner progress toward mastery. Our trainers used action-oriented coaching to correct learners' mistakes and instructed them to repeat individual steps until mastery was achieved.
The participants in our study did not differ in OSCE performance on the simulator 6 to 8 months after training. Because all participants in the intervention group were trained to mastery, we did not see the utility in measuring skills immediately postintervention. We chose instead to measure skills at 6 months and in that way ascertain whether skills were retained. Studies on skill retention for other medical procedures, such as neonatal intubation, have demonstrated a significant deterioration of procedural skills by 4 months after simulator training.22,23
There are many possible reasons why a difference was seen clinically but not on the OSCE. First, we were underpowered to find a difference in OSCE success rates of less than 37%, and the differences seen between groups were much smaller than that. This was likely due, at least in part, to a ceiling effect, because median performance on the OSCE at 6 months was 100% in the intervention group and 93% in the control group. The fact that both groups improved over time may reflect additional LP training or experiences that improved their skills.
A major limitation of our study is that clinical performance was measured by self-report. The lack of true blinding of subjects to the interventions they received may have contributed to the large effects seen. To address this, we verified that at least 75% of all clinical events were captured in both groups. We also looked for differences in underreporting at the 6-month follow-up and found the rates of underreporting to be similar in both groups. Response rates were maximized through the use of a web-based data collection instrument and monthly reminder emails. Another limitation was the use of an OSCE checklist that was adapted from prior validated tools but has not itself been validated. Although the instrument demonstrated good internal consistency, limitations in the checklist could prevent us from detecting a difference in simulator-based performance when one actually exists. Finally, we chose to deliver our teaching intervention in a single training session based on faculty availability, but a more educationally sound approach would have been to distribute deliberate practice sessions over time to more effectively reinforce learning of this skill.24
There are multiple patient and provider factors that influence the likelihood of providers' clinical success at an infant LP. Some known predictors of success include practitioner experience, the use of analgesia, and early stylet removal.25 Other factors have not been validated in the literature but are thought to be clinically important, including the position of the child, the ability of the holder to properly immobilize the infant, and the experience level of the supervisor. Although we collected these data, our sample size did not provide adequate power to analyze all subgroups and confounders. Because of our small sample size (particularly of those reporting a clinical outcome), the 95% confidence intervals around our clinical difference are wide, ranging from 16% to 70%. Even at the lower limit of the confidence interval, this translates into training six residents to prevent one dry tap. The cost of the infant LP simulator used in this study is 437 U.S. dollars.25 A full cost analysis should account for faculty teaching time, supplies, and ongoing maintenance of the simulator, and will be the subject of future analysis. Future larger studies in varied settings are needed to validate this clinical effect and elucidate exactly which simulation factors are most helpful in promoting successful clinical outcomes. We are currently engaged in a multisite trial to test our hypothesis.
We believe that the positive impact of our intervention on clinical performance supports the use of simulation-based deliberate practice in the development of procedural skills. It can easily be performed at other institutions because of the short time requirements (5–20 minutes per learner), low cost, and the reproducibility of this teaching strategy.
This study suggests that dedicating time to simulation-based deliberate practice on an infant LP simulator after an audiovisual didactic improves the odds of obtaining CSF with the next clinical infant LP performed when compared with AV alone. This improvement seems to be independent of postgraduate year, knowledge level, confidence level, or OSCE performance on a simulator.
The authors acknowledge Rajasekhar Ramakrishnan, ScD (Director of the Division of Biomathematics/Biostatistics at Columbia University), for his contribution to this manuscript in reviewing the statistical design and analysis. The authors also thank the research assistants Karl Santiago and Amanda Krantz for their help with this study.
1. Accreditation Council for Graduate Medical Education. Residency Review Committee. Pediatrics 2007. Available at: http://acgme.org/acWebsite/downloads/RRC_progReq/320_pediatrics_07012007.pdf. Accessed January 2011.
2. Rodriguez-Paz J, Kennedy M, Salas E, et al. Beyond “see one, do one, teach one”: toward a different training paradigm. Qual Saf Health Care
3. Graber M, Wyatt C, Kasparek L, Xu Y. Does simulator training for medical students change patient opinions and attitudes toward medical student procedures in the emergency department? Acad Emerg Med
4. Issenberg S, Petrusa E, Lee Gordon D, Scalese R. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach
5. Scalese R, Obeso V, Issenberg S. Simulation technology for skills training and competency assessment in medical education. J Gen Intern Med
6. Conroy S. Competence and retention in performance of the lumbar puncture procedure in a task trainer model. Sim Healthcare
7. Ericsson K. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med
8. Ericsson K. Deliberate practice and acquisition of expert performance: a general overview. Acad Emerg Med
9. Wayne D, Feinglass J. Simulation-based education improves quality of care during cardiac arrest team responses at an academic teaching hospital: a case-control study. Chest
10. Coberly L, Goldenhar LM. Ready or not, here they come: acting interns' experience and perceived competency performing basic medical procedures. J Gen Intern Med
11. Hicks C, Morton M, Gibbons R, Wigton R, Anderson R. Procedural experience and comfort level in internal medicine trainees. Int Med
12. Huang G, Gordon C, Feller-Kopman D, Davis R, Phillips R, Weingart S. Beyond the comfort zone: residents assess their comfort performing inpatient medical procedures. Am J Med
13. Lammers R, Temple K, Wagner M, Ray D. Competence of new emergency medicine residents in the performance of lumbar punctures. Acad Emerg Med
14. Gaies M, Morris S, Hafler J, Sandora T. Reforming procedural skills training for pediatric residents: a randomized, interventional trial. Pediatrics
15. Kilbane B, Adler M, Trainor J. Pediatric residents' ability to perform a lumbar puncture: evaluation of an educational intervention. Pediatr Emerg Care
16. Kliegman R, Nelson W. Nelson Textbook of Pediatrics. 18th ed. Philadelphia, PA: Saunders; 2007.
17. Straus S, Thorpe K, Holroyd-Leduc J. How do I perform a lumbar puncture and analyze the results to diagnose bacterial meningitis? JAMA
18. Ellenby M, Tegtmeyer K, Lai S, Braner D. Videos in clinical medicine: lumbar puncture. N Engl J Med
19. Lynagh M. A systematic review of medical skills laboratory training: where to from here? Med Educ
20. Barsuk J, McGaghie W, Cohen E, O'Leary K, Wayne D. Simulation-based mastery learning reduces complications during central venous catheter insertion in a medical intensive care unit. Crit Care Med
21. Carraccio C, Benson B, Nixon L, Derstine P. From the educational bench to the clinical bedside: translating the Dreyfus developmental model to the learning of clinical skills. Acad Med
22. Curran V, Aziz K, O'Young S, Bessell C. Evaluation of the effect of a computerized training simulator (ANAKIN) on the retention of neonatal resuscitation skills. Teach Learn Med
23. Grober E, Hamstra S, Wanzel K, et al. Laboratory based training in urological microsurgery with bench model simulators: a randomised controlled trial evaluating the durability of technical skills. J Urol
24. Cepeda N, Pashler H, Rohrer D. Distributed practice in verbal recall tasks: a review and quantitative synthesis. Psychol Bull
25. Nigrovic L, Kuppermann N, Neuman M. Risk factors for traumatic or unsuccessful lumbar punctures in children. Ann Emerg Med