Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare
June 2010 - Volume 5 - Issue 3
doi: 10.1097/SIH.0b013e3181cca544
Empirical Investigations

Improved Fourth-Year Medical Student Clinical Decision-Making Performance as a Resuscitation Team Leader After a Simulation-Based Curriculum

Ten Eyck, Raymond P. MD, MPH; Tews, Matthew DO; Ballester, John M. MD; Hamilton, Glenn C. MD, MSM


Author Information

From the Department of Emergency Medicine (R.P.T.E., J.M.B., G.C.H.), Boonshoft School of Medicine, Wright State University, OH; and Department of Emergency Medicine (M.T.), Medical College of Wisconsin, WI.

Abstract presentation at poster session of the 2009 IMSH meeting, Orlando, FL.

Reprints: Raymond P. Ten Eyck, MD, MPH, Department of Emergency Medicine, 3525 Southern Blvd., Kettering, OH 45429 (e-mail: Raymond.teneyck@wright.edu).


Abstract

Objective: To determine the impact of simulation-based instruction on student performance in the role of emergency department resuscitation team leader.

Methods: A randomized, single-blinded, controlled study using an intention-to-treat analysis. Eighty-three fourth-year medical students enrolled in an emergency medicine clerkship were randomly allocated to two groups differing only by instructional format. Each student individually completed an initial simulation case, followed by a standardized curriculum of eight cases in either group simulation or case-based group discussion format, before a second individual simulation case. A remote coinvestigator measured eight objective performance end points using digital recordings of all individual simulation cases. McNemar χ2, Pearson correlation, repeated-measures multivariate analysis of variance, and follow-up analysis of variance were used for statistical evaluation.

Results: Sixty-eight students (82%) completed both initial and follow-up individual simulations. Eight students were lost from the simulation group and seven from the discussion group. The mean postintervention case performance was significantly better for the students allocated to simulation instruction than for the group discussion students in four outcomes: a decrease in mean time to (1) order an intravenous line; (2) initiate cardiac monitoring; (3) order initial laboratory tests; and (4) initiate blood pressure monitoring. Paired comparisons of each student’s initial and follow-up simulations demonstrated significant improvement in the same four areas, in mean time to order an abdominal radiograph, and in obtaining an allergy history.

Conclusions: A single simulation-based teaching session significantly improved student performance as a team leader. Additional simulation sessions provided further improvement compared with instruction provided in case-based group discussion format.

The defining element of simulation-based instruction is its incorporation of the actual practice of clinical skills in a modified clinical setting. This is a major departure from curricula presented in lecture or group discussion format. High-fidelity mannequin-based simulation offers realistic training in “an enhanced environment for experiential learning and reflective practice” to better prepare students for actual patient care.1 Based on experience in other outpatient settings, most medical students are proficient in the typical approach to the initial assessment and treatment of a stable patient. Ethical considerations, however, limit the extent to which medical students can practice certain roles,2 including the role of resuscitation team leader in the emergency department. The simulation laboratory provides a safe alternative setting in which students can enhance their learning with deliberate practice of the skills required to effectively evaluate and treat an acutely ill patient.

We chose fourth-year medical students for this study because most of them lack the skills needed to lead a resuscitation team; however, as interns, any one of them may be called on to function in this role. We believe that an Emergency Medicine clerkship is an appropriate place to practice these skills. The Department of Emergency Medicine, Wright State University, Boonshoft School of Medicine has a well-developed series of case studies for the required fourth-year medical student clerkship core curriculum. The traditional format for presenting this core curriculum had been case-based group discussion, which stressed the difference in approach to an acutely ill patient in the emergency department. That format required students to verbalize all activities, including the application of basic patient assessment skills, requests for diagnostic tests, analysis of data, formulation of a differential diagnosis, and completion of appropriate therapeutic interventions. Converting the cases to a simulation environment added multitasking, team communication, situational awareness, and application of psychomotor skills to the student competencies required to manage each case.3

Previous studies have demonstrated the positive impact of high-fidelity simulation-based educational sessions on clinical skills4–10 and team performance.11–13 The results of simulation training for medical students have varied, with demonstrated benefit in some14–17 but not all18,19 studies. The studies not showing a benefit from simulation-based instruction used assessment tools that evaluated a combination of knowledge-based and management end points.

The primary objective of our study was to compare simulation-based instruction with group discussion to determine the impact on student performance in the role of emergency department resuscitation team leader. The leadership role was measured by the students’ ability to accomplish, or direct other team members to accomplish, eight essential functions in the initial evaluation of a simulated seriously ill emergency department patient. The secondary objective was to evaluate the degree of improvement in team leader skills after a single simulation-based experience, to assess the impact achievable with a limited simulation program.


METHODS

Study Population

All fourth-year medical students completing the mandatory Emergency Medicine clerkship offered through the Department of Emergency Medicine, Boonshoft School of Medicine between August 1, 2007, and April 30, 2008, were eligible for inclusion. The December 2007 cohort was not offered an opportunity to participate because of scheduling problems that surfaced before the start of the month and precluded completion of both the initial and follow-up individual simulations. All other eligible students chose to participate in the study. The sample size needed to detect a large or medium effect size with a power of 0.8 was calculated to be 25 and 64 subjects per group, respectively.20 We anticipated a large effect in most of the measured end points based on our prior simulation experience with fourth-year medical students. The maximum potential sample size was 83 subjects, based on the number of students in the class of 2008 scheduled for the clerkship during the inclusive dates after exclusion of the December cohort. The study was conducted in the Department of Emergency Medicine’s Center for Immersive Medical Education and Research.
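These per-group estimates follow from Lehr's "sixteen s-squared over d-squared" approximation for 80% power at a two-sided α of 0.05.20 As a worked check (our illustration; the calculation is not shown in this form in the paper):

\[
n \approx \frac{16\,s^{2}}{d^{2}} = \frac{16}{\Delta^{2}}, \qquad \Delta = \frac{d}{s}\ \text{(standardized effect size)}
\]

Large effect, \(\Delta = 0.8\): \(n \approx 16/0.64 = 25\) per group; medium effect, \(\Delta = 0.5\): \(n \approx 16/0.25 = 64\) per group.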

Study Design

This was a randomized, controlled, single-blinded study using an intention-to-treat analysis, approved as an expedited protocol by the Wright State University Institutional Review Board. Each month’s student cohort was randomly allocated to two groups of four to six students. An informed consent document was signed by each subject before participation in the study. Following an orientation to the simulation laboratory and to the high-fidelity simulation model (Laerdal SimMan, Stavanger, Norway), all students completed an initial individual simulation case in which each student served as the team leader directing two additional team members. In all instances, the team consisted of a simulation center staff member and a senior resident. The team members were instructed to perform all tasks requested by the students to the best of their abilities but not to suggest any evaluation or treatment actions.

Each group subsequently completed the same standardized curriculum, consisting of eight cases, in one of two teaching formats: group A completed the cases in simulation format and group B in case-based group discussion format. Nine to 12 days after the initial individual simulation, all students completed a follow-up individual simulation case. The order of the two individual cases was randomly determined each month, with one case assigned as the preintervention simulation and the other as the postintervention simulation. To ensure that all students had equal exposure to the simulation format, which we had hypothesized to be superior, we used a controlled evaluation design that allowed the study to proceed without denying the curriculum to the control group.21 The simulation-based curriculum was presented to the control group during the last two class sessions of the month, after completion of their follow-up individual simulation.

At the end of each month, digital recordings of all individual simulations were sent to a remote coinvestigator blinded to group assignment and to the order of presentation of the individual cases. He reviewed each individual simulation, recorded the predetermined performance data points, and sent the results back to the primary investigator, who transferred the data to the appropriate group (simulation or group discussion) and category (initial or follow-up). Measured outcomes included (1) performance during the second individual case for group A students compared with group B students and (2) change in each student’s performance using paired comparisons of the initial and follow-up individual simulation cases. A 10% sample of the recordings was randomly selected for review by a second observer to verify the reliability of the scoring rubric (Fig. 1).
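A minimal sketch of this monthly allocation as we read it (illustrative only; the names are hypothetical and the exact randomization mechanics were not reported):

```python
import random

def allocate_month(students, cases=("bowel obstruction", "leaking AAA")):
    """Illustrative monthly allocation: split one cohort into the two
    instruction formats and randomize the order of the individual cases."""
    students = list(students)
    random.shuffle(students)
    half = len(students) // 2
    group_a = students[:half]   # group simulation format
    group_b = students[half:]   # case-based group discussion format
    pre_case, post_case = random.sample(cases, 2)  # pre- vs postintervention case
    return group_a, group_b, pre_case, post_case
```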

[Figure 1]
Individual Simulation Sessions

Both individual simulations consisted of preprogrammed cases involving an older man with abdominal pain. In one case, the patient had a complete bowel obstruction complicated by dehydration and hypokalemia. In the other individual case, the patient had a leaking abdominal aortic aneurysm. The order of the two cases was randomly assigned before the start of each month. All individual simulation sessions were conducted by the same faculty member.

Group Simulation and Case-Based Discussion Sessions

Sets of four simulation cases were written for each of the core curriculum topics, and two of the sets (chest pain and altered mental status) were used in the group instruction portion of this study. Parallel case presentation scripts for each topic were used in the case-based group discussion format. Sets of PowerPoint (Microsoft, Redmond, WA) slides were developed for each topic and were presented by the instructor after completion of each case in both the simulation and the group discussion formats. Both groups completed each of the two sets of cases at the same time, with the teaching format determined by their randomized group assignments. All group sessions were chief-complaint based and lasted approximately 3 hours for both groups, and each of the four cases presented per session represented a separate diagnosis. The same faculty member operated the simulator for all the simulation sessions, with assistance from the third-year emergency medicine resident assigned as teaching resident each month. The resident played the role of a consultant available to the students if they needed guidance with a technical procedure (eg, placing a chest tube) and when they wanted to make a disposition on their patient. The group discussion sessions were all presented by the emergency medicine full-time faculty member assigned as teaching faculty each month or by the third-year teaching resident. Each instructor was given a copy of the learning objectives, the cases, and the PowerPoint slide sets.

Students performed the simulations in their randomly allocated groups. The role of team leader rotated between cases during the group simulation sessions, and the remaining students served in support roles. Each student in the simulation group participated in all eight simulated cases and served as team leader for one or two of them; the majority served as team leader twice. A team approach to assessment and treatment of the simulated patients was encouraged during these sessions. The students were informed that they could offer suggestions to the team leader, and the team leader was encouraged to ask for suggestions from team members as needed; however, all final decisions were made by the team leader. Teams were instructed to evaluate and treat the simulated patients to the point of disposition. At the conclusion of each simulation session, the team of students moved to an adjacent classroom for a debriefing, which was initiated by asking the team leader to share his or her opinion of how the team performed in the areas of clinical, technical, and teamwork skills. The team members were then asked for any additional thoughts in the same areas. Each debriefing concluded with a review of the slides covering core curricular items for the topic, which primarily emphasized task-based issues.

Students in the discussion group completed the case-based discussions in groups facilitated by a faculty preceptor. The cases were presented in a real-time problem-solving format, starting with the information included in the triage note of the corresponding simulated patient. The faculty preceptor guided the discussion by asking the students questions regarding assessment and treatment, by role-playing to respond to questions asked of the patient, and by providing information regarding the findings on evaluation and the results of treatment. After completion of each case, the preceptor reviewed core curricular issues, including the unique aspects of approaching a patient with an acute serious illness in the emergency department. The specific considerations for each topic were reviewed using the same slide sets incorporated into the simulation debriefings.

Development of the Data Collection Instrument

The initial data sheet reflected all possible activities the students were likely to complete during the evaluation and treatment of each individual simulation case. The authors then selected clear, measurable end points for all the listed actions. Prior experience with fourth-year medical students suggested that the greatest impact of simulation was improved efficiency in the initial evaluation of seriously ill simulated patients. Using a consensus process, eight actions that measured the efficiency of the initial evaluation and intervention were selected as end points for the final assessment tool. Seven actions involved continuous dependent variables, either the time to initiate a particular action or the number of items completed. One action involved the discrete dependent variable of asking about allergies before administering any medications.
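A minimal sketch of the resulting record, one per individual simulation, with the eight end points as they are reported in the Results (the field names are our own):

```python
from dataclasses import dataclass

@dataclass
class CasePerformance:
    """One student's scored performance on one individual simulation case."""
    t_iv_order_s: float          # time (s) to order an intravenous line
    t_cardiac_monitor_s: float   # time (s) to initiate cardiac monitoring
    t_bp_monitor_s: float        # time (s) to initiate blood pressure monitoring
    t_labs_order_s: float        # time (s) to order initial laboratory tests
    t_abd_xray_s: float          # time (s) to order an abdominal radiograph
    n_history_items: int         # number of history items elicited
    n_exam_items: int            # number of physical examination items completed
    asked_allergies: bool        # asked about allergies before any medication
```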

Statistical Analysis

Discrete data were analyzed using the McNemar χ2 test. Continuous data were analyzed with repeated-measures multivariate analysis of variance, with follow-up analysis of variance to determine significant main effects and to evaluate for differences between groups in the initial simulation. We analyzed the postintervention simulations using an analysis of covariance with factors including group and a continuous variable equal to the corresponding initial simulation outcome. We considered differences in performance with P ≤ 0.05, corrected with the sequentially rejective Bonferroni method for multiple tests, to be significant. Interrater reliability was assessed with Pearson correlations using Excel (Microsoft, Redmond, WA). All other analyses were performed using SAS version 9.2 (SAS Institute, Cary, NC).
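The sequentially rejective Bonferroni correction is commonly implemented as the Holm step-down procedure; the sketch below is a generic illustration of that procedure with made-up p-values, not the authors' SAS code:

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Sequentially rejective Bonferroni (Holm) procedure.

    Compare the i-th smallest of m p-values (0-indexed) against
    alpha / (m - i) and stop at the first failure; earlier rejections stand.
    Returns reject/retain flags in the original test order.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, idx in enumerate(order):
        if p_values[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # every larger p-value is also retained
    return reject

# Eight illustrative p-values, one per measured end point (not study data):
print(holm_bonferroni([0.001, 0.004, 0.030, 0.200, 0.010, 0.500, 0.040, 0.008]))
```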


RESULTS

Recordings of both initial and follow-up individual simulations were obtained for 68 students (82%). Eight students were lost from the simulation group and seven from the discussion group, due mainly to two weather-related class cancellations. The performance of these 15 students, as measured by their scores on the final written examination and the clerkship final grade, was not significantly different from that of the 68 students completing the study. The between-group comparison of mean performance during the follow-up individual simulation cases demonstrated significantly better performance after simulation instruction (group A) compared with case-based discussion instruction (group B) in four of the defined outcomes, the mean time (seconds) to (1) order an intravenous line—group A: 28.3 [95% confidence interval (CI): 9.7–46.8] and group B: 86.0 (95% CI: 67.7–104.4); (2) initiate cardiac monitoring—group A: 36.2 (95% CI: 20.6–51.9) and group B: 79.1 (95% CI: 63.4–94.8); (3) order initial laboratory tests—group A: 114.9 (95% CI: 75.0–154.8) and group B: 215.2 (95% CI: 175.5–254.8); and (4) initiate blood pressure monitoring—group A: 43.4 (95% CI: 21.5–65.2) and group B: 87.8 (95% CI: 66.2–109.4). There were no significant differences between the means of the remaining four components: (1) time (seconds) to order an abdominal radiographic study—group A: 255.4 (95% CI: 200.5–310.3) and group B: 332.0 (95% CI: 277.5–386.6); (2) number of history items elicited—group A: 7.2 (95% CI: 6.3–8.1) and group B: 8.0 (95% CI: 7.1–8.9); (3) number of physical examination items completed—group A: 3.5 (95% CI: 3.0–4.1) and group B: 3.7 (95% CI: 3.1–4.2); and (4) number of students asking about allergies to medications—group A: 9 and group B: 8. Graphic summaries of the timed actions and the counted actions for between-group comparisons are illustrated in Figures 2 and 3, respectively.

[Figure 2]

[Figure 3]

Paired results for a within-subject comparison of each individual student’s performance on initial and follow-up simulations (regardless of group assignment) demonstrated significant improvement in six of the eight clinical performance items evaluated, including the five timed events and the number of students asking the patient about allergies to medications. The measured outcomes that showed significant improvement included mean time (seconds) to (1) order an intravenous line—initial: 198.4 (95% CI: 173.9–222.9) and follow-up: 57.7 (95% CI: 43.2–72.2); (2) initiate cardiac monitoring—initial: 218.5 (95% CI: 171.9–265.0) and follow-up: 58.0 (95% CI: 45.8–70.3); (3) initiate blood pressure monitoring—initial: 179.4 (95% CI: 146.8–211.9) and follow-up: 67.4 (95% CI: 51.1–83.7); (4) order initial laboratory tests—initial: 338.9 (95% CI: 294.2–383.5) and follow-up: 168.2 (95% CI: 137.5–198.8); (5) order an abdominal radiographic study—initial: 409.7 (95% CI: 365.6–453.9) and follow-up: 295.2 (95% CI: 255.9–334.5); and (6) the number of students asking about allergies—initial: 11 and follow-up: 17. There were no significant differences between the means of the remaining two components: (1) the number of history items elicited—initial: 7.8 (95% CI: 7.1–8.5) and follow-up: 7.6 (95% CI: 6.9–8.3) and (2) the number of physical examination items completed—initial: 3.8 (95% CI: 3.4–4.1) and follow-up: 3.6 (95% CI: 3.2–4.0). Graphic summaries of the timed actions and the counted actions for initial and follow-up simulation comparisons are illustrated in Figures 4 and 5, respectively.
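For the dichotomous allergy outcome, the McNemar χ2 statistic depends on the discordant pairs, which the paper does not report; the sketch below uses hypothetical discordant counts chosen only to be consistent with the reported marginals (11 of 68 students asked initially, 17 at follow-up):

```python
import math

def mcnemar_chi2(b, c):
    """McNemar chi-square (1 df, no continuity correction) from the
    discordant pair counts b (yes -> no) and c (no -> yes)."""
    chi2 = (b - c) ** 2 / (b + c)
    p = math.erfc(math.sqrt(chi2 / 2))  # two-sided p for a 1-df chi-square
    return chi2, p

# Hypothetical: 1 student stopped asking, 7 started asking (11 - 1 + 7 = 17).
chi2, p = mcnemar_chi2(b=1, c=7)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```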

[Figure 4]

[Figure 5]

There were no statistically significant differences between the means of the measured end points when the preintervention simulation performance of group A was compared with that of group B. There were also no significant differences when the data were sorted on the basis of the order in which the randomly allocated individual cases were presented (complete small bowel obstruction or leaking abdominal aortic aneurysm). Interrater reliability for all parameters in the sample of 15 videos reviewed by a second observer showed a strong positive correlation: number of history items (0.80); number of examination items (0.92); time to cardiac monitor (1.00); time to blood pressure monitor (1.00); time to order IV (1.00); time to order labs (0.99); and time to order abdominal films (0.99). There was also 100% agreement on the single discrete measure of whether the student asked about allergies.
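A minimal sketch of the interrater computation, one Pearson correlation per end point across the 15 double-scored videos (the data here are invented for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two raters' scores for one end point."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative times (seconds) to order an IV as scored by each rater:
rater1 = [30, 45, 62, 90, 118]
rater2 = [31, 45, 60, 91, 119]
print(round(pearson_r(rater1, rater2), 2))  # near 1.0 for close agreement
```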


DISCUSSION

Our study demonstrated that experience derived from directing the evaluation and treatment of a seriously ill simulated patient resulted in improved performance of fourth-year medical students during subsequent simulated emergency department encounters. In their review of outcomes from simulation-based medical education programs, McGaghie et al22 found that high-fidelity simulator practice had a positive impact on standardized learning outcomes, with an association approximating a dose-response relationship. McMahon et al23 observed third-year medical student performance during simulation sessions added to the standard curriculum in an internal medicine clerkship. The students obtained an appropriate history and completed a brief physical examination, but they consistently had difficulty applying their medical knowledge to the care of a simulated critical patient. However, consistent with our results, they found that student performance of critical tasks improved with additional simulation experience. We quantified the consistency in performance of history and physical examinations in all individual sessions and the iterative improvement in timeliness of evaluation and intervention after multiple simulation sessions.

Steadman et al15 demonstrated a significant improvement in a group of fourth-year medical students instructed in simulation format compared with a group instructed in a problem-based format. However, the improvement was demonstrated only when the test topic was the same as the instruction topic. Our study demonstrated that simulation training led to significant improvements in the timeliness of student actions when the test topic was different from the training topic, indicating that the learned behavior was more generalized. The difference in our findings may be due to the different evaluation tools used in each of the studies: Steadman’s group used a weighted checklist oriented to the chief complaint, whereas we measured objective criteria to evaluate the timeliness of team leader-initiated evaluations and interventions. The common element in both studies is the improved outcome with simulation compared with a group discussion format.

In the current study, we assessed the impact of two levels of simulation experience on medical student performance as the team leader of a simulated emergency department resuscitation. Our primary assessment was a between-groups evaluation, which measured the impact of additional simulation experience in the form of a standard curriculum presented in group simulation format compared with case-based group discussion format. The second assessment was a within-subject comparison, which measured the impact of the single initial individual simulation/debriefing session on performance during a follow-up individual simulation session, with each student serving as his or her own control. Student performance improved in the follow-up simulated emergency department resuscitation compared with the initial simulation session, with statistically significant differences in six of eight areas measured. We also found a significantly greater improvement in the treatment group (who participated in eight simulation cases between sessions) compared with the control group (who completed the same eight cases in group discussion format between sessions) in four of eight areas involving time to predefined actions.

The difference in mean time to action varied from 43 to 171 seconds in areas achieving statistical significance. We believe these differences are also clinically significant. Considering the compression of time in most simulations, the differences in an actual clinical setting would likely be greater than those measured in the study. However, the main reason for considering these results clinically significant is that the simulation format enhanced achievement of our department’s long-standing goal to change student behavior when assessing a seriously ill patient. Improvement in the simulation/debriefing group was mainly attributed to an improvement in their initial approach to the patient: they learned to mobilize their team earlier and to initiate essential tasks concurrent with, rather than after, performance of the history and physical examination. This is in contrast to the typical clinic approach for the evaluation of nonacute patients, which involves completion of a full history and physical examination before initiating any additional work-up or treatment.

Overall, students demonstrated a significant improvement in asking the simulated patients about allergies to medications in the follow-up simulations compared with the initial simulations. However, the lack of a significant improvement in this behavior for the simulation group compared with the discussion group was disappointing and was most likely attributable to our failure to emphasize this issue during debriefings; the topic was incorporated into the debriefings but was not highlighted. In one of the simulation case series, conducted after the follow-up individual assessment, the patient stated that he was allergic to the first drug ordered when the student failed to ask about allergies. This near miss had a profound effect on the remainder of the group, all of whom asked about allergies before ordering any medications in subsequent simulations. This anecdotal experience suggests a method for stressing this key aspect of patient management and warrants further evaluation.

There were no significant differences between the two groups in the number of history or physical examination items completed. This was most likely attributable to the well-developed history and physical examination skills most students had before the start of the clerkship. Consequently, their improved efficiency involved the integration of history taking and physical examination with the other critical aspects of the bedside evaluation and treatment of the simulated patients.

Limitations

This study has a number of limitations. First, we evaluated student performance in a simulation laboratory and not in an emergency department. Ethical concerns limit our ability to assess student performance as a team leader directing the evaluation and treatment of a critical patient in a clinical setting; consequently, we cannot prove students would behave the same way in real patient care. However, we demonstrated that, after simulation-based instruction, students were better at applying the desired behaviors in the approach to a simulated seriously ill patient than after case-based group discussion instruction. Second, our assessment tool for team leader effectiveness consisted of a series of objective measurements of expected actions and was not a previously validated assessment scale. Nonetheless, assessments composed of quantified task completion and timeliness of task completion have been used in multiple studies designed to evaluate the impact of simulation.24–27 Third, all the digital recordings were analyzed by a single investigator, who was blinded to each student’s identity, group assignment, and the order of the individual simulation cases each month. Because a timer ran alongside the video recording and the end points were all timed, counted, or dichotomous events, we did not think confirmation of every recording by a second blinded evaluator was necessary. This impression was supported by the high interrater reliability in the randomly selected 10% sample evaluated by a second observer. Fourth, our results comparing simulation with case-based group discussion represent leadership skills measured from a single case. Fifth, our subjects represent students from a single institution, and we were not able to evaluate external validity.


CONCLUSIONS

Simulation is an effective tool for providing fourth-year medical students with an opportunity to practice the expanded range of skills required to evaluate and treat critical patients in urgent and emergent situations. A single simulation exposure with an accompanying debriefing produced significant improvement in a number of objective measurements of timely evaluation and intervention by the team leader. Additional simulation sessions resulted in significant improvement in four of eight measured areas compared with students receiving instruction in case-based group discussion format. A simulation-based curriculum enhances the educational experience for medical students by incorporating additional skills that they will need to fulfill the responsibilities of their future graduate medical education programs.


REFERENCES

1. Gordon J, Oriol N, Cooper J. Bringing good teaching cases “to life”: a simulator-based medical education service. Acad Med 2004;79:23–27.

2. Ziv A, Wolpe P, Small S, Glick S. Simulation-based medical education: an ethical imperative. Acad Med 2003;78:783–788.

3. Rogers P. Simulation in medical student critical thinking. Crit Care Med 2004;32(suppl 2):S70–S71.

4. Shavit I, Keidan I, Hoffman Y, et al. Enhancing patient safety during pediatric sedation: the impact of simulation-based training of nonanesthesiologists. Arch Pediatr Adolesc Med 2007;161:740–743.

5. Wayne D, Butter J, Siddall V, et al. Mastery learning of advanced cardiac life support skills by internal medicine residents using simulation technology and deliberate practice. J Gen Intern Med 2006;21:251–256.

6. Davis D, Buono C, Ford J, Paulson L, Koenig W, Carrison D. The effectiveness of a novel, algorithm-based difficult airway curriculum for air medical crews using human patient simulators. Prehosp Emerg Care 2007;11:72–79.

7. Rosenthal M, Adachi M, Ribaudo V, Mueck J, Schneider R, Mayo P. Achieving housestaff competence in emergency airway management using scenario based simulation training: comparison of attending vs. housestaff trainers. Chest 2006;129:1453–1458.

8. Tuttle R, Cohen M, Augustine A, et al. Utilizing simulation technology for competency skills assessment and a comparison of traditional methods of training to simulation-based training. Respir Care 2007;52:263–270.

9. Wayne D, Didwania A, Feinglass J, Fudala M, Barsuk J, McGaghie W. Simulation-based education improves quality of care during cardiac arrest team responses at an academic teaching hospital: a case-control study. Chest 2008;133:56–61.

10. Crofts J, Barlett C, Ellis D, Hunt L, Fox R, Draycott T. Training for shoulder dystocia: a trial of simulation using low-fidelity and high-fidelity mannequins. Obstet Gynecol 2006;108:1477–1485.

11. Wallin C, Meurling L, Hedman L, Hedegard J, Fellander-Tsai L. Target-focused medical emergency team training using a human patient simulator: effects on behavior and attitude. Med Educ 2007;41:173–180.

12. DeVita M, Shaefer J, Lutz J, Wang H, Dongilli T. Improving medical emergency team (MET) performance using a novel curriculum and a computerized human patient simulator. Qual Saf Health Care 2005;14:326–331.

13. Shapiro M, Morey J, Small S, et al. Simulation based teamwork training for emergency department staff: does it improve clinical team performance when added to an existing didactic teamwork curriculum? Qual Saf Health Care 2004;13:417–421.

14. Morgan P, Cleave-Hogg D, Desousa S, Lam-McCulloch J. Applying theory to practice in undergraduate education using high fidelity simulation. Med Teach 2006;28:e10–e15.

15. Steadman R, Coates W, Huang Y, et al. Simulation-based training is superior to problem-based learning for the acquisition of critical assessment and management skills. Crit Care Med 2006;34:151–157.

16. Weller J, Robinson B, Larsen P, Caldwell C. Simulation-based training to improve acute care skills in medical undergraduates. N Z Med J 2004;117:1119–1127.

17. Issenberg B, McGaghie W, Gordon D, et al. Effectiveness of a cardiology review course for internal medicine residents using simulation technology and deliberate practice. Teach Learn Med 2002;14:223–228.

18. Morgan P, Cleave-Hogg D, McIlroy J, Devitt J. A comparison of experiential and visual learning for undergraduate medical students. Anesthesiology 2002;96:10–16.

19. Schwartz L, Fernandez R, Kouyoumjian S, Jones K, Compton S. A randomized comparison trial of case-based learning versus human patient simulation in medical student education. Acad Emerg Med 2007;14:130–137.

20. Lehr R. Sixteen s-squared over d-squared: a relation for crude sample size estimates. Stat Med 1992;11:1099–1102.

21. Kern D, Thomas P, Howard D, Bass E. Curriculum Development for Medical Education: A Six-step Approach. Baltimore: The Johns Hopkins University Press; 1998.

22. McGaghie W, Issenberg S, Petrusa ER, Scalese R. Effect of practice on standardized learning outcomes in simulation-based medical education. Med Educ 2006;40:792–797.

23. McMahon G, Monaghan C, Falchuk K, Gordon J, Alexander E. A simulator-based curriculum to promote comparative and reflective analysis in an internal medicine clerkship. Acad Med 2005;80:84–89.

24. Hunt EA, Walker AR, Shaffner DH, Miller MR, Pronovost PJ. Simulation of in-hospital pediatric medical emergencies and cardiopulmonary arrests: highlighting the importance of the first 5 minutes. Pediatrics 2008;121:e34–e43.

25. Falcone RA, Daugherty M, Schweer L, Patterson M, Brown RJ, Garcia VF. Multidisciplinary pediatric team training using high fidelity trauma simulation. J Pediatr Surg 2008;43:1065–1071.

26. Ellis D, Crofts JF, Hunt LP, Read M, Fox R, James M. Hospital, simulation center, and teamwork training for eclampsia management. Obstet Gynecol 2008;111:723–731.

27. Hunziker S, Tschan F, Semmer NK, et al. Hands-on time during cardiopulmonary resuscitation is affected by the process of teambuilding: a prospective randomized simulator-based trial. BMC Emerg Med 2009;9:3.

Keywords:

Medical education; Leadership; Clinical skills; Simulation; Educational effectiveness; Randomized controlled trial; Medical training

© 2010 Lippincott Williams & Wilkins, Inc.
