Impact of standardized patients on the training of medical students to manage emergencies

Herbstreit, Frank Dr. med.a,*; Merse, Stefanie Dr. med.b; Schnell, Rainer Prof. Dr.c; Noack, Marceld; Dirkmann, Daniel PD Dr. med.a; Besuch, Annae; Peters, Jürgen Prof. Dr. med.f

Section Editor(s): Siddiqi, Haseeb A.

doi: 10.1097/MD.0000000000005933
Research Article: Clinical Trial/Experimental Study

Background: Teaching emergency management should impart to medical students not only facts and treatment algorithms but also time-effective physical examination, technical skills, and team interaction. We tested the hypothesis that training on standardized emergency patients is more effective in transmitting knowledge and skills than a more traditional teaching approach.

Methods: Medical students (n = 242) in their fourth (second clinical) year were randomized to receive either training on standardized patients simulating 3 emergency settings (“acute chest pain,” “stroke,” and “acute dyspnea/asthma”) or traditional small group seminars. Before and after the respective training pathways, the students’ knowledge base (multiple-choice examination) and practical performance (objective structured clinical examination using 3 different emergency scenarios) were assessed.

Results: Teaching using standardized patients resulted in a significant albeit small improvement in objective structured clinical examination scores (61.2 ± 3 for the standardized patient trained group vs 60.3 ± 3.5 for the traditional seminar group; P = 0.017, maximum achievable score: 66), but no difference in the written examination scores (27.4 ± 2.4 vs 27.0 ± 4.4; P = 0.341; maximum achievable score: 30).

Conclusion: Teaching management of emergencies using standardized patients can improve medical students’ performance in clinical tests, and a change from traditional seminars in favor of practice sessions with standardized patients does not compromise the learning of medical facts.

aStaff Anesthesiologist, Klinik für Anästhesiologie & Intensivmedizin, Universitätsklinikum Essen

bStandardized Patients Program, Student Deans office, Faculty of Medicine, Universität Duisburg-Essen

cProfessor, Institut für Soziologie, Universität Duisburg-Essen

dLecturer, Institut für Soziologie, Universität Duisburg-Essen

eMedical Student, Universität Duisburg-Essen

fProfessor of Anesthesiology & Intensive Care Therapy, Universität Duisburg-Essen, and Chairman, Klinik für Anästhesiologie & Intensivmedizin, Universitätsklinikum Essen, Essen, Germany.

Correspondence: Frank Herbstreit, Klinik für Anästhesiologie und Intensivmedizin, Universitätsklinikum Essen, Universität Duisburg-Essen, Hufelandstr. 55, 45122 Essen, Germany (e-mail:

Abbreviations: ACLS = advanced cardiac life support, EMT = emergency medical technician, OSCE = objective structured clinical examination.

Authors’ contribution: FH, JP, and SM conceived the study; FH and JP designed the trial; SM recruited and trained simulated patients; FH and AB recruited participants and coordinated student training; FH and DD designed the scenarios and the OSCE score sheets; RS, MN, SM, and FH designed the questionnaires; FH, AB, RS, MN, and JP analyzed the data; FH drafted the manuscript; and all authors contributed substantially until its completion.

Funding: The study was conducted with financial support from the Faculty of Medicine, Universität Duisburg-Essen.

The authors have no conflicts of interest to disclose.

This is an open access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.

Received October 10, 2016

Received in revised form December 23, 2016

Accepted December 28, 2016

1 Introduction

Teaching the management of medical emergencies to medical students and residents alike should improve not only factual knowledge but also practical skills such as rapid patient assessment, manual skills, and overall care.

Many techniques can be taught on simulators, including airway management and endotracheal intubation,[1,2] cardiopulmonary resuscitation,[3] and vascular access. However, even high-fidelity simulators, although useful for teaching procedural skills[4] such as endotracheal intubation and more complex scenarios,[5,6] do not provide physician–patient interaction and probably do not improve communication skills. Accordingly, standardized patients, sometimes referred to as simulated patients, are increasingly used in the education of students and residents,[11–15] and, albeit costly compared with traditional seminars, they may improve trainee communication skills[7–10] and teaching in various medical fields.[16,17]

However, the effect of standardized patients on clinical performance has received little study, mostly in observational studies or via self-report satisfaction questionnaires.[5] Few controlled trials have assessed the impact of simulation on the teaching of medical emergencies,[18,19] and randomized controlled trials are lacking.

With tightening budgets of medical schools and limited physician time, money should only be spent on teaching techniques that have proven to be effective.

Thus, our study was designed to assess whether medical students’ performance is altered after being trained either on standardized patients or using traditional seminars. Two outcomes were analyzed:

  1. Performance in a written multiple choice test to assess factual knowledge
  2. Performance in an objective structured clinical examination (OSCE) to assess technical and communicative skills.

Additionally, students were asked after the course how well they felt prepared to handle a similar emergency in the future (perceived preparedness).

2 Material and methods

2.1 Study cohort and interventions

After approval by the local ethics committee of the medical faculty (Ethikkommission der Medizinischen Fakultät, Robert Koch-Str. 9–11, 45147 Essen, Germany) and its faculty board committee on teaching, and after the students’ written informed consent, 2 consecutive classes of medical students were included. The participants were fourth (second clinical) year medical students (n = 274) scheduled to attend a 2-week course in emergency medicine. Three emergency scenarios were assessed: acute chest pain, stroke, and acute dyspnea/asthma. The students completed a questionnaire regarding potential confounders. Biographic data, previous medical training, and, in particular, the amount of previous exposure to or training in treating medical emergencies (eg, as an EMT or paramedic) were recorded for all participants.

Participants were randomized via a computer-generated random list to receive either 3 training sessions (90 minutes each) on standardized patients or 3 traditional seminars of equal duration, each covering the 3 scenarios. For randomization, each participant was assigned a number, and the computer randomly assigned a form of training (simulation or seminar) to each participant. The order of cases was also assigned at random using computer-generated random numbers.
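As an illustrative sketch only (the article does not name the randomization software, and the participant identifiers below are hypothetical), the allocation procedure could look like this:

```python
import random

CASES = ["acute chest pain", "stroke", "acute dyspnea/asthma"]

def randomize(participant_ids, seed=None):
    """Assign each participant at random to one training pathway and
    give each a randomly ordered sequence of the 3 emergency cases.
    Sketch only: the study's actual software and list are not reported."""
    rng = random.Random(seed)
    allocation = {}
    for pid in participant_ids:
        allocation[pid] = {
            "arm": rng.choice(["standardized patients", "seminar"]),
            "case_order": rng.sample(CASES, k=len(CASES)),
        }
    return allocation

# Example: 274 numbered participants, as in the study cohort
alloc = randomize(range(1, 275), seed=42)
```

Note that this shows simple (unrestricted) randomization; whether the study balanced arm sizes is not specified in the text.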

Standardized patients were professional actors recruited and trained by medical faculty to portray a standardized patient for each scenario. The traditional seminars were taught using standardized educational materials (presentation and a script for the lecturer).

The 2 emergency medical technicians (EMTs) assisting the student team leader being examined were also standardized and portrayed by trained employees from our Department of Anesthesiology and Intensive Care Medicine. The assistants were unaware of the students’ prior study allocation and history, and they were instructed both to be cooperative and to perform the routine tasks of a nurse or EMT in an emergency setting.

Staff members scoring the students’ written knowledge-base tests, the standardized patients in the OSCE, and the examiners scoring the OSCE were all unaware of the students’ allocation to the traditional or the standardized patient learning pathway. All OSCE examinations were scored by the same staff members.

2.2 Measurements

The effects of the training method were assessed by comparing the 2 groups’ performance in a written examination and a clinical examination at the completion of the course. After the course, the students’ factual knowledge base was examined using a written test composed of 30 multiple-choice questions. The multiple-choice questions had been used in previous cohorts and validated using various techniques; for instance, inverse receiver operator characteristics were calculated for every question, analyzing its ability to discriminate students with a better overall performance from those with worse results. Clinical and communication skills had been evaluated in 3 previous OSCEs[20] for each of the 3 scenarios taught.

The 3 OSCE cases were scored using case-specific score sheets, with a maximum of 22 points achievable per case. Thus, the maximum score was 66 for the 3 OSCE stations and 30 for the written test. The OSCE score sheet included both clinical tasks and communicative aspects, and each student was examined while functioning as the leader of a medical emergency team of 3 providers. As a performance measure unrelated to the 3 scenarios taught, the participants’ performance in a resuscitation scenario (ACLS) using a high-fidelity simulator (METIman, CAE, St. Laurent, Quebec, Canada) was also studied. All students received identical training in ACLS before this examination. Because there was 1 evaluator for each OSCE case, the same raters scored every student, and a measure of inter-rater reliability was unnecessary.

The study design is depicted in Fig. 1 and an exemplary score sheet for 1 of the scenarios is shown in Fig. 2.

Figure 1

Figure 2

For both the traditional teaching and the standardized patient pathways, students were also asked how well they felt prepared to handle such a patient after the lesson (perceived competence). A numerical rating scale was used, graded from 1 (“I feel much better prepared”) to 5 (“I feel much worse”).

2.3 Statistical analyses

Results are depicted as means ± standard deviation (SD) unless otherwise indicated. SPSS statistical software (IBM, Armonk, NY), Stata (StataCorp LP, College Station, TX), and R (R Core Team) were used for statistical computations.

Because we were interested in a change evoked by training, we looked for a strong effect of training on the results of the written examination. Therefore, an a priori power analysis was performed assuming a Cohen effect size d[21,22] of 0.5, an alpha error of 0.05, and a power of 0.95, which required 105 participants in each study arm. Since a single semester cohort would not have yielded an adequate sample size, the study was conducted over a period of 2 semesters.
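The sample-size figure can be reproduced with a short stdlib-only Python sketch using the normal approximation for a two-sided, two-sample t test plus a common small-sample correction; this is a reconstruction under those assumptions, as the original power-analysis software is not stated:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.95):
    """Per-group sample size for a two-sided, two-sample t test.
    Normal approximation plus the common correction z_{1-a/2}^2 / 4."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # approx. 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # approx. 1.64 for power = 0.95
    return ceil(2 * ((z_a + z_b) / d) ** 2 + z_a ** 2 / 4)
```

With d = 0.5, alpha = 0.05, and power = 0.95, this yields 105 per group, matching the figure reported above.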

A Shapiro-Wilk test was applied to test for normal distribution, and a Student t test for unpaired samples was used to compare mean values of normally distributed variables between study cohorts. The results of the questionnaire testing the preparedness of the students were compared between the conventional group and the group being trained on simulated patients using the Kruskal-Wallis test.
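The unpaired t test on summary statistics can be sketched in stdlib Python; the Welch (unequal-variance) variant below matches the form reported later for the OSCE comparison, but the numbers in the usage example are illustrative, not the study's data:

```python
from math import sqrt

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's unequal-variance t statistic and its degrees of freedom
    (Welch-Satterthwaite) from per-group mean, SD, and group size."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df
```

For example, `welch_t(1, 1, 100, 0, 1, 100)` gives t ≈ 7.07 with df = 198.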

An a priori alpha error of P < 0.05 was considered statistically significant.

3 Results

In all, 274 students were initially enrolled and randomly allocated to either the traditional seminar or the standardized patient teaching pathway; 242 students completed the course, and their examination scores and data were subsequently analyzed (Fig. 1). Thirty-two students were excluded from the data analysis because they did not participate fully in the course, having missed training sessions due to sickness (n = 30) or dropout (n = 2).

The incidence of the students’ previous medical experience, for example, training as a nurse or paramedic, internships in emergency medicine, or dedicated courses in emergency medicine, did not differ between cohorts, and neither did the students’ age, sex, or years of enrolment in medical school. Thus, these aspects cannot be regarded as potential confounders.

3.1 OSCE

For the 3 OSCE scenarios with standardized patients, the students taught with traditional seminars scored an average of 60.3 ± 3.5 points (mean ± SD), whereas students of the standardized patient cohort scored 61.2 ± 3 points (t = −2.140 [Student t test; 1-tailed, unequal variances], d.f. = 221, difference = −0.913; P = 0.017, Cohen d: −0.279; Fig. 3).
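The reported effect size can be approximately reproduced from the summary statistics above; the sketch below pools the 2 SDs under an equal-group-size assumption, since the exact per-arm group sizes are not reported:

```python
from math import sqrt

def cohens_d(mean1, sd1, mean2, sd2):
    """Cohen's d from summary statistics, pooling the 2 SDs under an
    equal-group-size assumption (per-arm ns are not reported here)."""
    pooled_sd = sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (mean1 - mean2) / pooled_sd

d = cohens_d(60.3, 3.5, 61.2, 3.0)  # OSCE means/SDs reported above
```

This gives d ≈ −0.28, close to the reported −0.279; the small discrepancy reflects the rounded means and the unequal group sizes.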

Figure 3

In contrast, the students’ performance in the ACLS scenario, that is, a scenario unrelated to the test scenarios, did not differ between cohorts (standardized patient cohort: 15 ± 1.3 points vs traditional cohort: 15.2 ± 1.1 points; P = 0.253).

3.2 Knowledge (written examination scores)

The traditional seminar cohort's average was 27.0 ± 4.4 points, and the students taught on standardized patients scored an average of 27.4 ± 2.4 points (t = −0.955 [Student t test; 2-tailed, equal variances], d.f. = 240, difference = −0.427; P = 0.341, Cohen d: −0.123; Fig. 3).

3.3 Students’ perceived competence (self-assessment)

When asked (numerical rating scale) how well they felt prepared to handle a particular emergency after having received instruction, students taught using standardized patients for the stroke scenario felt better prepared (Kruskal-Wallis; P < 0.0001). However, there was no significant difference between subcohorts for the chest pain or acute dyspnea/asthma scenarios (Kruskal-Wallis; P = 0.067 and P = 0.899, respectively).

4 Discussion

Students taught on standardized patients demonstrated a small but statistically significant benefit in clinical testing (OSCE) without showing a disadvantage in medical knowledge compared with their fellow students taught with traditional seminars. Although statistically significant, the better test results probably do not reflect a relevantly better performance, as the effect size is small.

Although this difference is small in absolute size, it still seems remarkable in several ways.

First, the standardized patient group, despite forgoing a traditional seminar, showed no inferior performance in the written examination testing factual knowledge. Thus, at least in our setting, replacing traditional teaching with standardized patients does not compromise the acquisition of factual knowledge. However, teaching with standardized patients consumes considerably more human resources and time, for example, instructors, elaborate preparation including recruitment and training of actors, preparation of the scenario for each participant (including applying moulage to the actors and readying medical equipment), and instruction and feedback for the participants. The apparently small benefit resulting from teaching with standardized patients may or may not justify these investments.

Students were asked after each course about their perceived preparedness for handling a patient presenting with the respective emergency syndrome. Whereas there were no differences between cohorts for the acute chest pain or acute dyspnea scenarios, the group taught the stroke scenario using standardized patients felt significantly better prepared. The reason for this difference cannot be pinpointed by our study. However, one may speculate that recognition and assessment of stroke involve more patient interaction and physical examination, whereas the acute chest pain and dyspnea scenarios follow a rather rigid algorithm requiring execution of predetermined tasks. Thus, teaching with standardized patients may be more effective in some scenarios than in others. No studies have addressed these issues of medical education so far.

4.1 Limitations

Some limitations of this study should be discussed. Our faculty has strict requirements on the specifics and design of OSCEs, and case scenarios must not be longer than 6 minutes. A longer physician–patient interaction might be required to demonstrate even better behavioral skills. Three defined scenarios were used, with the trained standardized patients presenting very specific symptoms; thus, little history taking was required to arrive at a diagnosis. Accordingly, the selected scenarios and the assessment of teaching results might not fully capture the skills the students acquired through teaching with standardized patients.

We did not perform any testing before the course and thus cannot report a gain in factual knowledge or skills. Our written examinations are not validated for repetition with different questions for comparison of results, and repeating the OSCE for the purpose of this study would have created an undesirable training effect.

Furthermore, we tested a specific cohort of medical students at an advanced stage of medical training. Teaching with simulated patients may yield different results in other cohorts, that is, younger medical students, interns, or residents. However, addressing effects in all these groups was beyond the scope of our study.

In conclusion, teaching the management of medical emergencies using standardized patients slightly but significantly improved medical students’ performance in a structured clinical test compared with a traditional seminar, without compromising factual knowledge. Whether this small improvement in student performance is meaningful, given tight budgets and the considerable additional resources required, must be decided individually.

References

[1]. Howells TH, Emery FM, Twentyman JE. Endotracheal intubation training using a simulator. An evaluation of the Laerdal adult intubation model in the teaching of endotracheal intubation. Br J Anaesth 1973;45:400–2.
[2]. Vennila R, Sethuraman D, Charters P. Evaluating learning curves for intubation in a simulator setting: a prospective observational cumulative sum analysis. Eur J Anaesthesiol 2012;29:544–5.
[3]. Ahnefeld FW, Dick W, Dolp R, et al. A teaching and training device for resuscitation. The “AMBU-simulator” (author's transl). Der Anaesthesist 1975;24:547–51.
[4]. Druck J, Valley MA, Lowenstein SR. Procedural skills training during emergency medicine residency: are we teaching the right things? West J Emerg Med 2009;10:152–6.
[5]. McFetrich J. A structured literature review on the use of high fidelity patient simulators for teaching in emergency medicine. Emerg Med J 2006;23:509–11.
[6]. Meguerdichian DA, Heiner JD, Younggren BN. Emergency medicine simulation: a resident's perspective. Ann Emerg Med 2012;60:121–6.
[7]. Bosse HM, Schultz JH, Nickel M, et al. The effect of using standardized patients or peer role play on ratings of undergraduate communication training: a randomized controlled trial. Patient Educ Counsel 2012;87:300–6.
[8]. Lagan C, Wehbe-Janek H, Waldo K, et al. Evaluation of an interprofessional clinician-patient communication workshop utilizing standardized patient methodology. J Surg Educ 2013;70:95–103.
[9]. Ravitz P, Lancee WJ, Lawson A, et al. Improving physician-patient communication through coaching of simulated encounters. Acad Psychiatry 2013;37:87–93.
[10]. Wehbe-Janek H, Song J, Shabahang M. An evaluation of the usefulness of the standardized patient methodology in the assessment of surgery residents’ communication skills. J Surg Educ 2011;68:172–7.
[11]. Rinker B, Donnelly M, Vasconez HC. Teaching patient selection in aesthetic surgery: use of the standardized patient. Ann Plast Surg 2008;61:127–31. discussion 32.
[12]. Hernandez C, Mermelstein R, Robinson JK, et al. Assessing students’ ability to detect melanomas using standardized patients and moulage. J Am Acad Dermatol 2013;68:e83–8.
[13]. Wanat KA, Kist J, Jambusaria-Pahlajani A, et al. Improving students’ ability to perform skin examinations and detect cutaneous malignancies using standardized patients and moulage. J Am Acad Dermatol 2013;69:816–7.
[14]. Doolen J, Giddings M, Johnson M, et al. An evaluation of mental health simulation with standardized patients. Int J Nurs Educ Scholarsh 2014;11.
[15]. Shirazi M, Lonka K, Parikh SV, et al. A tailored educational intervention improves doctor's performance in managing depression: a randomized controlled trial. J Eval Clin Pract 2013;19:16–24.
[16]. Josephson SA, Gillum LA. An intervention to teach medical students ankle reflex examination skills. Neurologist 2010;16:196–8.
[17]. Herbstreit F, Fassbender P, Haberl H, et al. Learning endotracheal intubation using a novel videolaryngoscope improves intubation skills of medical students. Anesth Analg 2011;113:586–90.
[18]. Morgan PJ, Cleave-Hogg D, McIlroy J, et al. Simulation technology: a comparison of experiential and visual learning for undergraduate medical students. Anesthesiology 2002;96:10–6.
[19]. Burdick WP, Escovitz ES. Use of standardized patients in a freshman emergency medicine course. J Emerg Med 1992;10:627–9.
[20]. Johnson G, Reynard K. Assessment of an objective structured clinical examination (OSCE) for undergraduate students in accident and emergency medicine. J Accident Emerg Med 1994;11:223–6.
[21]. Wilkinson L, Task Force on Statistical Inference. Statistical methods in psychology journals: guidelines and explanations. Am Psychol 1999;54:594–604.
[22]. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.

Keywords: clinical scenario; medical emergencies; medical training; simulation training

Copyright © 2017 The Authors. Published by Wolters Kluwer Health, Inc. All rights reserved.