For both the traditional teaching and the standardized patient pathways, students were also asked how well prepared they felt to handle such a patient after the lesson (perceived competence). A numerical rating scale was used, graded from 1 (“I feel much better prepared”) to 5 (“I feel much worse”).
2.3 Statistical analyses
Results are depicted as means ± standard deviation (SD) unless otherwise indicated. SPSS statistical software (IBM, Armonk, NY), Stata (StataCorp LP, College Station, TX), and R (R Core Team, www.r-project.org) were used for statistical computations.
Since we were interested in a change evoked by training, we looked for a strong effect of the training on the results of the written examination. Therefore, an a priori power analysis was performed assuming a Cohen effect size d[21,22] of 0.5, an alpha error of 0.05, and a power of 0.95; 105 participants were required in each study arm. Since a single semester cohort would not have yielded an adequate sample size, the study was conducted over a period of 2 semesters.
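The stated sample size can be reproduced with a standard power calculation. The sketch below is an illustration using only the Python standard library (the authors do not report which tool they used); it applies the usual normal-approximation formula for a two-sided, two-sample t test with Guenther's small-sample correction term.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(d, alpha, power):
    """Approximate per-group sample size for a two-sided, two-sample t test.

    Uses the normal approximation 2 * ((z_a + z_b) / d)^2 plus Guenther's
    correction z_a^2 / 4 to account for the t distribution.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha/2
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    return ceil(2 * ((z_a + z_b) / d) ** 2 + z_a ** 2 / 4)

# Parameters from the study: d = 0.5, alpha = 0.05, power = 0.95
print(n_per_arm(0.5, 0.05, 0.95))  # -> 105 participants per study arm
```

This reproduces the 105 participants per arm reported above; a dedicated power package (e.g., G*Power or statsmodels) yields essentially the same number.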
A Shapiro-Wilk test was applied to test for normal distribution, and a Student t test for unpaired samples was used to compare mean values of normally distributed variables between study cohorts. The results of the questionnaire testing the students' preparedness were compared between the conventional group and the group trained on standardized patients using the Kruskal-Wallis test.
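The testing pipeline above can be sketched as follows. This is a minimal illustration with simulated data (the study's raw scores are not published here); the group means, SDs, and sizes are placeholders loosely based on the reported summary statistics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical scores for illustration only (means/SDs mimic the paper).
traditional = rng.normal(27.0, 4.4, size=120)
standardized = rng.normal(27.4, 2.4, size=122)

# 1) Shapiro-Wilk test for normal distribution in each cohort.
_, p_norm_trad = stats.shapiro(traditional)
_, p_norm_std = stats.shapiro(standardized)

# 2) Normally distributed variables: unpaired Student t test.
t_stat, p_ttest = stats.ttest_ind(traditional, standardized, equal_var=True)

# 3) Ordinal questionnaire ratings: Kruskal-Wallis test (with 2 groups this
#    is equivalent to a Mann-Whitney U test).
h_stat, p_kw = stats.kruskal(traditional, standardized)

print(p_norm_trad, p_norm_std, p_ttest, p_kw)
```

Note that with only two cohorts the Kruskal-Wallis test reduces to the Mann-Whitney U test; both are valid choices here.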
An a priori alpha error P of less than 0.05 was considered to be statistically significant.
In all, 274 students were initially enrolled and randomly allocated to either the traditional seminar or the standardized patient teaching pathway; 242 students completed the course and the examination, and their scores were subsequently analyzed (Fig. 1). Thirty-two students were excluded from the data analysis because they did not participate fully in the course, having missed training sessions because of sickness (30) or dropout.
The incidence of the students' previous medical experience, for example, training as a nurse or paramedic, internships in emergency medicine, or dedicated courses in emergency medicine, did not differ among cohorts, and neither did the students' age, sex, or years of enrolment in medical school. These factors can therefore be excluded as potential confounders.
For the 3 OSCE scenarios with standardized patients, the students taught with traditional seminars scored an average of 60.3 points ± 3.5 (±SD), whereas students of the standardized patient cohort scored 61.2 points ± 3 (t = −2.140 [Student t test; 1-tailed, unequal variances], d.f. = 221, difference = −0.913; P = 0.017, Cohen d: −0.279; Fig. 3).
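The reported effect size can be approximated from the summary statistics alone. The sketch below computes Cohen's d with a simple pooled SD (assuming roughly equal group sizes); the small discrepancy from the published −0.279 stems from rounding of the means and the slightly unequal cohort sizes.

```python
from math import sqrt

def cohens_d(mean1, sd1, mean2, sd2):
    # Cohen's d with a pooled SD; assumes roughly equal group sizes.
    pooled_sd = sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (mean1 - mean2) / pooled_sd

# OSCE scores: traditional 60.3 +/- 3.5 vs standardized patients 61.2 +/- 3.0
d = cohens_d(60.3, 3.5, 61.2, 3.0)
print(round(d, 3))  # -> -0.276, close to the reported -0.279
```

By Cohen's conventional benchmarks (d ≈ 0.2 small, 0.5 medium, 0.8 large), this is a small effect, consistent with the interpretation in the Discussion.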
In contrast, the students’ performance in the ACLS scenario, that is, a scenario unrelated to the test scenarios, did not differ between cohorts (standardized patient cohort: 15 points ± 1.3 vs traditional cohort: 15.2 points ± 1.1; P = 0.253).
3.2 Knowledge (written examination scores)
The traditional seminar cohort's average was 27.0 points ± 4.4 and the students taught on standardized patients scored an average of 27.4 ± 2.4 points (t = −0.955 [Student t test; 2-tailed, equal variances], d.f. = 240, difference = −0.427; P = 0.341, Cohen d: −0.123; Fig. 3).
3.3 Students’ perceived competence (self-assessment)
When asked (numerical rating scale) how well prepared they felt to handle a particular emergency after having received instruction, the students taught using standardized patients for the stroke scenario felt better prepared (Kruskal-Wallis; P < 0.0001). However, there was no significant difference between subcohorts for the chest pain or acute dyspnea/asthma scenarios (Kruskal-Wallis; P = 0.067 and P = 0.899, respectively).
Students taught on standardized patients demonstrated a small but statistically significant benefit in clinical testing (OSCE) without showing a disadvantage in medical knowledge compared with their fellow students taught in traditional seminars. Although statistically significant, the better test results probably do not reflect a relevantly better performance, as the effect size is small.
Although this difference is small in absolute terms, it still seems remarkable in several respects.
First, the standardized patient group received no traditional seminar yet showed no inferior performance in the written examination testing factual knowledge. Thus, at least in our setting, replacing traditional teaching with standardized patients does not compromise the acquisition of factual knowledge. However, teaching with standardized patients consumes considerably more human resources and time, for example, instructors, elaborate preparation including recruiting and training the actors, setting up the scenario for each participant (applying moulage to the actors and preparing medical equipment), and providing instruction and feedback to the participants. The apparently small benefit of teaching with standardized patients may or may not justify these investments.
Students were asked after each course about their perceived preparedness for handling a patient presenting with the respective emergency syndrome. Whereas there were no differences between cohorts for the acute chest pain or acute dyspnea scenarios, the group taught the stroke scenario using standardized patients felt significantly better prepared. The reason for this difference cannot be pinpointed by our study. However, one may speculate that recognition and assessment of stroke involve more patient interaction and physical examination, whereas the acute chest pain and dyspnea scenarios follow a rather rigid algorithm requiring execution of predetermined tasks. Thus, teaching with standardized patients may be more effective in some scenarios than in others. To our knowledge, no studies have addressed this aspect of medical education so far.
Some limitations of this study should be discussed. Our faculty has strict requirements on the specifics and design of the OSCE, and case scenarios must not exceed 6 minutes; a longer physician–patient interaction might be required to demonstrate even better behavioral skills. Three defined scenarios, with the trained standardized patients presenting very specific symptoms, were used; thus, little history taking was required to arrive at a diagnosis. Accordingly, the selected scenarios and the assessment of teaching results might not fully capture the skills the students acquired through teaching with standardized patients.
We did not perform any testing before the course and are therefore unable to report a gain in factual knowledge or skills. Our written examinations are not validated for repetition with different questions for comparison of results, and repeating the OSCE for the purpose of this study would have created an undesirable training effect.
Furthermore, we tested a specific cohort of medical students in an advanced stage of medical training. Potentially, teaching using simulated patients may yield different results in other cohorts, that is, younger medical students, interns, and residents. However, to address effects in all these groups was beyond the scope of our study.
In conclusion, teaching the management of medical emergencies using standardized patients slightly but significantly improved medical students’ performance in a structured clinical test compared with a traditional seminar cohort, without compromising factual knowledge. Whether this small improvement in student performance is meaningful enough, given tight budgets and the considerable additional resources required, must be decided individually.
1. Howells TH, Emery FM, Twentyman JE. Endotracheal intubation training using a simulator. An evaluation of the Laerdal adult intubation model in the teaching of endotracheal intubation. Br J Anaesth 1973;45:400–2.
2. Vennila R, Sethuraman D, Charters P. Evaluating learning curves for intubation in a simulator setting: a prospective observational cumulative sum analysis. Eur J Anaesthesiol 2012;29:544–5.
3. Ahnefeld FW, Dick W, Dolp R, et al. A teaching and training device for resuscitation. The “AMBU-simulator” (author's transl). Der Anaesthesist 1975;24:547–51.
4. Druck J, Valley MA, Lowenstein SR. Procedural skills training during emergency medicine residency: are we teaching the right things? West J Emerg Med 2009;10:152–6.
5. McFetrich J. A structured literature review on the use of high fidelity patient simulators for teaching in emergency medicine. Emerg Med J 2006;23:509–11.
6. Meguerdichian DA, Heiner JD, Younggren BN. Emergency medicine simulation: a resident's perspective. Ann Emerg Med 2012;60:121–6.
7. Bosse HM, Schultz JH, Nickel M, et al. The effect of using standardized patients or peer role play on ratings of undergraduate communication training: a randomized controlled trial. Patient Educ Counsel 2012;87:300–6.
8. Lagan C, Wehbe-Janek H, Waldo K, et al. Evaluation of an interprofessional clinician-patient communication workshop utilizing standardized patient methodology. J Surg Educ 2013;70:95–103.
9. Ravitz P, Lancee WJ, Lawson A, et al. Improving physician-patient communication through coaching of simulated encounters. Acad Psychiatry 2013;37:87–93.
10. Wehbe-Janek H, Song J, Shabahang M. An evaluation of the usefulness of the standardized patient methodology in the assessment of surgery residents’ communication skills. J Surg Educ 2011;68:172–7.
11. Rinker B, Donnelly M, Vasconez HC. Teaching patient selection in aesthetic surgery: use of the standardized patient. Ann Plast Surg 2008;61:127–31. discussion 32.
12. Hernandez C, Mermelstein R, Robinson JK, et al. Assessing students’ ability to detect melanomas using standardized patients and moulage. J Am Acad Dermatol 2013;68:e83–8.
13. Wanat KA, Kist J, Jambusaria-Pahlajani A, et al. Improving students’ ability to perform skin examinations and detect cutaneous malignancies using standardized patients and moulage. J Am Acad Dermatol 2013;69:816–7.
14. Doolen J, Giddings M, Johnson M, et al. An evaluation of mental health simulation with standardized patients. Int J Nurs Educ Scholarsh 2014;11.
15. Shirazi M, Lonka K, Parikh SV, et al. A tailored educational intervention improves doctor's performance in managing depression: a randomized controlled trial. J Eval Clin Pract 2013;19:16–24.
16. Josephson SA, Gillum LA. An intervention to teach medical students ankle reflex examination skills. Neurologist 2010;16:196–8.
17. Herbstreit F, Fassbender P, Haberl H, et al. Learning endotracheal intubation using a novel videolaryngoscope improves intubation skills of medical students. Anesth Analg 2011;113:586–90.
18. Morgan PJ, Cleave-Hogg D, McIlroy J, et al. Simulation technology: a comparison of experiential and visual learning for undergraduate medical students. Anesthesiology 2002;96:10–6.
19. Burdick WP, Escovitz ES. Use of standardized patients in a freshman emergency medicine course. J Emerg Med 1992;10:627–9.
20. Johnson G, Reynard K. Assessment of an objective structured clinical examination (OSCE) for undergraduate students in accident and emergency medicine. J Accident Emerg Med 1994;11:223–6.
21. Wilkinson L, Task Force on Statistical Inference. Statistical methods in psychology journals: guidelines and explanations. Am Psychol 1999;54:594–604.
22. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.
Keywords: clinical scenario; medical emergencies; medical training; simulation training

Copyright © 2017 The Authors. Published by Wolters Kluwer Health, Inc. All rights reserved.