
Simulated patient-based teaching of medical students improves pre-anaesthetic assessment

A rater-blinded randomised controlled trial

Berger-Estilita, Joana M.; Greif, Robert; Berendonk, Christoph; Stricker, Daniel; Schnabel, Kai P.

European Journal of Anaesthesiology: May 2020 - Volume 37 - Issue 5 - p 387-393
doi: 10.1097/EJA.0000000000001139


Introduction

Anaesthesiology has been taught in medical schools since the early 1950s.1 Topics such as airway management, cardiopulmonary resuscitation, pain management, operating theatre exposure and postoperative critical care are usually taught through lectures,2–4 although specific anaesthesiology curricula in medical schools are far from homogeneous.5

Even though the involvement of anaesthetists in university undergraduate curricula varies, the inclusion of anaesthesia clerkships (or internships), whether compulsory or not, is increasing in medical school programmes.6 The goals of these anaesthesia clerkships are to master basic clinical skills in airway management and vascular access, to gain experience in handling common anaesthesia procedures and intra-operative clinical problems, and to learn about career possibilities as peri-operative physicians.

Pre-anaesthetic assessment of patients is common to most such anaesthesia clerkships. Its importance lies in the timely recognition of potential risk factors for intra-operative and/or postoperative complications and, when possible, in the opportunity to optimise the patient's physical status to reduce such complications.7 Guidelines from the major anaesthesiology societies7,8 indicate that pre-anaesthesia assessments should include an interview with the patient or guardian to review the medical, anaesthesia and medication history of the patient; an appropriate physical examination; a review of the diagnostic data; and an assignment of the American Society of Anesthesiologists physical status score (ASA-PS9).

Pre-anaesthetic assessment is a complex competency that all medical students need to be taught, although the teaching strategies can be challenging. Most instruction is provided in an 'apprentice' format, with Anaesthesia Department staff or residents as the instructors, and sometimes under considerable time constraints. In such a setting, medical students construct and experience their own clinical curriculum10 independent of the initially agreed instructional outcomes. Investigations into effective teaching methods for such pre-operative assessments will therefore benefit both learners and teachers, and will contribute to increased patient safety, in line with the Helsinki Declaration on Patient Safety in Anaesthesiology.11

Both the apprentice-type bedside teaching style in the operating room and lectures that transmit factual knowledge are commonly used teaching strategies in undergraduate anaesthesia education. In recent years, universities have also adopted simulation, thereby broadening their teaching approaches. Simulated patients are lay persons or actors trained to portray specific medical roles or symptoms.12 These highly trained nonphysicians can take on the roles of patients and assessors to realistically replicate patient encounters.10

With tightening budgets in medical schools and pressure on physicians' time, which is primarily devoted to patient care, the delivery of teaching to medical students has become more challenging. For these reasons, we aimed to investigate the effectiveness of simulated patients in teaching medical students to perform standard pre-anaesthetic assessments of patients according to international recommendations. We hypothesised that a single 30-min teaching sequence given by a simulated patient would improve the performance of year 4 medical students in pre-anaesthetic assessments of real patients.

Methods

Study population

The study participants gave written informed consent, and the Bern Cantonal Ethics Committee (KEK Bern BASEC-Nr. Req-2019-00269) waived the need for ethics approval according to the Swiss Act on Human Research. All students in the Medical Faculty of the University of Bern who were due to take the mandatory 1-week anaesthesia clerkship in year 4 of their studies were invited to participate. Participation was voluntary and had no effect on academic grading.

Procedures

During the autumn semester, one senior Anaesthesia Department staff member (RG) taught the purpose and content of pre-anaesthesia visits to all year 4 medical students as part of the standard anaesthesia education, in a 1-h lecture entitled 'An introductory course to clinical clerkships' (see Table 1 for the learning outcomes). All of the students then took part, in small groups of nine, in the mandatory 1-week anaesthesia clerkship, from February to November of the following year.

Table 1: Pre-anaesthetic assessment lecture contents

All 144 students from the year 4 cohort agreed to participate in the current study and were randomly assigned to two groups (www.randomization.com, 2008 version). On the first clerkship day, the control group received conventional bedside 'apprentice' teaching in the operating room, while the intervention group received a 30-min teaching session with a simulated patient. In the afternoon of the same day, the students in both groups carried out pre-anaesthetic assessments of scheduled surgical patients on the ward under the supervision of rating anaesthesiologists, who were specifically trained to evaluate student performance and were blinded to the randomisation. To ensure that both groups could benefit from the simulated patient interaction, the control group was given the same simulated patient training on the following day (see Fig. 1 for the study flowchart).
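For illustration only, the minimal sketch below shows what a simple 1:1 random allocation of 144 students could look like in Python; the trial itself used www.randomization.com, and the seed and student identifiers here are hypothetical.

```python
import random

# Hypothetical sketch of 1:1 random allocation; the trial used
# www.randomization.com (2008 version), not this code.
random.seed(42)  # arbitrary seed, for reproducibility of the sketch

students = [f"student_{i:03d}" for i in range(1, 145)]  # 144 students
random.shuffle(students)

half = len(students) // 2
control = students[:half]       # conventional bedside 'apprentice' teaching
intervention = students[half:]  # extra 30-min simulated patient session

print(len(control), len(intervention))  # 72 72
```

Note that a forced-equal split gives 72 per group, whereas the published allocation produced groups of 71 and 73, so the original scheme evidently did not constrain the group sizes to be exactly equal.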

Fig. 1: Study flowchart.

Simulated patient training and simulated patient-led teaching

The simulated patients were laypersons recruited and trained in feedback skills by medical professionals of the Bern Institute of Medical Education. One senior Anaesthesia Department staff member (RG) and one senior simulated patient trainer (KPS) trained eight people as simulated patients over 2 h. The training focused on teaching the simulated patients to play a single scripted role (Table 2) and then to give constructive and corrective feedback on student performance in taking an anaesthesia-related patient history and conducting a focused anaesthesia-related clinical examination. A self-developed feedback form (Supplemental Digital Content 1, https://links.lww.com/EJA/A251) helped the simulated patients provide structured feedback in the same way to all of the students. To avoid cognitive overload in the students, the university's standard feedback form encourages a focus on three main positive aspects and three points for improvement, resulting in a positively framed overall impression. In addition, a section was added to check the correctness of each student's ASA classification, their Mallampati score assessment, their examination of the patient's cervical and dental status, the 'red flag' questions about allergies and complications during previous anaesthesia, the patient's history of angina pectoris, and the fasting times to be respected before the planned operation. The senior anaesthetist examined each of the simulated patients and informed them of their individual correct Mallampati scores. Training continued until the trainers were satisfied with the simulated patients' performance.

Table 2: Instructions for the simulated patient

The simulated patient teaching session for the intervention group was held in a separate room and was divided into two parts: first, the students performed a 15-min focused pre-anaesthesia patient history and clinical examination, with the simulated patient in the patient role. During the following 15 min, the students received structured verbal and written feedback from the simulated patient on their performance, with strategies for improvement. The feedback form was given to the students at the end of the session. No other staff member was involved in these simulated patient teaching sessions.

Measurements

During the afternoon of the first clerkship day, all of the participating students assessed real patients scheduled for elective surgery the following day, according to the standards of the Anaesthesia Department of Bern University Hospital. To appraise student performance, staff anaesthesiologists used the mini-Clinical Evaluation Exercise (mini-CEX) tool. This tool is widely used in undergraduate and postgraduate medical education,13,14 for both formative and summative assessment and feedback.15 It is also the current method of assessment for all clinical clerkships at the University of Bern during years 4 and 5 (Supplemental Digital Content 2, https://links.lww.com/EJA/A252). For this study, the mini-CEX forms were filled out after the students had completed their patient encounters.

The supervising anaesthesiologists rated the complexity of the patient encounters as low, average or high, and recorded the duration of the direct observation as well as the duration of the feedback (in minutes). Using a 10-point Likert scale that ranged from 1 (great need for improvement) to 10 (little need for improvement), the supervisors rated the following domains: domain 1, history taking; domain 2, physical examination (including airways assessment); domain 3, communication of anaesthesia management; domain 4, ASA-PS classification9; domain 5, work organisation and efficiency of the focused patient encounter; and domain 6, professional behaviour of the medical student. At the end, an overall performance score was attributed (domain 7). A detailed description of the mini-CEX has been published elsewhere.16

It has been a consistent finding that mini-CEX domain scores correlate highly with each other,17–19 and that different mini-CEX domain scores measure a single global dimension.20–22 For these reasons, the mini-CEX mean domain score (expressed as the mean of the items 1 to 6) and the overall impression score (expressed by item 7) were used to appraise the student performances for their pre-anaesthesia assessment visits.
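As a minimal sketch of how these two summary scores are derived from a single rating form (the field names below are ours, not the official mini-CEX labels, and the scores are invented):

```python
from statistics import mean

# One hypothetical mini-CEX rating; each domain is scored from
# 1 (great need for improvement) to 10 (little need for improvement).
encounter = {
    "history_taking": 9,            # domain 1
    "physical_examination": 8,      # domain 2
    "communication_management": 8,  # domain 3
    "asa_ps_classification": 9,     # domain 4
    "organisation_efficiency": 8,   # domain 5
    "professional_behaviour": 9,    # domain 6
    "overall_impression": 9,        # domain 7
}

# Mean domain score = mean of domains 1 to 6; overall impression = domain 7.
domains_1_to_6 = [k for k in encounter if k != "overall_impression"]
mean_domain_score = mean(encounter[k] for k in domains_1_to_6)

print(f"mean domain score: {mean_domain_score:.2f}")  # 8.50
print(f"overall impression: {encounter['overall_impression']}")
```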

Statistics

As we were interested in a change in student performance after a short session of simulated patient teaching, the entire cohort of students participating in the anaesthesia clerkship was included. In previous studies, typical mean scores on this 10-point mini-CEX scale were around 7 to 9, with a SD of 1.23 To detect a difference of 0.5 on this scale with a two-sided α error of 5% and a power of 80%, 63 participants per group were required24 (https://clincalc.com/stats/samplesize.aspx). We assumed that a beneficial change would result from the simulated patient training, so the a priori power analysis was performed for a Cohen effect size d of 0.5. The effect size is the magnitude of the difference between groups and serves to quantify the effectiveness of an intervention.25 Cohen's d stratifies effect sizes as small (d = 0.2), medium (d = 0.5) and large (d ≥ 0.8), where 'a medium effect of 0.5 is visible to the naked eye of a careful observer'.24 The distributions of task complexity in the control and intervention groups were compared using χ2 statistics. Tukey's tests of nonadditivity were performed to determine whether domains 1 to 6 could legitimately be summed, that is, whether the mini-CEX mean domain score could be expressed as the mean of domains 1 to 6. Cronbach's α was calculated to determine the internal consistency of the mini-CEX domain scores. The relationship between the mini-CEX overall impression scores and the mini-CEX mean domain scores was investigated using Pearson's product–moment correlation coefficient. The mini-CEX mean domain scores and overall impression scores were compared between the control and intervention groups using Student's t tests for two independent samples, with homogeneity of variance assessed using Levene's tests.
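The a priori sample size can be reproduced with standard power-analysis software; a minimal sketch in Python using statsmodels, rather than the clincalc.com calculator the authors used:

```python
# Re-deriving the a priori sample size (a sketch; the authors used
# https://clincalc.com/stats/samplesize.aspx).
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # assumed medium effect, Cohen's d
    alpha=0.05,       # two-sided type I error
    power=0.80,       # 1 - beta
    ratio=1.0,        # equal allocation
)
print(round(n_per_group))  # ~64, in line with the reported 63 per group
```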

A probability of less than 0.05 was considered as statistically significant, with Bonferroni corrections applied where appropriate. The data are presented as means ± SD or as percentages. The SPSS Statistical Software (IBM, Armonk, New York, USA) was used for these statistical computations.

Results

The entire cohort of 144 students was randomised to the control (n=71; conventional bedside 'apprentice' teaching in the operating room) or intervention (n=73; extra 30-min teaching session with a simulated patient) group. In total, 133 of the 144 students were analysed: eight students asked to reschedule their clerkships and three assessments had incomplete data, and these cases were excluded from the final analysis (dropout rate, 7.6%). The proportions of males and females were equivalent in the two groups (P = 0.753). There were no differences in the duration of the observations (15 ± 10 min; P = 0.410) or the duration of the feedback (6 ± 2 min; P = 0.962). Task complexity was equally distributed between the two groups (P = 0.608). The Cronbach's α coefficient for domains 1 to 6 was 0.87, demonstrating high internal consistency.
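The internal consistency statistic can be computed directly from the student-by-domain score matrix. As the raw scores are not published, the sketch below uses simulated (hypothetical) data purely to illustrate the computation:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_students, n_items) matrix of ratings."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each domain
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data for 133 students x 6 domains: a shared 'ability'
# component plus item noise, clipped to the 1-10 rating scale.
rng = np.random.default_rng(0)
ability = rng.normal(8.5, 0.8, size=(133, 1))
ratings = np.clip(ability + rng.normal(0.0, 0.6, size=(133, 6)), 1, 10)

print(f"alpha = {cronbach_alpha(ratings):.2f}")  # around 0.9 for these data
```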

The students in the intervention group scored higher than those in the control group in every domain, although not all of the differences reached statistical significance: history taking (8.6 ± 1.0 vs. 8.2 ± 1.0; P = 0.02), physical examination (8.8 ± 1.1 vs. 8.4 ± 1.3; P = 0.113), communication of peri-operative management (8.1 ± 1.4 vs. 7.8 ± 1.5; P = 0.418), assessment and ASA classification (8.8 ± 1.1 vs. 8.3 ± 1.3; P = 0.050), organisation and efficiency (8.5 ± 1.3 vs. 7.9 ± 1.2; P = 0.019) and professional behaviour (9.2 ± 0.9 vs. 8.8 ± 1.1; P = 0.022).

Tukey's tests of nonadditivity showed that the mini-CEX domains of 1 to 6 could be summed for the mini-CEX mean domain score [F(1, 279) = 2.277; P = 0.132]. The Pearson's correlation between item 7 (the overall impression score) and the mini-CEX mean domain score was 0.858 (P < 0.001).

The students in the intervention group scored significantly higher in their overall impression score [domain 7; 8.8 ± 0.8 vs. 8.3 ± 0.9; t(df = 131) = 2.858; P = 0.004; Cohen's d = 0.56], as well as in their mini-CEX mean domain score [8.7 ± 0.8 vs. 8.3 ± 0.9; t(df = 128) = 3.145; P = 0.01; Cohen's d = 0.50].
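The effect size can be approximately recovered from the published summary statistics. In the sketch below, the per-group sizes after exclusion (68 vs. 65) are assumptions, as the exact split is not reported, and small discrepancies from the published t and d values reflect the rounding of the reported means and SDs:

```python
import math
from scipy import stats

# Overall impression score (domain 7), mean +/- SD as reported;
# the per-group sizes after exclusions are assumed, not reported.
m1, s1, n1 = 8.8, 0.8, 68  # intervention (assumed n)
m2, s2, n2 = 8.3, 0.9, 65  # control (assumed n)

# Cohen's d using the pooled SD of two independent samples
sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
d = (m1 - m2) / sp

# Student's t test recomputed from the summary statistics alone
t, p = stats.ttest_ind_from_stats(m1, s1, n1, m2, s2, n2)
print(f"d = {d:.2f}, t({n1 + n2 - 2}) = {t:.2f}, P = {p:.4f}")
```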

Discussion

The current study shows that one short teaching intervention by a trained simulated patient can significantly improve student performance in pre-anaesthesia patient visits, as measured with the mini-CEX tool. Remarkably, this 30-min teaching sequence with a trained nonphysician simulated patient improved overall student performance by a medium effect size of about 0.5. This implies that teaching by a trained simulated patient can support adequate acquisition of the core competencies of pre-anaesthesia patient assessment. It also opens a wide field for the further application of simulated patients as teachers in other environments where students need to learn to perform focused patient assessments, such as pain clinics, emergency rooms and ICUs. Our findings support those of Herbstreit et al.,26 who showed improvements in medical students' skills with a simulated patient intervention in the emergency room.

In anaesthesia, the teaching of 'bedside' skills owes much to the apprenticeship model, especially in the operating theatre. A graded approach provides trainees with a stepwise way of learning under close supervision, in which the teacher leads the trainee on to ever more complex tasks, with the aim of independent performance.12 Nonetheless, this learning process is time and resource consuming, and it requires dedicated supervisors with both technical and andragogical skills.

Recently, simulation has also been introduced27–29 as a way to teach anaesthesia procedural skills, although it does not provide any physician–patient interaction. With regard to pre-anaesthetic assessment of patients, previous studies have not been conclusive in their support of alternative forms of teaching.30 However, training and assessment with simulated patients has been used successfully and extensively in undergraduate medical education,31–33 and as our study shows for the first time, trained laypersons acting as simulated patients can teach medical students to perform pre-anaesthetic assessments successfully in real patient encounters. The simulated patients in the current study were trained to portray specific medical histories and physical symptoms for the students to assess in about 15 min. Their training also covered how to deliver structured, positively framed, corrective and constructive feedback, focused on the students' professional approach to the patients and on how they structured their focused clinical assessments. As the students who underwent the simulated patient teaching performed better in their subsequent clinical encounters with real patients than the control group did, we attribute the significant improvement in clinical performance to the structured intervention with specific feedback from the simulated patients.

The mini-CEX scores of student performance showed notable ceiling effects. Ceiling effects occur when a test appears to be too easy, such that almost all of those tested reach the maximum score.23 At first glance, this finding might appear strange, as a pre-anaesthetic visit is a new and complex skill for the students, in which they have little previous teaching and practical experience. However, such high scores in formative workplace assessments like the mini-CEX have been well described previously.20–23 In workplace assessments, clinical supervisors who act as assessors often take on the role of coaches, focusing on student achievement rather than on strict score allocation.34 Given the high baseline scores, it is all the more remarkable that the current study demonstrated improved performance with just one short intervention.

A limitation of this study is that the simulated patients used only one clinical scenario, with a very specific clinical picture. Although the teaching involved a single scenario, the evaluation with the mini-CEX covered heterogeneous pre-anaesthesia visit situations. Even so, statistically significant differences in history-taking skills were noted between the two student groups, demonstrating an improvement in skills. Training on this one 'standard situation' was thus associated with better performance in varied 'real' situations, which suggests that a single training scenario can prepare students for a broad variety of different patient encounters.

We also did not perform any testing before the intervention. Had the level of competence been assessed beforehand, an estimate of the total growth in competence would have been possible, which would have further strengthened our data. However, such a pre-interventional assessment was not logistically possible.

We are not yet able to pinpoint the reasons why this simulated patient training improved student performance; indeed, this was never the focus of the current study. We suggest that a qualitative study (using interviews or focus groups) could determine why this short simulated patient training leads to better results.

We tested a cohort of year 4 medical students with no previous specific anaesthesia training. Teaching with simulated patients might produce different results in more experienced cohorts, but addressing such effects in stratified groups was beyond the scope of the current study.

This was a single-centre trial, which may raise concerns about generalisability. In addition, a longitudinal assessment of our intervention was not possible, as we did not have the opportunity to bring these students back to the Anaesthesia Department after their clerkships. Furthermore, little is known about the effectiveness of engaging simulated patients in teaching and the subsequent impact on patient outcomes. A recent review29 indicated improvements in resident learning, although it did not provide any further clear direction.

Finally, the literature on the economic benefits of using simulated patients is currently limited.32 We did not perform any cost–benefit assessment of this intervention, as this was beyond the scope of the study, but we would assume that the benefits of simulated patient teaching may also translate into economic savings.

Nonetheless, a strength of this educational study is the successful use of nonhealthcare laypersons, trained as simulated patient teachers, to educate medical students in an anaesthesia setting. In addition, our study design was an experimental randomised controlled trial, which is methodologically sound because it limits selection bias and enhances internal validity. Moreover, our sample size was among the largest in the simulated patient literature. Finally, as a consequence of this study, a simulated patient teaching programme has been implemented in the year 4 medical curriculum at the University of Bern, a sustainable local change in teaching practice.

In conclusion, this randomised controlled educational study shows that a single teaching encounter with a trained nonphysician simulated patient can significantly improve the performance of year 4 medical students in their pre-anaesthetic clinical assessments of surgical patients. This kind of teaching might thus be suitable for many clinical teaching situations where physicians are not available.

The article adheres to the appropriate CONSORT checklist.

Acknowledgements relating to this article

Assistance with the study: the authors would like to acknowledge all of the staff of the Department of Anaesthesiology and Pain Medicine for the mini-CEX assessments and all of the simulated patients, students and patients who participated in this study. For statistical support, we would like to acknowledge Anja Rogausch (Institute for Medical Education).

Financial support and sponsorship: this study was supported by the Department of Anaesthesiology and Pain Medicine, Inselspital, Bern University Hospital, Bern, Switzerland.

Conflicts of interest: none.

Presentation: none.

References

1. Harbord RP. The teaching of anaesthesia to medical students. Br J Anaesth 1954; 26:64–73.
2. Curry SE. Teaching medical students clinical anesthesia. Anesth Analg 2018; 126:1687–1694.
3. Rohan D, Ahern S, Walsh K. Defining an anaesthetic curriculum for medical undergraduates. A Delphi study. Med Teach 2009; 31:e1–e5.
4. Overton MJ, Smith NA. Anaesthesia priorities for Australian and New Zealand medical school curricula: a Delphi consensus of academic anaesthetists. Anaesth Intensive Care 2015; 43:51–58.
5. Cheung V, Critchley LA, Hazlett C, et al. A survey of undergraduate teaching in anaesthesia. Anaesthesia 1999; 54:4–12.
6. Smith AF, Sadler J, Carey C. Anaesthesia and the undergraduate medical curriculum. Br J Anaesth 2018; 121:993–996.
7. De Hert S, Staender S, Fritsch G, et al. Preoperative evaluation of adults undergoing elective noncardiac surgery: updated guidelines from the European Society of Anaesthesiology. Eur J Anaesthesiol 2018; 35:407–465.
8. Apfelbaum JL, Connis RT, Nickinovich DG, et al. Practice advisory for preanesthesia evaluation: an updated report by the American Society of Anesthesiologists Task Force on preanesthesia evaluation. Anesthesiology 2012; 116:522–538.
9. Owens WD, Felts JA, Spitznagel EL Jr. ASA physical status classifications: a study of consistency of ratings. Anesthesiology 1978; 49:239–243.
10. Stillman PL, Swanson DB. Ensuring the clinical competence of medical school graduates through standardized patients. Arch Intern Med 1987; 147:1049–1052.
11. Mellin-Olsen J, Staender S, Whitaker DK, et al. The Helsinki declaration on patient safety in anaesthesiology. Eur J Anaesthesiol 2010; 27:592–597.
12. Nestel D, Burn CL, Pritchard SA, et al. The use of simulated patients in medical education: guide supplement 42.1 – viewpoint. Med Teach 2011; 33:1027–1029.
13. Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide No. 31. Med Teach 2007; 29:855–871.
14. Mortaz Hejri S, Jalili M, Shirazi M, et al. The utility of mini-Clinical Evaluation Exercise (mini-CEX) in undergraduate and postgraduate medical education: protocol for a systematic review. Syst Rev 2017; 6:146.
15. Weston PS, Smith CA. The use of mini-CEX in UK foundation training six years following its introduction: lessons still to be learned and the benefit of formal teaching regarding its utility. Med Teach 2014; 36:155–163.
16. Montagne S, Rogausch A, Gemperli A, et al. The mini-Clinical Evaluation Exercise during medical clerkships: are learning needs and learning goals aligned? Med Educ 2014; 48:1008–1019.
17. Norcini JJ, Blank LL, Duffy FD, et al. The mini-CEX: a method for assessing clinical skills. Ann Intern Med 2003; 138:476–481.
18. Kogan JR, Bellini LM, Shea JA. Feasibility, reliability, and validity of the mini-Clinical Evaluation Exercise (mCEX) in a medicine core clerkship. Acad Med 2003; 78:S33–S35.
19. Cook DA, Beckman TJ. Does scale length matter? A comparison of nine- versus five-point rating scales for the mini-CEX. Adv Health Sci Educ Theory Pract 2009; 14:655–664.
20. Ney EM, Shea JA, Kogan JR. Predictive validity of the mini-Clinical Evaluation Exercise (mCEX): do medical students’ mCEX ratings correlate with future clinical exam performance? Acad Med 2009; 84:S17–S20.
21. Fernando N, Cleland J, McKenzie H, et al. Identifying the factors that determine feedback given to undergraduate medical students following formative mini-CEX assessments. Med Educ 2008; 42:89–95.
22. Kogan JR, Holmboe ES, Hauer KE. Tools for direct observation and assessment of clinical skills of medical trainees: a systematic review. J Am Med Assoc 2009; 302:1316–1326.
23. Berendonk C, Rogausch A, Gemperli A, et al. Variability and dimensionality of students’ and supervisors’ mini-CEX scores in undergraduate medical clerkships – a multilevel factor analysis. BMC Med Educ 2018; 18:100.
24. Cohen J. Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.
25. Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet 1986; 1:307–310.
26. Herbstreit F, Merse S, Schnell R, et al. Impact of standardized patients on the training of medical students to manage emergencies. Medicine (Baltimore) 2017; 96:e5933.
27. Hallikainen J, Vaisanen O, Randell T, et al. Teaching anaesthesia induction to medical students: comparison between full-scale simulation and supervised teaching in the operating theatre. Eur J Anaesthesiol 2009; 26:101–104.
28. Drummond D, Delval P, Abdenouri S, et al. Serious game versus online course for pretraining medical students before a simulation-based mastery learning course on cardiopulmonary resuscitation: a randomised controlled study. Eur J Anaesthesiol 2017; 34:836–844.
29. Vennila R, Sethuraman D, Charters P. Evaluating learning curves for intubation in a simulator setting: a prospective observational cumulative sum analysis. Eur J Anaesthesiol 2012; 29:544–545.
30. Carrero E, Gomar C, Penzo W, et al. Comparison between lecture-based approach and case/problem-based learning discussion for teaching preanaesthetic assessment. Eur J Anaesthesiol 2007; 24:1008–1015.
31. Cantillon P, Stewart B, Haeck K, et al. Simulated patient programmes in Europe: collegiality or separate development? Med Teach 2010; 32:e106–e110.
32. Kaplonyi J, Bowles KA, Nestel D, et al. Understanding the impact of simulated patients on healthcare learners’ communication skills: a systematic review. Med Educ 2017; 51:1209–1219.
33. Collins JP, Harden RM. AMEE Medical Education Guide No. 13: real patients, simulated patients and simulators in clinical examinations. Med Teach 1998; 20:508–521.
34. Govaerts MJ, Van de Wiel MW, Schuwirth LW, et al. Workplace-based assessment: raters’ performance theories and constructs. Adv Health Sci Educ Theory Pract 2013; 18:375–396.


Copyright © 2020 European Society of Anaesthesiology. All rights reserved.