Editorial

Simulation in undergraduate medical education

O'Flynn, Siun; Shorten, George

European Journal of Anaesthesiology: February 2009 - Volume 26 - Issue 2 - p 93-95
doi: 10.1097/EJA.0b013e32831a47df

In this issue, Hallikainen et al. [1] describe medical student performance at induction of anaesthesia following standardized training provided either in a clinical or in a simulated environment. The students who received training in the simulated environment performed better when assessed using the same simulator 1–3 weeks later. The authors describe a framework that employed many of the features of simulation known to lead to effective learning, as documented by the Best Evidence Medical Education (BEME) group [2]: provision of feedback, repetitive practice, curriculum integration, a range of levels of difficulty, multiple learning strategies, capture of clinical variation, a controlled environment, individualized learning, reproducible and standardized educational experiences in which learners are active participants rather than passive bystanders, defined outcomes and simulator validity. It appears that, when appropriately structured along these lines, training in simulated environments confers benefits in terms of formative feedback and collaborative learning.

In their discussion, Hallikainen et al.[1] acknowledge certain limitations of their study design, in particular the greater familiarity that the ‘simulator-trained’ students had with their test environment. Steps were taken to reduce evaluation apprehension in the clinically trained cohort by orientating them within the simulator environment prior to testing. It would have been useful to assess the baseline perceptual, visuospatial and psychomotor abilities of the participants as these influence the rate of attainment of proficiency in other skills [3]. Other sources of confounding might include sex, hand dominance and experience with computer games [4].

The subject of the article [1] raises important questions regarding simulation and the teaching of clinical skills to medical students.

  1. Is proficiency in performance of certain procedural skills a legitimate learning objective for medical students?

If one considers the range of procedures a new graduate is expected to perform independently, the answer is clearly ‘yes’. In the UK, for example, the General Medical Council has made this expectation explicit [5].

  2. What constitutes proficiency at the undergraduate level?

A greater understanding of the nature of competence has caused medical trainers to revise their approach to postgraduate training and assessment of performance. It is clear that assessment of competence must ‘go beyond the identification of who practitioners are, on the basis of evidence of their personal attributes or dated credentials, to capture what they actually do in the context of contemporary practice’ [6]. The achievement of competence in this ‘performance’ sense may not be a legitimate objective at the undergraduate level.

Currently, medical students do not meet the ‘expected standards’ in terms of practical skills [7,8]. Although there is consensus regarding standards for the cognitive domains of undergraduate curricula, no such consensus exists for procedural skills. Clearly, any such standards will be specific to the level of training and to the procedure: for medical students, familiarity with the ‘steps’ entailed may be sufficient for some procedures (e.g. central line insertion), whereas automaticity (or mastery) might more reasonably be expected for others (e.g. intravenous cannulation). The curricular time devoted to each should reflect the target attainment level. The ‘integrated procedural performance instrument’ described by Kneebone et al. [9] offers one practical approach of merit to assessing competence.

  3. Can simulation be used to optimize teaching and learning of clinical skills at the undergraduate level in terms of efficacy, cost-effectiveness, safety and feasibility?

Several factors have conspired to increase the prevalence of simulation-based training and assessment of procedural skills. These include the changing profile of hospital patients, political accountability and more active professional regulation. Perhaps the greatest driver has been the change in societal expectations: patients now rightly demand competence from their attending doctors but are less willing to participate in their training. This ‘push’ away from traditional methods is coupled with a ‘pull’ towards technology-enhanced learning, as technological advances have enabled increasingly cost-effective, high-fidelity reproduction of clinical events in an interactive and pedagogically sound way.

Simulation and clinical skills laboratories provide a forgiving, safe environment in which to learn procedures. Some anxiety is useful in learning but excess anxiety undermines the process. Medical students experience significant stress when required to learn procedures as novices on patients [10]. Instructional science demonstrates that the acquisition of expertise in clinical medicine is governed by the learners' engagement in deliberate practice, a facility afforded by simulation [11].

‘We can’ does not necessarily mean ‘we should’. It is not clear that improved performance in a simulated environment implies equivalent improvement in clinical practice. For postgraduate training, the evidence is equivocal: one review of simulation research indicates that only a minority of simulation-based programmes convincingly demonstrate impact at level 3 or above of the Kirkpatrick hierarchy, that is, a measurable change in workplace behaviour rather than merely in learner reaction or knowledge [12]. However, in some surgical settings (e.g. laparoscopic cholecystectomy [13]), convincing evidence exists that simulation-based training results in clinical performance superior to that achieved with traditional methods. These results must be extrapolated to the undergraduate setting with caution. The laparoscopic procedures studied, with their substantial reliance on visual input from a limited field of interest (the monitor), may be particularly suited to translation from simulated to clinical environments. The benefits may be less clear for ‘real world’ clinical problems that require integration of input from many different sources (e.g. the trauma patient wheeled into a busy resuscitation room).

Definitive clinical assessment requires the application of valid and reliable tools, or sets of ‘metrics’, which do not yet exist for most procedures. Hallikainen et al. [1] point out the practical difficulties of undertaking standardized assessment in the operating theatre. It would have been interesting had the authors also assessed the performance of anaesthetic trainees and experts in order to establish the construct validity of the tools used, that is, to determine whether they reliably discriminate between novices and experts.

Data interrogating the ‘real’ costs of medical education and training are sparse and contentious. Issenberg et al. [14] make a case for the cost-effectiveness of simulation-based medical education in cardiology teaching. Hallikainen et al. suggest that some economic benefit will result from teaching at a 1:5 trainer-to-student ratio (in their simulation group) compared with 1:1 in the clinical group. This argument is limited, being based only on staff costs for a 3-h training period. In similar work, Owen and Plummer [15] found that learning in groups of two students (compared with groups of five) was more effective, which would offset any potential economic benefit. Shorter clinical training times afforded by the use of simulation may also reduce costs but, to date, convincing evidence for this contention at the undergraduate level is lacking.
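As a purely illustrative sketch of the staffing arithmetic behind this claim (the hourly trainer cost c is a hypothetical symbol introduced here, not a figure reported in the study), for a 3-h session the trainer cost per student is

\[ \frac{3c}{5} = 0.6c \ \text{(simulation group, 1:5)} \qquad \text{versus} \qquad 3c \ \text{(clinical group, 1:1)}, \]

a five-fold difference that holds only if staff time is the sole cost considered and if learning outcomes at the two ratios are equivalent, an assumption that Owen and Plummer's finding calls into question.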

The application of simulation to clinical skills training is not without risks. Simulation training increases students' confidence, but confidence and competence are not necessarily equivalent [16,17]. Inevitably, clinical skills that are not practised decay [18]; the inherent risk is that the learner is not always aware of this. Again, Kneebone et al. [19] suggest a viable solution, namely the provision of ‘distributed learning resources alongside the workplace so that each learner would have access to a range of simulations appropriate to his or her level of expertise coupled with the opportunities to access them whenever prompted by clinical need’.

In undergraduate medical education, simulation facilitates the learning of core clinical skills, those that are basic and nonspecialist in nature. Anaesthetists have been at the forefront of harnessing simulation for the teaching and assessment of skills at both undergraduate and postgraduate levels, from early collaboration with Lærdal, a toy manufacturer, in producing anatomical models, to the work of Gaba, Good and Gravenstein. This thought-provoking article by Hallikainen et al. [1] continues that valuable work.

References

1 Hallikainen J, Väisänen O, Randell T, et al. Teaching anaesthesia induction to medical students: comparison between full scale simulation and supervised teaching in the operating theatre. Eur J Anaesthesiol 2009; 26.
2 Issenberg SB, McGaghie WC, Petrusa ER, et al. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach 2005; 27:10–28.
3 Ritter EM, McClusky DA 3rd, Gallagher AG, et al. Perceptual, visuospatial, and psychomotor abilities correlate with duration of training required on a virtual-reality flexible endoscopy simulator. Am J Surg 2006; 192:379–384.
4 Grantcharov TP, Bardram L, Funch-Jensen P, Rosenberg J. Impact of hand dominance, gender, and experience with computer games on performance in virtual reality laparoscopy. Surg Endosc 2003; 17:1082–1085.
5 General Medical Council. Tomorrow's doctors: recommendations on undergraduate medical education. London: GMC; 1993.
6 Norcini JJ. Current perspectives in assessment: the assessment of performance at work. Med Educ 2005; 39:880–889.
7 Ringsted C, Schroeder TV, Henriksen J, et al. Medical students' experience in practical skills is far from stakeholders' expectations. Med Teach 2001; 23:412–416.
8 Coberly L, Goldenhar LM. Ready or not, here they come: acting interns' experience and perceived competency performing basic medical procedures. J Gen Intern Med 2007; 22:491–494.
9 Kneebone R, Nestel D, Yadollahi F, et al. Assessing procedural skills in context: exploring the feasibility of an integrated procedural performance instrument. Med Educ 2006; 40:1105–1114.
10 Du Boulay C, Medway C. The clinical skills resource: a review of current practice. Med Educ 1999; 33:185–191.
11 Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med 2004; 79(10 Suppl):S70–S81.
12 American College of Surgeons. Technical skills education in surgery. 2006 http://www.facs.org/education/technicalskills. [Accessed 21 July 2008].
13 Ahlberg G, Enochsson L, Gallagher AG, et al. Proficiency-based virtual reality training significantly reduces the error rate for residents during their first 10 laparoscopic cholecystectomies. Am J Surg 2007; 193:797–804.
14 Issenberg SB, McGaghie WC, Hart IR, et al. Simulation technology for healthcare professional skills training and assessment. J Am Med Assoc 1999; 282:861–866.
15 Owen H, Plummer J. Improving learning of a clinical skill: the first year's experience of teaching endotracheal intubation in a clinical simulation facility. Med Educ 2002; 36:635–642.
16 Morgan PJ, Cleave-Hogg D. Comparison between medical students' experience, confidence and competence. Med Educ 2002; 36:534–539.
17 Barnsley L, Lyon PM, Ralston SJ, et al. Clinical skills in junior medical officers: a comparison of self-reported confidence and observed competence. Med Educ 2004; 38:358–367.
18 Arthur W Jr, Bennett W Jr, Stanush PL, McNelly TL. Factors that influence skill decay and retention: a quantitative review and analysis. Hum Perform 1998; 11:57–101.
19 Kneebone RL, Scott W, Darzi A, Horrocks M. Simulation and clinical practice: strengthening the relationship. Med Educ 2004; 38:1095–1102.
© 2009 European Society of Anaesthesiology