Editorial

Postgraduate training in anaesthesiology, resuscitation and intensive care: state-of-the-art for trainee evaluation and assessment in Europe

Van Gessel, Elisabeth; Goldik, Zeev; Mellin-Olsen, Jannicke; for the Education and Training Standing Committee of the European Board of Anaesthesiology, Reanimation and Intensive Care

European Journal of Anaesthesiology: August 2010 - Volume 27 - Issue 8 - p 673-675
doi: 10.1097/EJA.0b013e32833cad28

In the context of multiple postgraduate training reforms in many medical specialties throughout Europe,1 evaluation and assessment of both residents and curricula have consistently lagged behind.

In the last decade, competency-based curricula in anaesthesiology, resuscitation and intensive care have been slowly implemented in Europe, based mainly on the Canadian Medical Education Directions for Specialists (CanMEDS) framework for doctor competencies, which requires that a competent doctor be proficient in seven distinct roles.2 This framework is an outcome-based education model covering a broad range of activity domains and the competencies to be acquired within them. Learning outcomes that underpin these competencies have been defined and should permit appropriate trainee evaluation and assessment.

The European Board of Anaesthesiology, Reanimation and Intensive Care, which operates under the European Union of Medical Specialists (Union Européenne des Médecins Spécialistes, UEMS), has recently revised its guidelines for training.3 Although the paper does not endorse CanMEDS or any similar framework, it does emphasise the use of realistic and measurable training endpoints along with proper tools for evaluation. The latter will enable definition of the current standard of practice required of a specialist in anaesthesiology, resuscitation and intensive care after 4–6 years of training.

This editorial briefly reviews and proposes a spectrum of evaluation tools to be implemented in the existing curricula, with a more specific discussion about the place, purpose and standard of our existing European diploma specialist examination. It should be viewed as a complement to the UEMS policy statement on assessments during specialist postgraduate medical training (PGMT) published in 2006.4

How does one choose appropriate tools for trainee evaluation and assessment?

Assessment of the trainee includes more than just recognising the 4–6 years spent in practical training and the associated clinical experience. For the majority of curricula, an end-of-training examination exists that tests mainly knowledge. However, with implementation of competency-based curricula and learning outcomes, a greater emphasis is being placed on multiple forms of performance-based assessment.

Each level of clinical competence offers learning opportunities that may be combined with a valid assessment of the trainee, covering all aspects from knowledge (KNOWS) to action (DOES) (Table 1). This approach was originally proposed by Miller, who described his framework for clinical assessment in the form of a pyramid.5 It ensures that the trainee not only acquires the knowledge and technical skills but is also able to perform and act as a specialist in the clinical context of the specialty, to a required minimum standard.

Table 1: Learning opportunities and assessment tools at the different levels of clinical competence required

In an analysis of the trends in postgraduate medical education, Harden6 discusses the theme of competency and performance-based assessments and asks important questions that follow from examination of Miller's framework:

  1. KNOWS (knowledge): What knowledge have the trainees acquired?
  2. KNOWS HOW (competence): Have they mastered appropriate clinical judgement and decision-making skills?
  3. SHOWS HOW (performance): Have they mastered the necessary clinical skills and required practical procedures?
  4. DOES (action): Do they exhibit appropriate attitudes and professionalism? Can they work effectively and efficiently with other team members?

The available assessment tools, both formative and summative, which have been implemented and, more importantly, validated worldwide, can be grouped as follows.

  1. Formative in-training evaluations: these include wide-ranging peer assessments and direct observation of trainees [e.g. the mini-clinical evaluation exercise (Mini-CEX) and direct observation of procedural skills (DOPS)], as well as case-based discussions and so on; they are intended mainly to give feedback and to monitor trainees' progress in their everyday training.
  2. Self-assessment tools, including use of portfolios and logbooks: trainees are expected to take responsibility for their training and, through a portfolio or other reporting tools [cumulative sum control chart (CUSUM) scoring, incident reporting, logbooks, etc.], to give evidence that learning has taken place (a brief illustrative sketch of CUSUM scoring follows this list). Trainees therefore not only report the range of activities accomplished but also chart their own progress.
  3. Obtaining points from a credit system: several learning modules take place throughout the curriculum [e.g. Advanced Trauma Life Support (ATLS)] which include end of module examinations and for which credits can be obtained that demonstrate that learning has taken place. This type of credit system, widely used in Europe through the Bologna process, can be applied to diverse learning/teaching tools such as e-learning or simulator-based training.
  4. Summative evaluations and examinations: in order to provide reliable evidence that trainees have achieved a minimum standard in their acquisition of professional competences, a series of summative (or pass/fail) evaluations sanction progress at different levels of training. These assessments are very often used as exit or final assessments and include tools such as multiple choice questionnaires (MCQ), short-answer questions, essays and oral examinations. They are considered to be limited in their appreciation of competencies beyond knowledge and theoretical clinical reasoning,7 that is the ‘Medical Expert’ role. The introduction of the objective structured clinical examination (OSCE) has opened the door to evaluation of a further level of competence (Table 1), the ‘show how’. According to Harden,6 its use in postgraduate training programs allows testing of a wide sample of competencies in controlled situations. Use of simulators as examination tools will be considered in the future.8
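
As a concrete illustration of the CUSUM scoring mentioned in point 2, the short Python sketch below tracks a trainee's cumulative score over successive attempts at a procedure. It is a minimal sketch, not taken from the editorial or from any cited curriculum: it assumes the simple learning-curve formulation in which the score rises by (1 - p0) after a failed attempt and falls by p0 after a successful one, where p0 is a locally agreed acceptable failure rate; the function name, the value of p0 and the example logbook are all hypothetical.

    # Minimal CUSUM sketch for charting procedural learning (illustrative only).
    def cusum_scores(attempts, acceptable_failure_rate=0.2):
        """Return the running CUSUM value after each logged attempt.

        attempts: iterable of booleans, True = successful attempt, False = failure.
        acceptable_failure_rate: p0, the locally agreed acceptable failure rate.
        """
        scores = []
        total = 0.0
        for success in attempts:
            # The score rises by (1 - p0) on failure and falls by p0 on success.
            total += (0.0 if success else 1.0) - acceptable_failure_rate
            scores.append(total)
        return scores

    # Hypothetical logbook: early failures followed by improving performance.
    log = [False, False, True, False, True, True, True, True, True, True]
    for n, value in enumerate(cusum_scores(log), start=1):
        print(f"attempt {n:2d}: CUSUM = {value:+.2f}")
    # A downward-trending curve suggests the failure rate has fallen below p0;
    # a persistent upward trend flags performance that needs attention.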

What is the place and importance of the European specialist's examination?

The first pan-European examination in any medical specialty was created 25 years ago. A group of anaesthesiologists from the former European Academy of Anaesthesiology (EAA)9 established the European Diploma of Anaesthesiology and Intensive Care (EDA) to help harmonise knowledge and training, initially in western Europe; the diploma is now applicable throughout the continent.

This first European examination (a written MCQ) was held in Strasbourg and Oslo in September 1984 with 102 candidates; the pass rate was 44.1%. One year later, 25 of the successful candidates sat the oral examination; the pass rate was 84%. These were the very first candidates to be awarded the EDA.

Since its inception 25 years ago, more than 10 000 anaesthesiologists have sat the EDA part I (written) in 29 centres and the EDA part II (oral) in eight centres (Fig. 1). Latest figures show that in 2009, 1026 candidates sat part I and 255 candidates part II.

Fig. 1

After the amalgamation of all the main organisations representing European anaesthesia into a single powerful body, the European Society of Anaesthesiology (ESA), responsibility for the EDA passed from the academy to the ESA. The EDA has been a notable success and today 26 different medical disciplines have their own European diploma, and two additional disciplines (Neurology and ENT-ORL Head and Neck Surgery) started their European Diploma examination during the course of this year.

Cultural differences among the European nations have resulted in a variety of approaches to assessment in postgraduate medical education.10 In an attempt to bring these methods into line, an advisory body with representatives from all relevant disciplines was established in Glasgow, UK, under the auspices of the UEMS, through an initiative of the European Board of Paediatric Surgery (EBPS). This body, UEMS-CESMA (the Council for European Specialist Medical Assessment), was created to harmonise European board assessments and has provided the boards with guidelines on the conduct and administration of assessments.

What is the reason for the increasing popularity of European examinations?

Harmonisation of standards is the main goal of most examinations. They represent a mark of quality that can be recognised as such by the UEMS4 and serve as a benchmark against which the core knowledge of trainees throughout Europe can be evaluated formally. They allow the incorporation of a comprehensive assessment that combines both summative and formative (in-training) elements, as mentioned above.

The UEMS Policy Statement on Assessments during Specialist Postgraduate Medical Training defines assessment as ‘the component of evaluation of specialist PGMT required to improve trainee learning and training in order to award specialist certification and to assure the quality of training’.4 Assessments, globally speaking, should be valid and reliable;11 the EDA has been recognised as valid (it measures what it purports to measure) and reliable (it achieves this consistently) and is justified in awarding European specialist certification.

The EDA has well-known advantages and limitations. The advantages include a large pool of high-quality multilingual MCQs, robust statistical measures and standardised viva formats with external examiners to maximise objectivity. The major limitation is its dependence on knowledge-based outcomes, together with some communication skills, to the exclusion of other performance-based outcomes.

Consequently, although each European country might consider the EDA to be an important tool, it should not be the sole arbiter of trainees' specialist certification.

What recommendations should we make?

Evaluating a trainee's performance, both clinical and professional, is difficult and encompasses a large variety of tools. As outlined above, and in a recent paper by Chou et al.,7 written and oral examinations are easier to conduct and organise than assessments of the practical management of clinical problems (with written examinations being more reliable and reproducible than oral ones); as a result, the knowledge aspect of the competencies acquired during specialist training tends to dominate. This leads some to ask whether excellent (or very poor) results in these examinations accurately identify an excellent (or incompetent) anaesthesiologist.

Unfortunately, the development and use of other, more performance-based tools are often limited or based on casual and random observation of trainees or on unchecked portfolio/logbook reporting. A further problem lies in the validity and reliability of these different assessment methods, as well as their cost and practical feasibility in the clinical setting.

However, according to the European Board of Anaesthesiology, Reanimation and Intensive Care, the following guidelines should apply to any curriculum:

  1. Clinical competencies should be clearly stated in the curriculum (learning objectives and outcomes).
  2. Clinical competencies should be realistic and measurable, so that they can be assessed.
  3. Multiple evaluation tools should be considered to assess different aspects of competence and performance.
  4. These tools should be used frequently (the more observations, the more reliable the data).
  5. Summative examinations should be considered as one of the important tools (but not the sole tool) to assess and evaluate a trainee's progress and achievement.
  6. Finally, training the faculty who provide medical education, and recognising their contribution, will stimulate creativity.

Acknowledgements

The authors would like to thank the members of the Education and Training Standing Committee of the European Board of Anaesthesiology, Reanimation and Intensive Care for their contribution to this article during our Committee meetings.

References

1 De Roberti E, Knape JTA, McAdoo J, Pagni R. Core curriculum in emergency medicine. Eur J Anaesthesiol 2008; 25:691–692.
2 Frank J. The CanMEDS 2005 Physician Competency Framework. Better Standards. Better Physicians. Better Care. Ottawa: Royal College of Physicians and Surgeons of Canada; 2005.
3 Carlsson C, Keld D, van Gessel E, et al. Education and training in Anaesthesia: revised guidelines by the European Board of Anaesthesiology, Reanimation and Intensive Care – Section and Board of Anaesthesiology, European Union of Medical Specialists. Eur J Anaesthesiol 2008; 25:528–530.
4 UEMS Policy Statement on Assessments during Specialist Postgraduate Medical Training; 2006. http://admin.uems.net/uploadedfiles/801.doc. [Accessed February 2010]
5 Miller GE. The assessment of clinical skills/competence/performance. Acad Med 1990; 65:S63–67.
6 Harden RM. Trends and the future of postgraduate medical education. Emerg Med J 2006; 23:798–802.
7 Chou S, Cole G, McLaughlin K, et al. CanMEDS evaluation in Canadian postgraduate training programmes: tools used and programme director satisfaction. Med Educ 2008; 42:879–886.
8 Murray DJ, Boulet JR, Avidan M, et al. Performance of residents and anesthesiologists in a simulation-based skill assessment. Anesthesiology 2007; 107:705–713.
9 Zorab JSM. European perspective (surgery). Med Groups 1994: 156a.
10 Karle H, Nystrup J. Comprehensive evaluation of specialist training: an alternative to board examinations in Europe? Med Educ 1995; 29:308–316.
11 Wragg A, Wade W, Fuller G, et al. Assessing the performance of specialist registrars. Clin Med 2003; 3:131–134.
© 2010 European Society of Anaesthesiology