Anesthesiology:
Clinical Investigation

The Validity of Performance Assessments Using Simulation

Devitt, J. Hugh M.D., M.Sc., F.R.C.P.C.*; Kurrek, Matt M. M.D.†; Cohen, Marsha M. M.D., M.H.Sc., F.R.C.P.C.‡; Cleave-Hogg, Doreen B.A., M.A., Ph.D.§

Abstract

Background: The authors wished to determine whether a simulator-based technique for assessing clinical performance could demonstrate construct validity, and to determine subjects' perception of the realism of the evaluation process.
Methods: Research ethics board approval and informed consent were obtained. Subjects were 33 university-based anesthesiologists, 46 community-based anesthesiologists, 23 final-year anesthesiology residents, and 37 final-year medical students. The simulation involved patient evaluation, induction, and maintenance of anesthesia. Each problem was scored as follows: no response to the problem, score = 0; compensating intervention, score = 1; and corrective treatment, score = 2. Examples of problems included atelectasis, coronary ischemia, and hypothermia. After the simulation, participants rated the realism of their experience on a 10-point visual analog scale (VAS).
Results: After testing for internal consistency, a seven-item scenario remained. The mean proportion of correct responses (out of 7 items) for each group was as follows: university-based anesthesiologists = 0.53, community-based anesthesiologists = 0.38, residents = 0.54, and medical students = 0.15. The overall group differences were significant (P < 0.0001). The overall realism VAS score was 7.8. There was no relation between the simulator score and the realism VAS (R = −0.07, P = 0.41).
Conclusions: The simulation-based evaluation method was able to discriminate between practice categories, demonstrating construct validity. Subjects rated the realism of the test scenario highly, suggesting that familiarity or comfort with the simulation environment had little or no effect on performance.
ArticlePlus

Supplemental digital content for this article is available at:
* http://links.lww.com/ALN/A97
The assessment of clinical competence is an imperfect art. 1,2 The best estimates of competence at present are measures of clinical performance, where "competence" describes a physician's capabilities and performance reflects that physician's actual practice. 3
Although an evaluation tool must be practical and reliable, it must also be valid before being adopted into widespread use. 1,4 A test is said to be valid if it can actually measure what it is intended to measure. 1,4 Validity can be assessed by a number of different methods. A test is said to have construct validity if the test results are in keeping with expectations. 1,4,5 We previously demonstrated construct validity using a simple evaluation scheme in a small number of subjects. 6 This study focused on construct validity of a scenario using a large number of subjects with a wide range of expertise.
Over the last 25 years, interest in simulation-based training has grown rapidly. 7 More recently, medical simulation has been used to develop evaluation methods for anesthesiologists. 6,8,9 Advantages of the simulation environment include the absence of risk to patients; scenarios that allow exploration of uncommon but serious clinical problems; scenarios that can be standardized for comparisons across practitioners; errors that can be allowed to unfold, where a clinical setting would require intervention by a supervisor; and performance that can be measured objectively. 10,11
Previous studies have demonstrated that simulator-based evaluation processes have acceptable reliability. 6,8,9 However, such processes still require validation. The objective of the current study was to determine whether a simulator-based evaluation technique assessing clinical performance could discriminate among levels of training in a large and diverse group of anesthesiologists and thus demonstrate construct validity. In addition, we asked participants to rate the realism of their experience during the simulation-based evaluation process.

Methods

The study was approved by the Sunnybrook Health Sciences Centre research ethics board (Toronto, Ontario, Canada), and written informed consent was obtained from all subjects. In addition, all participating subjects were asked to sign a statement guaranteeing confidentiality of the content of the evaluation scenario and problems.
Our simulation center consists of a mock operating room containing an anesthesia gas machine, patient physiologic monitors, anesthesia drug cart, operating table, instrument table, and electrocautery machine. Drapes, intravenous infusions, and surgical instruments are used to enhance the realism of the simulation. The details of our simulation center have been described elsewhere. 8,12 The patient mannequin (Eagle Patient Simulator, version 2.4.1; Eagle Simulation, Inc., Binghamton, NY) was positioned on the operating table. The role of the circulating nurse was scripted and acted by a knowledgeable research assistant. The circulating nurse was instructed to provide the appropriate responses during the simulation and was prompted as necessary by means of a radio frequency communication system. The surgeon was a mannequin with a built-in speaker operated by the simulation director. Except where scripted, the “surgeon” only responded to direct questions or, on occasion, asked questions to clarify ambiguous responses or statements of the participants.
None of the subjects had previous hands-on simulator experience, although residents and teaching staff may have heard anecdotal reports of activities in the simulation center. Subjects were drawn from five groups: (1) final-year anesthesiology residents, (2) medical students, (3) community-based anesthesiologists, (4) university-based (academic) anesthesiologists, and (5) anesthesiologists referred for practice assessment. The first group consisted of final-year anesthesia residents (5 years of postgraduate training) who were within 6 months of finishing their residency. The second group was medical students in their final year of training. (All medical students take a mandatory 2-week course in anesthesiology at the University of Toronto and participated in the study during the second week of their anesthesia rotation.) The third group consisted of community-based anesthesiologists from the greater Toronto area, all of whom were engaged in active practice. The fourth group was drawn from the teaching faculty of the Department of Anesthesia at the University of Toronto; all members of this group were engaged in independent clinical practice. The fifth and final group consisted of anesthesiologists identified as having practice deficiencies who had been referred by their practice hospitals or provincial licensing authorities. Subjects in all five groups were volunteers, and the latter three groups were paid a stipend for their participation. We hypothesized that scores from our evaluation process would be highest among university-based anesthesiologists, followed by anesthesiology residents, community-based anesthesiologists, practice-referred anesthesiologists, and medical students.
Demographic data, including age, training (residency and clinical fellowship), and location of practice, were collected from all participants. All participants received a 30-min familiarization session covering the mannequin, gas machine, physiologic monitor, and simulation facility. All participants were given the same scenario and patient information in the form of a preoperative assessment form, electrocardiogram, and chest radiograph results. All subjects were asked to verbalize their thoughts and actions during the simulation as if a medical student were present.
The simulated patient in the scenario was a 66-yr-old man weighing 79 kg who presented with a diagnosis of carcinoma of the colon for an elective left hemicolectomy. The anticipated duration of surgery was 2.5 h. The patient had no allergies, and his current medications included isosorbide dinitrate and diltiazem. His medical history was remarkable for an uneventful myocardial infarction 5 years previously with residual stable class I postinfarction angina, a 5-year history of mild hypertension, and a 30 pack-year history of smoking. The preoperative physical examination was unremarkable, and preoperative hematology and biochemistry were normal. The preoperative electrocardiogram documented normal sinus rhythm, a normal axis, and QS complexes in leads II, III, and aVF. The preoperative chest radiograph was interpreted as essentially normal, with mild hyperinflation consistent with chronic obstructive lung disease.
A 1.5-h clinically realistic scenario was developed, containing nine anesthetic problems (items). The problems were developed by a panel of four clinical anesthesiologists who were actively engaged in clinical practice at large university-based residency training programs (in the United States and Canada) and certified in anesthesiology by the American Board of Anesthesiology. All members of the panel were knowledgeable about the capabilities of simulator technology. The test items were chosen after panel discussion to reflect a variety of clinical problems, taking into consideration the capabilities and realism of the simulator hardware and software. The items were designed to evaluate problem recognition, formulation of a medical diagnosis, and institution of treatment. The development of each clinical problem consisted of defining the problem, determining the appropriate computer settings, and developing a script for the roles of the surgeon and circulating nurse. Each item was reproducible so that the scenario and problems were standardized. The clinical problems are described and identified in table 1 (a detailed description of each item is presented in an appendix on the Anesthesiology Web site). Problems were presented sequentially over the 1.5-h period of the simulated anesthetic. There was a specified interval (5 min) between problems during which the patient's physiologic parameters were returned to normal, signifying the end of the problem and providing a period of relative inactivity before introduction of the next item.
For each item, a rating scale defined the appropriate score based on preset criteria (table 1). No response to the situation by the participant resulted in a score of 0; undertaking a compensating intervention resulted in a score of 1; and corrective treatment resulted in a score of 2 (the "correct" score), as recorded by the observer. A compensating intervention was defined as a maneuver undertaken to correct perceived abnormal physiologic values. A corrective treatment was defined as definitive management of the presenting medical problem. Appropriateness of compensating interventions and corrective treatments was defined by consensus after referencing standard anesthesiology textbooks.
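For illustration, the rubric can be encoded as a simple mapping from observed responses to scores. This is a minimal Python sketch, not the authors' scoring instrument; the item names follow the examples given in the text, and the data layout and helper function are hypothetical.

```python
# Minimal sketch of the three-point rubric described above (hypothetical layout).
# 0 = no response, 1 = compensating intervention, 2 = corrective treatment.

NO_RESPONSE, COMPENSATING, CORRECTIVE = 0, 1, 2

# One rater's scores for a single subject; item names follow examples in the text.
subject_scores = {
    "atelectasis": CORRECTIVE,          # definitive management of the problem
    "coronary ischemia": COMPENSATING,  # corrected abnormal values only
    "hypothermia": NO_RESPONSE,         # problem not addressed
    # ... remaining items from table 1
}

def proportion_correct(scores: dict) -> float:
    """Proportion of items receiving the 'correct' score of 2."""
    return sum(v == CORRECTIVE for v in scores.values()) / len(scores)
```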
All participants were asked to anesthetize the “patient” for an elective surgical procedure as the first case at the beginning of the day. All subjects were expected to assess the patient, check anesthetic drugs and equipment, and induce and provide maintenance anesthesia for the scenario. All external cues were standardized, rehearsed, and presented in a similar manner to all study participants.
Each subject was evaluated by one of two trained raters certified in anesthesiology by The Royal College of Physicians and Surgeons of Canada and the American Board of Anesthesiology. A rating sheet that detailed possible responses and scores was given to the evaluators for each participant. The observers did not know the background of each candidate in advance because all appointments and preliminary data collection were handled by a research assistant. Subjects presented on a first-come, first-served basis over the course of the study, and it was not possible to predict a subject's category from the schedule. All participants wore operating room headgear and masks, but it was not possible to completely blind the raters to the group to which a participant belonged. All performances were recorded on videotape for subsequent review and assessment.
After completion of the simulator scenarios, participants were asked to rate the realism of their experience on a 10-point visual analog scale (VAS), where a rating of 0 indicated an unrealistic experience and a rating of 10 indicated a completely realistic experience. A discussion of the experience was undertaken with each subject at the end of the simulation after each subject had rated the realism of the experience.
Statistical Analysis
Age, years of training, and years in practice were compared for each group using one-way analysis of variance. Internal consistency of the test items was estimated using the Cronbach coefficient α. A Cronbach coefficient α > 0.6 was considered adequate for internal consistency. 6 An item analysis was performed by recalculating the Cronbach coefficient α with each item deleted to determine whether any of the items in the scenario contributed to poor internal consistency. Items that reduced the overall Cronbach α on the item analysis were eliminated, and the Cronbach coefficient α was reestimated for the remaining items.
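The Cronbach coefficient α follows the standard form α = k/(k − 1) · (1 − Σσᵢ²/σ_total²), where k is the number of items, σᵢ² is the variance of item i, and σ_total² is the variance of the total score. As a minimal sketch of this calculation and of the item-deleted analysis (not the authors' analysis code), assuming a subjects × items matrix of 0/1/2 scores:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach coefficient alpha for a subjects x items matrix of scores."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def alpha_if_item_deleted(scores: np.ndarray) -> list:
    """Recompute alpha with each item removed in turn (item analysis)."""
    return [cronbach_alpha(np.delete(scores, j, axis=1))
            for j in range(scores.shape[1])]
```

An item whose deletion raises α above the overall value is reducing internal consistency; in this study, dropping problems 1 and 2 on this basis raised α from 0.62 to 0.69 (see Results).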
We determined the difference in simulator scores between the four groups as follows. For each subject, the proportion of items scored as 2 was calculated. The proportions for participants in each group were summed and divided by the number of subjects in the group to obtain the group mean proportion. The difference between the mean proportions of correct answers across the four groups was determined using a one-way analysis of variance followed by pairwise comparisons (Tukey).
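A sketch of this comparison in Python, using illustrative per-subject proportions (the individual data are not reported here); scipy's f_oneway performs the one-way analysis of variance and statsmodels' pairwise_tukeyhsd the pairwise comparisons:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Per-subject proportions of items scored 2 (illustrative values only;
# the reported group means were 0.53, 0.54, 0.38, and 0.15).
university = np.array([0.57, 0.43, 0.57, 0.71, 0.43])
residents = np.array([0.57, 0.71, 0.43, 0.57, 0.43])
community = np.array([0.43, 0.29, 0.43, 0.43, 0.29])
students = np.array([0.14, 0.29, 0.00, 0.14, 0.14])

groups = [university, residents, community, students]
f_stat, p_value = f_oneway(*groups)  # one-way ANOVA across the four groups

# Tukey pairwise comparisons on the pooled data
scores = np.concatenate(groups)
labels = np.repeat(["university", "resident", "community", "student"],
                   [len(g) for g in groups])
print(pairwise_tukeyhsd(scores, labels))
```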
Mean realism VAS scores for all groups were compared using a one-way analysis of variance followed by pairwise comparisons, with P < 0.05 considered significant. Years in practice for those individuals engaged in active clinical practice (community- and university-based anesthesiologists, and those referred for practice assessment) were compared with the simulation score using a Pearson correlation coefficient. The simulator score was compared with the realism VAS using the Pearson correlation coefficient, with P < 0.05 considered significant.
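Both correlation analyses map onto scipy.stats.pearsonr; a sketch with placeholder arrays (not study data):

```python
from scipy.stats import pearsonr

# Placeholder arrays; the study reported R = -0.49 (years in practice vs. score)
# and R = -0.07 (score vs. realism VAS).
years_in_practice = [2, 8, 15, 22, 30]
simulator_score = [0.57, 0.43, 0.43, 0.29, 0.29]
realism_vas = [7.5, 8.0, 7.0, 8.5, 7.8]

r_practice, p_practice = pearsonr(years_in_practice, simulator_score)
r_realism, p_realism = pearsonr(simulator_score, realism_vas)
```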

Results

A total of 142 subjects participated in the study. There were significant differences in mean age between the groups (table 2), with medical students and residents being younger than all other groups. The anesthesiologists referred for practice assessment were the oldest, and university-based anesthesiologists were younger than those in the community. There was no significant difference in length of training for those engaged in clinical practice, but anesthesiologists referred for practice assessment had been in practice longer than those in community- or university-based practices. All subjects signed and agreed to the terms of the confidentiality statement.
The Cronbach coefficient α for the nine-item evaluation process was 0.62. The item analysis is presented in table 3. Problems 1 (carbon dioxide canister leak) and 2 (missing inspiratory valve) reduced overall internal consistency. Removing these problems increased the Cronbach coefficient α to 0.69. Because these items demonstrated a lack of internal consistency (i.e., lack of reliability), they were dropped from further consideration, leaving seven items in the analysis.
The practice assessment group was not considered in further group comparisons because of the small numbers (n = 3). There was a significant difference in the mean proportion of correct scores across the remaining four groups (P < 0.0001; fig. 1). University anesthesiologists scored significantly higher than community anesthesiologists (P < 0.003) and medical students (P < 0.0001). Community anesthesiologists scored significantly higher than medical students (P < 0.0001). Residents scored significantly higher than community anesthesiologists (P < 0.005) and medical students (P < 0.0001).
The distribution of scores by test item and group is presented in figure 2. There was a weak but significant correlation between years in clinical practice and simulator score (R = −0.49, P = 0.0001).
The overall mean realism VAS score was 7.8. No relation was found overall between the simulator score and the participants' perception of realism as assessed by the VAS rating (R = −0.07, P = 0.41; fig. 3). Each group rated the realism of their evaluation experiences as follows: university anesthesiologists, 7.3 ± 1.2 (mean ± SD); community anesthesiologists, 7.7 ± 1.3; residents, 8.1 ± 1.2; and medical students, 8.2 ± 1.0. Group differences with respect to realism were significant (P < 0.01), with medical students rating the evaluation as more realistic than university-based anesthesiologists.

Discussion

Scores across groups differed significantly, with university-based anesthesiologists and residents scoring significantly higher than all other groups. This was not unexpected because the anesthesiology residents were in their final year of training and deemed eligible by the program director to take the national specialty examination in anesthesiology. In addition, we noted that community-based anesthesiologists had significantly different scores than medical students. There were significant differences in age and the number of years in practice for subjects engaged in clinical practice.
All groups rated the simulation evaluation environment as realistic. The lack of correlation between perceived realism of the simulator environment and the score achieved in the simulator-based evaluation suggests that familiarity or comfort with the simulation environment had little or no effect on performance.
A test is said to have construct validity if the test results are in keeping with expectations. We came close to demonstrating our construct in that university-based anesthesiologists and residents scored higher than all other groups, and community-based anesthesiologists scored higher than medical students. We were able to show construct validity in that an evaluation system using the simulator differentiated a large group of individuals on the basis of clinical experience or training. 1,4,5 In our practice setting, most university-based anesthesiologists are actively engaged in independent clinical practice, with, on average, 90% of their time devoted to clinical activities and a minimum clinical activity of at least 50%. Teachers of residents reported that 10% of their clinical activity is conducted with residents; the remaining 90% is conducted on an independent basis, as there are no certified nurse anesthetists in the Canadian system. In addition, university-based anesthesiologists are exposed to more frequent rounds and educational activities than their community-based counterparts and are closer to research and new developments. Although university-based anesthesiologists may have been closer to simulator development and use, the university cohort of subjects was drawn from teaching hospitals that were physically remote from the simulator location.
The internal consistency of all but two items was acceptable in our study. One of our items (missing inspiratory valve) demonstrated poor internal consistency in an earlier report, whereas the other item (carbon dioxide canister leak) had acceptable internal consistency in the same study. 6 There are several explanations for this discrepancy. First, the number of subjects participating in this study was five times that of the earlier study, increasing our power. Second, the breadth of experience and training of the subjects in this study encompassed a wider range than that of the previous study. 6,13 These differences in findings between the two studies suggest that internal consistency must be reviewed for each new evaluation process or study population.
A possible criticism of our study is that the simulation director (computer operator) also scored the subjects' performance. To limit bias, the scoring system (only three possible scores) was simple and clear, and the scoring criteria depended not on subjective assessments but on clear actions by the participants. As a result, interrater reliability of our evaluation process was excellent. 8 A multipoint scoring system for each item would likely require greater interpretation on the part of the evaluator, resulting in possible bias and poorer interrater reliability.
It has been suggested that an evaluator independent of the simulation director might reduce any potential for bias. However, an additional evaluator adds other complexities. Review of videotapes by an independent observer requires additional personnel, and the videotape presents a limited window for the observer: it may obscure actions and conversations that were obvious to the simulation director in the control room. We have studied interrater reliability between scores assigned by the simulation director during observation of the live scenario and scores generated by observing the videotape of the same event. Although there was good to excellent interrater agreement on all of our test items, several items requiring gas machine manipulation tended to have lower, but still acceptable, agreement compared with other items in the scenario. Our review and interpretation of these discrepancies suggested that the window presented by the videotape limited the information available to the evaluator. 13,14 The use of multiple cameras and high-definition video technology could overcome many of these disadvantages.
Several investigators have documented that simulator-based evaluation methods can differentiate subjects on the basis of training and experience. In a previous report, our group documented significant scoring differences between trainees and faculty using a similar simulator-based evaluation tool. 6 Gaba and DeAnda 11 demonstrated differences between first- and second-year anesthesiology residents in time to correct critical incidents, but not in time to detect them. Byrne and Jones 15 demonstrated that anesthesiologists with less than 1 year of experience were significantly slower in dealing with anesthetic emergencies than those with greater experience. Both of the latter two studies noted wide variation in responses by all groups of subjects. 11,15 These previous studies support the construct that increased clinical experience should improve performance on simulator-based evaluation processes.
Clinical practice assessment by direct observation has been used to assess performance and competence when a practicing anesthesiologist's competence has been questioned. 16 This method of evaluation has not been subjected to rigorous testing for reliability or validity. 17 The nature of the clinical anesthesia practice of those referred for practice assessment, the elective nature of the scheduled assessment period, and the time constraints placed on the period of clinical observation result in the practice assessment being conducted on healthy elective patients. 16 Anesthesiology has advanced to the point that major adverse events are rare, so individual practitioners are unlikely to encounter such events during the assessment period. 18,19 There are a number of situations and emergencies that all clinically active anesthesiologists are expected to handle regardless of their practice situation, yet these situations are unlikely to occur during the time-limited period of clinical observation. A simulator-based assessment process allows the creation of relevant, standardized emergencies and critical incidents for use as test situations. In the interest of patient safety, critical events cannot be left untreated in real life to see whether the anesthesiologist undergoing practice assessment responds appropriately. A simulator-based assessment process allows observation of performance during critical incidents without putting a patient at risk.
We have documented construct validity of a simulator-based evaluation process. Nonetheless, the findings of this study will require comparison with other established and validated methods of evaluating the performance of practicing anesthesiologists to document criterion validity. Agreement on what constitutes an established evaluation process (gold standard) for practicing anesthesiologists may be hard to obtain. Finally, we caution that the findings of our study apply only to our scenario and cannot necessarily be generalized to other simulation-based evaluation processes. We believe that our simulator-based evaluation shows promise as an adjunct to existing evaluation processes for practicing anesthesiologists.
The authors thank Melissa Shaw, R.R.T. (Department of Respiratory Therapy), and Chris Lewczuk, R.R.T. (Department of Respiratory Therapy), for help with data collection during the study; and John Paul Szalai, Ph.D. (Director, Department of Research Design and Biostatistics), and Donna Ansara, B.A. (Centre for Research in Women’s Health), for assistance in performing the statistical analysis (all at Sunnybrook and Women’s College Health Sciences Centre, Toronto, Ontario, Canada).

References

1. Eagle CJ, Martineau R, Hamilton K: The oral examination in anaesthetic resident evaluation. Can J Anaesth 1993; 40: 947–53

2. Sivarajan M, Miller E, Hardy C, Herr G, Liu P, Willenkin R, Cullen B: Objective evaluation of clinical performance and correlation with knowledge. Anesth Analg 1984; 63: 603–7

3. Norman GR: Defining competence: A methodological review, Assessing Clinical Competence. Edited by Neufeld VR, Norman GR. New York, Springer Publishing Co., 1985, pp 15–35

4. Neufeld VR: An introduction to measurement properties, Assessing Clinical Competence. Edited by Neufeld VR, Norman GR. New York, Springer Publishing Co., 1985, pp 39–50

5. Nunnally JC: Validity, Psychometric Theory, 2nd edition. New York, McGraw Hill, 1978, pp 86–116

6. Devitt JH, Kurrek MM, Cohen MM, Fish K, Fish P, Noel AG, Szalai J-P: Testing internal consistency and construct validity during evaluation of performance in a patient simulator. Anesth Analg 1998; 86: 1160–4

7. Gaba DM: Human work environment and simulators, Anesthesia, 5th edition. Edited by Miller RD. Philadelphia, Churchill Livingstone, 2000, pp 2613–68

8. Devitt JH, Kurrek MM, Cohen MM, Fish K, Fish P, Murphy PM, Szalai J-P: Testing the raters: Inter-rater reliability of standardized anaesthesia simulator performance. Can J Anaesth 1997; 44: 924–8

9. Gaba DM, Howard SK, Flanagan B, Smith BE, Fish KJ, Botney R: Assessment of clinical performance during simulated crises using both technical and behavioral ratings. Anesthesiology 1998; 89: 8–18

10. Schwid HA, O'Donnell D: The anesthesia simulator-recorder: A device to train and evaluate anesthesiologists' responses to critical incidents. Anesthesiology 1990; 72: 191–7

11. Gaba DM, DeAnda A: The response of anesthesia trainees to simulated critical incidents. Anesth Analg 1989; 68: 444–51

12. Kurrek MM, Devitt JH: The cost for construction and operation of a simulation centre. Can J Anaesth 1997; 44: 1191–5

13. Kapur PA, Steadman RH: Patient simulator competency testing: Ready for takeoff? Anesth Analg 1998; 86: 1157–9

14. Kurrek MM, Devitt JH, Cohen M, Szalai J-P: Inter-rater reliability between live-scenarios and video recordings in a realistic simulator (abstract). J Clin Monit 1999; 15: 253

15. Byrne AJ, Jones JG: Responses to simulated anaesthetic emergencies by anaesthetists with different durations of clinical experience. Br J Anaesth 1997; 78: 553–6

16. Devitt JH, Yee DA, deLacy JL, Oxorn DC: Evaluation of anaesthetic practitioner clinical performance (abstract). Can J Anaesth 1998; 45: A56-B

17. Wakefield J: Direct observation, Assessing Clinical Competence. Edited by Neufeld VR, Norman GR. New York, Springer Publishing Co., 1985, pp 51–70

18. Duncan PG, Cohen MM, Yip R: Clinical experiences associated with anaesthesia training. Ann RCPSC 1993; 26: 363–7

19. Cohen MM, Duncan PG, Pope WDB, Biehl D, Tweed WA, MacWilliam L, Merchant RN: The Canadian four-centre study of anaesthetic outcomes: II. Can outcomes be used to assess the quality of anaesthesia care? Can J Anaesth 1992; 39: 430–9


© 2001 American Society of Anesthesiologists, Inc.
