Transthoracic echocardiography (TTE) is a noninvasive and readily available diagnostic technique that is seeing increased perioperative use by anesthesiologists.1 A focused, goal-directed examination can be performed in less than 10 minutes, has been shown to alter patient management, and may improve outcomes.2–4 A prospective observational study demonstrated that a focused TTE performed by anesthesiologists resulted in alterations in management in 84% of patients, and the results correlated with cardiologists’ formal results in 90% of cases.5 In the hands of a trained anesthesiologist, clinically useful information can be obtained quickly when there is insufficient time for a formal TTE examination.
Currently, there is no formalized TTE training available for anesthesiologists. Training in TTE requires mastery of both cognitive and technical skills. Traditionally, this is achieved through didactics, observation, and independently performed examinations.6 However, the amount of hands-on training in traditional TTE education can be limited by the availability of trainers (sonographers or cardiologists) as well as by the availability of patients or subjects on whom trainees can perform TTE examinations.
A proposed alternative to real-time patient-based learning is simulation-based training that allows anesthesiologists to learn complex concepts and procedures.7–11 Recently, a TTE simulation tool has become available (Heartworks, Inventive Medical Ltd, London, UK). Simulation allows interactive TTE learning using a virtual 3-dimensional model of the heart and may aid in the acquisition of the cognitive and technical skills needed to perform TTE.7 The ability to link probe manipulation, cardiac anatomy, and echocardiographic images using a simulator has been shown to be an effective model for training anesthesiology residents in transesophageal echocardiography.9
We hypothesized that simulation-based TTE training would be more effective than traditional teaching methods, and that the educational benefits of TTE simulation compared to traditional didactics would be sustained with subsequent addition of volunteer subject training in both groups. The goal of this prospective randomized study was to examine the use of simulation in teaching basic principles of echocardiography and TTE skills of image acquisition and anatomy recognition to anesthesiology residents.
METHODS
Study Protocol
After IRB approval and written informed consent, 61 residents in anesthesia clinical training years 1 to 3 (CA-1 to -3) at the David Geffen School of Medicine at the University of California at Los Angeles enrolled in this study. Data were analyzed on 59 residents; 2 did not complete the study and their data were removed from analysis. Before enrollment, all eligible residents were randomized to either a simulator or control group within their training year using the randomization function in Microsoft Excel. A written pretest, created by attending cardiac anesthesiologists, was administered to all residents. It consisted of 20 multiple-choice questions covering basic principles of echocardiography, indications for TTE, common transthoracic echocardiography views, cardiac anatomy identification, and clinical correlations (Appendix 1). Pretest scores were analyzed to ensure there were no significant differences in pretraining knowledge between the simulator and control groups. After the pretest, all residents participated in the first TTE training session followed by a written and volunteer subject posttest. A subset of residents then participated in a second training session followed by a second volunteer subject posttest (Fig. 1).
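The study performed randomization within each training year using Microsoft Excel's randomization function. Purely as an illustration of such stratified randomization, the minimal Python sketch below assigns a hypothetical roster of residents to simulator or control groups within each training year; the resident IDs, year assignments, and seed are assumptions and not study data.

```python
import random

# Hypothetical roster: resident ID -> training year (CA-1 to CA-3).
# The IDs and year assignments below are illustrative, not study data.
roster = {
    "R01": "CA-1", "R02": "CA-1", "R03": "CA-1", "R04": "CA-1",
    "R05": "CA-2", "R06": "CA-2", "R07": "CA-2", "R08": "CA-2",
    "R09": "CA-3", "R10": "CA-3", "R11": "CA-3", "R12": "CA-3",
}

def randomize_within_year(roster, seed=42):
    """Assign residents to 'simulator' or 'control' within each training year."""
    rng = random.Random(seed)
    assignments = {}
    # Group residents by training year (the stratification variable).
    by_year = {}
    for resident, year in roster.items():
        by_year.setdefault(year, []).append(resident)
    # Shuffle each stratum and split it into two equal-sized groups.
    for year, residents in by_year.items():
        rng.shuffle(residents)
        half = len(residents) // 2
        for i, resident in enumerate(residents):
            assignments[resident] = "simulator" if i < half else "control"
    return assignments

if __name__ == "__main__":
    for resident, group in sorted(randomize_within_year(roster).items()):
        print(resident, group)
```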
Appendix 1: Concepts Assessed by Pre- and Posttesting.
Figure 1: Flowchart of transthoracic echocardiography (TTE) training sessions and skills assessment.
First TTE Training Session
The first training session lasted 45 minutes for each group. Subjects were instructed on basic echocardiography principles, cardiac anatomy, indications for TTE, limitations of TTE, standard TTE views, probe manipulation, and the ability of TTE to aid in the diagnosis of volume status and ischemia. The 5 TTE views subsequently tested were taught during this session. The same information was given to each resident, and the same instructors gave the lectures to both the simulation and the control groups. The control group training session consisted of a video didactic created by an attending cardiac anesthesiologist. During the video, an attending cardiac anesthesiologist was present to answer questions. The simulator group training session used a Transthoracic Echocardiography Simulator (Heartworks, Inventive Medical Ltd) (Appendix 2) and included a 15-minute introduction covering the indications for TTE, limitations of TTE, and basic echocardiography principles. Residents were then split into pairs for 30 minutes of simulator time. Each participant had 15 minutes of directed individual simulator time while the other participant observed, and then they switched roles.
Appendix 2: Sample views of transthoracic echocardiography simulator showing probe manipulation, 3-dimensional simulated cardiac anatomy, and echo-anatomic image correlation.
Assessment of TTE Skills After First Training Session
After the first training session, each resident took a written posttest on the subject matter taught in the training session and performed a volunteer subject TTE examination within 3 to 5 days. The written posttest comprised 20 questions testing cognitive skill acquisition, including basic principles of echocardiography, indications for TTE, common transthoracic echocardiography views, cardiac anatomy identification, and clinical correlations (Appendix 1). For the volunteer subject posttest, residents obtained 5 TTE views in the parasternal and apical imaging windows. These 2 imaging windows were chosen because they provide the highest yield for obtaining hemodynamic information during implementation of abbreviated Focused Assessed Transthoracic Echocardiography (FATE) protocols in postsurgical and intensive care patients.12,13 The 5 views included left parasternal long-axis, left parasternal right ventricular inflow, parasternal midpapillary left ventricle short-axis, apical 4-chamber, and apical 2-chamber. All examinations were done on 1 thin healthy volunteer in the department with adequate TTE imaging windows; the images were obtained with a GE Vivid S6 echo system (General Electric, Milwaukee, WI). Each resident had 10 minutes to obtain the 5 views. A cardiac anesthesiologist assisted with technical aspects of recording the acquired images for subsequent anatomy identification but did not give any guidance on how to acquire a specific view. Once the resident acquired his or her “best” view, the image was frozen, and the resident identified the cardiac anatomical structures described in Appendix 3. Two blinded expert cardiac anesthesiologists trained in TTE served as graders. The first grader, blinded to group assignment, was present during the volunteer subject examination; the second grader viewed the images offline and was blinded to both resident identity and group assignment. Each grader scored every image based upon visualization of the anatomic structures comprising the standard view and the overall quality of the image. Criteria for the score are included below.
Appendix 3: Standardized patient TTE skills assessment sheet.
Grading of Residents on TTE Views (Appendix 3: Grading Sheet)
TTE examinations were graded by 2 experts, blinded to group assignment, on the ability to acquire the correct image, image quality, anatomy identification, and time required to attain proper imaging. View quality was scored from 0 (worst) to 5 (best), for a total possible score of 25 points. The score was based on an assessment of the quality of the image as well as of the relevant cardiac structures included in each view. The number of views (of 5) that each resident correctly obtained was reported as the percentage of correct views obtained.
Once each view was acquired, the resident was prompted to identify the cardiac structures within that view, with a total anatomy identification score of 25 points possible. The time for each view was noted with the maximum time allowed being 120 seconds. If a participant did not obtain the view, a time of 120 seconds was assigned, and the participant was assigned a score of 0 points for anatomic identification and image quality.
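To make the scoring rules above concrete, the following minimal Python sketch aggregates per-view results according to the stated rubric (0 to 5 points per view for image quality, up to 25 points for anatomy identification overall, a 120-second cap per view, and 120 seconds with 0 points when a view is not obtained). The record layout, field names, and example numbers are hypothetical illustrations, not part of the study protocol.

```python
from dataclasses import dataclass
from typing import List

MAX_TIME_SECONDS = 120  # maximum time allowed per view

@dataclass
class ViewResult:
    """Result for a single TTE view (hypothetical record layout)."""
    name: str
    obtained: bool
    quality: int          # 0 (worst) to 5 (best), graded by a blinded expert
    anatomy_points: int   # correctly identified structures for this view
    time_seconds: float   # time taken to acquire the view

def score_examination(views: List[ViewResult]) -> dict:
    """Aggregate per-view results into the summary measures used in the study."""
    quality_total = 0
    anatomy_total = 0
    times = []
    correct = 0
    for v in views:
        if v.obtained:
            quality_total += v.quality
            anatomy_total += v.anatomy_points
            times.append(min(v.time_seconds, MAX_TIME_SECONDS))
            correct += 1
        else:
            # Missed view: 0 points for quality and anatomy, 120 s assigned.
            times.append(MAX_TIME_SECONDS)
    return {
        "image_quality_score": quality_total,           # maximum 25
        "anatomy_identification_score": anatomy_total,  # maximum 25
        "percent_correct_views": 100.0 * correct / len(views),
        "mean_time_per_view_s": sum(times) / len(times),
    }

# Example with hypothetical numbers for the 5 tested views.
example = [
    ViewResult("parasternal long-axis", True, 4, 6, 75),
    ViewResult("parasternal RV inflow", True, 3, 2, 90),
    ViewResult("parasternal short-axis", True, 4, 5, 60),
    ViewResult("apical 4-chamber", True, 3, 5, 80),
    ViewResult("apical 2-chamber", False, 0, 0, 120),
]
print(score_examination(example))
```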
Second TTE Training Session
Three weeks after the first training session, a convenience sample of 21 residents from the original cohort was given a second training session. Residents remained in their original groups, and neither subgroup’s pre- or posttest scores diverged significantly from those of its original cohort. Each group had an additional 45 minutes of TTE training. In the control group, the residents observed a 15-minute TTE examination on a volunteer subject performed by an attending cardiac anesthesiologist. The control group then had 15 minutes of directed training on a volunteer subject and 15 minutes spent observing their colleagues. The 5 TTE views subsequently tested were again taught during this session.
The simulation group observed a 15-minute TTE examination performed by an attending cardiac anesthesiologist on the simulator, which covered the same topics presented to the control group during their observed TTE examination. The simulator group then had 15 minutes of directed training on a volunteer subject and 15 minutes spent observing their colleagues.
Assessment of TTE Skills After Second Training Session
The residents took another volunteer subject posttest 3 to 5 days after the second TTE training session with the same guidelines and grading system as the first volunteer subject posttest. TTE examinations were graded by 2 experts blinded to the assigned group on the ability to acquire the correct image, image quality, anatomy identification, and time required to attain proper imaging. All examinations were done on 1 standardized patient and the images were obtained with a GE Vivid S6 echo system (General Electric, Milwaukee, WI). The same 2 attending cardiac anesthesiologists (A.M., J.H.) graded both the first and second posttests.
Statistical Analysis
Data are presented as mean ± SD. Statistical analyses were performed using SPSS 16.0 (SPSS, Chicago, IL). Comparisons between simulator and control groups for pre- and posttest scores and for time to obtain views were performed using 2 × 3 ANOVA methods. Tukey’s test for multiple comparisons was performed as a post hoc test for determining significance. Nonparametric data, including image quality and anatomy score by view, were compared using the Kruskal-Wallis test. A P value <0.05 was considered statistically significant.
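As a rough illustration of the analysis described above (not the authors’ actual code), the sketch below shows how a 2 (group) × 3 (training year) ANOVA with Tukey post hoc comparisons and a Kruskal-Wallis test could be run in Python with statsmodels and SciPy. The data frame, column names, and values are hypothetical.

```python
import pandas as pd
from scipy import stats
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical posttest scores (percent) by group and training year.
df = pd.DataFrame({
    "score": [68, 72, 65, 70, 58, 60, 55, 57, 66, 71, 59, 61],
    "group": ["sim", "sim", "sim", "sim", "ctrl", "ctrl", "ctrl", "ctrl",
              "sim", "sim", "ctrl", "ctrl"],
    "year":  ["CA-1", "CA-2", "CA-3", "CA-1", "CA-1", "CA-2", "CA-3", "CA-1",
              "CA-2", "CA-3", "CA-2", "CA-3"],
})

# 2 x 3 ANOVA: group (simulator vs control) by training year.
model = ols("score ~ C(group) * C(year)", data=df).fit()
print(anova_lm(model, typ=2))

# Tukey's test for post hoc pairwise comparisons across the 6 cells.
df["cell"] = df["group"] + "/" + df["year"]
print(pairwise_tukeyhsd(endog=df["score"], groups=df["cell"], alpha=0.05))

# Kruskal-Wallis test for nonparametric outcomes (e.g., image quality score).
sim_quality = [12, 14, 10, 13]   # hypothetical per-resident totals
ctrl_quality = [6, 7, 5, 8]
print(stats.kruskal(sim_quality, ctrl_quality))
```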
The pre- and posttests were each given to 5 novices and 5 experts (not enrolled in the study) on 2 consecutive days for test validation. Cronbach’s α coefficient was used to determine the internal consistency of the tests. Interrater variability between the 2 independent experts grading the TTE views was assessed using Krippendorff’s α analysis. A post hoc power calculation, using a 2-sided α with P < 0.05, was done for both the first and second training sessions to determine the sample size needed for 80% power.
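For readers who wish to reproduce these psychometric checks on their own data, the sketch below computes Cronbach’s α from its standard formula, estimates Krippendorff’s α with the third-party krippendorff package (assuming its alpha() interface), and performs a two-sample sample-size calculation with statsmodels. All input numbers are hypothetical, and the standardized effect size is an assumed value rather than one reported in the study.

```python
import numpy as np
import krippendorff  # third-party package: pip install krippendorff
from statsmodels.stats.power import TTestIndPower

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical item-level responses (1 = correct, 0 = incorrect) for 6 test takers.
responses = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 0, 1],
])
print("Cronbach's alpha:", round(cronbach_alpha(responses), 2))

# Hypothetical ordinal view scores (0-5) from the 2 graders for 8 images.
ratings = [
    [4, 3, 5, 2, 4, 3, 1, 5],  # grader 1
    [4, 3, 4, 2, 5, 3, 1, 5],  # grader 2
]
print("Krippendorff's alpha:",
      round(krippendorff.alpha(reliability_data=ratings,
                               level_of_measurement="ordinal"), 2))

# Sample size per group for 80% power at a 2-sided alpha of 0.05,
# assuming (for illustration only) a standardized effect size of 0.72.
n_per_group = TTestIndPower().solve_power(effect_size=0.72, alpha=0.05,
                                          power=0.80, alternative="two-sided")
print("Required n per group:", round(n_per_group))
```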
RESULTS
There was no difference in baseline knowledge between the simulator and the control groups as assessed by the written pretest. The mean pretest score for the simulator group was 59.3% ± 11.0% and the mean pretest score for the control group was 56.0% ± 11.9% (P = 0.25). There was no difference as a total group and no difference within each training year (P = 0.28). After the first training session, there was a statistically significant difference between the simulator and control groups as evaluated by the written posttest. The mean posttest score for the simulator group was 68.2% ± 10.1%, and the mean posttest score for the control group was 57.9% ± 8.8% (P < 0.001). There was a statistically significant difference between simulator and control groups for the CA-2 and CA-3 trainees (Fig. 2).
Figure 2: Summary of study subjects’ written test scores. Scores are reported as mean percent (total possible 100) + SD. P values <0.05 are indicated by an asterisk.
Results from the first training session are summarized in Figures 3 and 4. There were 30 trainees in the simulation group (12 CA-1, 8 CA-2, 10 CA-3) and 29 in the control group (11 CA-1, 9 CA-2, 9 CA-3). Residents in the simulation group obtained higher quality images (maximum score of 5 per view, total of 25) (image quality score 12.4 ± 4.2 versus control 6.4 ± 3.5; P = 0.003). The residents were given a list of cardiac anatomic structures to identify (Appendix 3), and residents in the simulation group identified more anatomic structures correctly (25 total: left parasternal long-axis, 7; left parasternal right ventricular inflow, 3; parasternal midpapillary left ventricle short-axis, 6; apical 4-chamber, 6; and apical 2-chamber, 3) (anatomy identification score 17.8 ± 6.6 versus control 8.3 ± 6.6; P = 0.003) (Fig. 3a). Residents in the simulator group correctly obtained 78% ± 21% of the TTE views versus 50% ± 19% in the control group (Fig. 3b). Residents in the simulator group also obtained views more efficiently, as demonstrated by the average time per view (simulator 69.0 ± 25.9 seconds versus control 90.3 ± 17.7 seconds; P < 0.001) (Fig. 3b). When analyzed by individual view, residents in the simulation group obtained significantly higher quality images and higher anatomy identification scores for all views (P = 0.003 for all) (Fig. 4a).
Figure 3: a, First training session results for image quality and anatomy identification during the volunteer subject posttest, showing improved performance by the simulation group. *P = 0.003. b, First training session results for percentage of correct views obtained by the participant and time to obtain each view, during the volunteer subject posttest. Simulation group acquired more correct views in a shorter period of time. *P < 0.001.
Figure 4: a, First training session summary of volunteer subject posttest scores by study group. Image quality score was subjectively assigned by 2 blinded, expert graders, 1 present during the examination and the other performing offline analysis. No significant divergence was present between examiners. Anatomy identification score was assigned by the grader present during the examination and was based objectively upon correct identification of anatomic structures. Data are presented as mean + SD. P values of <0.05 between control and simulator groups are indicated by an asterisk. b, First training session summary of scores, during volunteer subject posttest, by training level and study group. Values are presented as mean + SD. P values of <0.05 between control and simulator groups are indicated by an asterisk.
A breakdown of the first training session by training level showed that residents in the CA-1 and CA-3 simulator groups had statistically significantly better image quality than their colleagues in the control group. CA-2 residents in the simulator group showed improved image quality that did not reach statistical significance. In addition, residents in the CA-1, CA-2, and CA-3 classes had statistically significantly higher anatomy identification scores than their control group counterparts (Fig. 4b).
The second training session results are summarized in Figures 5 and 6. There were 10 trainees in the simulation group (4 CA-1, 2 CA-2, 4 CA-3) and 11 in the control group (1 CA-1, 3 CA-2, 7 CA-3). Although both groups improved their overall image quality and anatomy identification scores, the simulation group performed better than the control group (Fig. 5a). The mean image quality score was 15.6 ± 2.8 in the simulator group versus 9.6 ± 3.3 in the control group (P < 0.002). The mean anatomy identification score was 22.8 ± 2.4 in the simulator group and 17.6 ± 3.8 in the control group (P = 0.003). Residents in the simulator group correctly obtained 96% ± 8% of the TTE views versus 80% ± 16% in the control group. The average time per view was 40.7 ± 16.9 seconds in the simulator group versus 62.4 ± 16.1 seconds in the control group (P = 0.007) (Fig. 5b).
Figure 5: a, Second training session results for image quality and anatomy identification, during volunteer subject posttest, demonstrating continued superior performance by the simulation group. *P = 0.002, #P = 0.003. b, Second training session results for percentage of correct views obtained by the participant and time to obtain each view, during the volunteer subject posttest. Simulation group continued to perform better than the control group. †P = 0.009, #P = 0.007.
Figure 6: Summary of volunteer subject posttest scores after the second training session. Grading was performed in identical fashion to the examination after the first training session. Data are presented as mean + SD. P values of <0.05 between control and simulator groups are indicated by an asterisk.
An analysis of the second training session by view showed improved image quality for the simulation group in the left parasternal right ventricular inflow, parasternal midpapillary left ventricle short-axis, apical 4-chamber, and apical 2-chamber views, all of which were statistically significant. The control group showed improved image quality in the 4-chamber and parasternal long-axis views, likely due to hands-on training. In addition, the simulator group showed statistically significant improvements in anatomy identification for the parasternal midpapillary left ventricle short-axis, left parasternal right ventricular inflow, and apical 2-chamber views (Fig. 6).
The interrater variability analysis showed excellent correlation between the grades for TTE views provided by the 2 independent blinded experts. Krippendorff α analysis of 283 paired data sets between Grader 1 and Grader 2 showed an interrater reliability of 0.87 for the first training session and 0.83 for the second training session. The written pre- and posttests showed a correlation of 1.0 between novice and expert echocardiography test takers. Post hoc sample size calculation for the first training session showed that a sample size of 32 per group would provide 80% power at the usual 2-sided α level of P < 0.05. Even though the number of residents in each group was <32, statistically significant differences were nonetheless found between the control and simulator groups. For the second training session, post hoc sample size calculation determined that a sample size of 8 per group provided 80% power using a 2-sided α = 0.05.
DISCUSSION
This prospective randomized study demonstrated that anesthesia residents trained with TTE simulation acquired better basic cognitive and technical TTE skills. Residents trained with simulation performed better on the written posttest and demonstrated better TTE imaging skills, being more proficient at acquiring TTE images and at correctly identifying cardiac anatomy within each image than residents trained using traditional methods. The educational benefit of simulation persisted even with the introduction of patient-based instruction in both groups. We found that the TTE simulator is an effective tool for providing the basic skills needed for a focused TTE examination.
After 1 session of simulation, residents were able to obtain higher quality images, and correctly identify cardiac anatomical structures in the acquired views. In addition, after simulation training, residents were able to obtain these views more efficiently, getting a higher percentage of correct views in a shorter time. These differences in the posttest scoring may reflect improved integration of knowledge when the residents were taught with simulation. Residents also showed an improvement in their cognitive skills relating to clinical utility of the basic views, likely due to the interactive nature of simulation-based education. Previous reports have similarly suggested that as simulation becomes more realistic, it creates an environment where both knowledge and technical skills may be better integrated and practiced.10,11 After the second simulation training session, residents in the simulation group outperformed the control group again in image acquisition and anatomy identification even though the residents in the control group were given hands-on experience. This suggests that laying a foundation for echocardiography in simulation has a benefit that persists with subsequent training.
After simulation training, residents were able to acquire quality images and identify the correct cardiac anatomical structures >90% of the time. Although this is an initial study evaluating only 1 to 2 TTE training sessions, these results suggest that simulation may play a role in teaching anesthesia residents focused perioperative TTE examinations. Focused TTE examinations are being used increasingly in the intensive care unit and emergency room settings.12–14 Focused TTE examinations have already been used successfully to aid in the diagnosis and therapeutic management of hypotension and hypoxemia, as well as during advanced cardiac life support.14–16 Future anesthesiologists are likely to see an increasing demand for perioperative focused TTE skills in their practice, and it seems prudent to start the education process early in their clinical training.
Considerations in TTE Training
The American Society of Echocardiography recommends that comprehensive TTE training include didactics, observed examinations, and hands-on training.6,17 Focused TTE examinations, as used in the FATE examination and FEEL (Focused Echocardiography Evaluation in Life Support) protocols, are abbreviated examinations used for hemodynamic evaluation.12,18 The quality and number of echocardiographic images needed for a focused examination differ from those needed for a comprehensive examination, and so does the necessary training. Anesthesia residents in this study were taught 5 common TTE views used in a focused TTE examination, chosen for their previously demonstrated clinical utility in hemodynamic assessment.12,13,18 Another cardiac window used in FATE protocols is the subcostal window, although the diagnostic yield from this window is substantially less than that of the apical and parasternal windows.12,13 Previous studies have shown that noncardiologists can be taught focused TTE examinations through traditional teaching methods.14,18,19 However, there is little published literature assessing the benefit of simulation-based training in teaching these focused TTE techniques to the novice echocardiographer.
Simulation is helpful for teaching TTE for 2 reasons: increased availability of hands-on training and knowledge integration. Anesthesia residents spend the majority of their training in the operating room environment and are limited in the amount of time that they can spend observing or performing echocardiography examinations. As resident work hours are limited by Accreditation Council for Graduate Medical Education requirements, the flexibility of simulation allows it to be included in education curricula and provides a standard experience for all trainees.20,21 Recently, Niazi et al. demonstrated that simulation-based training for anesthesia residents improves their success in performing regional anesthesia techniques.22 Furthermore, TTE simulation integrates 3-dimensional anatomic relationships and image acquisition in real time. One side of the screen displays the simulated TTE image, whereas the other side shows a 3-dimensional model of the heart with the path of the transducer displayed (Appendix 2). The ability to correlate 3-dimensional anatomy and echocardiography is an important component of TTE education, as demonstrated by the fact that residents in our simulation group had better image acquisition scores after their first training session than residents in the control group had after their second training session.
Limitations
Residents were posttested on 1 volunteer subject with good transthoracic windows to provide consistency of imaging windows across the groups. This study did not assess the ability of residents to obtain images over a large range of patients of differing body habitus or in clinical scenarios. Although this was not one of the aims of this study, it does serve as a limitation. In addition, the first and second training sessions were spaced 3 weeks apart to minimize confounding from outside echocardiography education. Although the simulation group outperformed the control group in the short term, it is unclear whether this superiority will persist.
At this institution, each anesthesia class consists of 22 residents, for a total of 66 anesthesia residents (CA 1 to 3 years) in the program. Although we were able to analyze data from 59 of the possible 66 residents in the first training session, multicenter studies will be needed to assess the benefit of TTE simulation in the clinical training of anesthesiology residents nationally. Although the financial cost of a TTE simulator is currently significant, teaching anesthesia residents TTE on volunteer subjects is time consuming and logistically difficult because of operating room commitments. Because the second training session involved volunteer subjects, a smaller subset of residents was enrolled. Although this is a limitation of the study and something to address in future work, it does highlight the benefit of simulation over traditional patient examination-based teaching.
CONCLUSIONS
Our study demonstrates that the increased practice and knowledge integration from simulation leads to improved focused TTE skills among anesthesiology residents compared to traditional teaching methods. This may translate into more efficient TTE education and better clinical diagnosis; however, the impact of these short-term educational approaches on longer-term retention and actual clinical application warrants further investigation.
DISCLOSURES
Name: Jacques Neelankavil, MD.
Contribution: This author helped design the study, conduct the study, analyze the data, and prepare the manuscript.
Name: Kimberly Howard-Quijano, MD.
Contribution: This author helped design the study, conduct the study, analyze the data, and prepare the manuscript.
Name: Tyken C. Hsieh, MD.
Contribution: This author helped conduct the study and review the manuscript.
Name: Davinder Ramsingh, MD.
Contribution: This author helped conduct the study and review the manuscript.
Name: Jennifer C. Scovotti, MA.
Contribution: This author helped conduct the study, analyze the data, and prepare the manuscript.
Name: Jason H. Chua, MD.
Contribution: This author helped conduct the study and prepare the manuscript.
Name: Jonathan K. Ho, MD.
Contribution: This author helped conduct the study, design the study, and review the manuscript.
Name: Aman Mahajan, MD, PhD.
Contribution: This author helped design the study, conduct the study, analyze the data, and prepare the manuscript.
This manuscript was handled by: Martin J. London, MD.
ACKNOWLEDGMENTS
We thank Yue Ming Huang, EdD, and Jamie Stiner, BS, for technical assistance with the simulator. We also thank Dr. Jeffrey Gornbein, Department of Statistics and Biomathematics, UCLA, for assisting with the statistical analysis in the study.
REFERENCES
1. Beaulieu Y. Bedside echocardiography in the assessment of the critically ill. Crit Care Med. 2007;35:S235–49
2. Canty DJ, Royse CF. Audit of anaesthetist-performed echocardiography on perioperative management decisions for non-cardiac surgery. Br J Anaesth. 2009;103:352–8
3. Manasia AR, Nagaraj HM, Kodali RB, Croft LB, Oropello JM, Kohli-Seth R, Leibowitz AB, DelGiudice R, Hufanda JF, Benjamin E, Goldman ME. Feasibility and potential clinical utility of goal-directed transthoracic echocardiography performed by noncardiologist intensivists using a small hand-carried device in critically ill patients. J Cardiothorac Vasc Anesth. 2005;19:155–9
4. Breitkreutz R, Price S, Steiger HV, Seeger FH, Ilper H, Ackermann H, Rudolph M, Uddin S, Weigand MA, Müller E, Walcher F. Focused echocardiographic evaluation in life support and peri-resuscitation of emergency patients: a prospective trial. Resuscitation. 2010;81:1527–33
5. Cowie B. Focused cardiovascular ultrasound performed by anesthesiologists in the perioperative period: feasible and alters patient management. J Cardiothorac Vasc Anesth. 2009;23:450–6
6. Cahalan M, Abel M, Goldman M, Pearlman A, Sears-Rogan P, Russell I, Shanewise J, Stewart W, Troianos C. American Society of Echocardiography and Society of Cardiovascular Anesthesiologists Task Force guidelines for training in perioperative echocardiography. Anesth Analg. 2002;94:1384–8
7. Matyal R, Bose R, Warraich H, Shahul S, Ratcliff S, Panzica P, Mahmood F. Transthoracic echocardiographic simulator: normal and the abnormal. J Cardiothorac Vasc Anesth. 2011;25:177–81
8. Sinz E. Simulation based education for cardiac, thoracic, and vascular anesthesiology. Semin Cardiothorac Vasc Anesth. 2005;9:291–307
9. Bose R, Matyal R, Warraich H, Summers J, Subramaniam B, Mitchell J, Panzica PJ, Shahul S, Mahmood F. Utility of a transesophageal echocardiographic simulator as a teaching tool. J Cardiothorac Vasc Anesth. 2011;25:212–5
10. Sturm LP, Windsor JA, Cosman PH, Cregan P, Hewett PJ, Maddern GJ. A systematic review of skills transfer after surgical simulation training. Ann Surg. 2008;248:166–79
11. Good ML. Patient simulation for training basic and advanced clinical skills. Med Educ. 2003;37(Suppl 1):14–21
12. Jensen MB, Sloth E, Larsen KM, Schmidt MB. Transthoracic echocardiography for cardiopulmonary monitoring in intensive care unit. Eur J Anaesthesiol. 2004;21(9):700–7
13. Mark DG, Ku BS, Carr BG, Everett WW, Okusanya O, Horan A, Gracias VH, Dean AJ. Directed bedside transthoracic echocardiography: preferred cardiac window for left ventricular ejection fraction estimation in critically ill patients. Am J Emerg Med. 2007;25:894–900
14. Moore CL, Rose GA, Tayal VS, Sullivan DM, Arrowood JA, Kline JA. Determination of left ventricular function by emergency physician echocardiography of hypotensive patients. Acad Emerg Med. 2002;9(3):186–93
15. Joseph MX, Disney JS, Da Costa R. Transthoracic echocardiography to identify or exclude cardiac cause of shock. Chest. 2004;126:1592–7
16. Cowie B. Three years’ experience of focused cardiovascular ultrasound in the peri-operative period. Anaesthesia. 2011;66:268–73
17. Ehler D, Carney D, Dempsey A, Rigling R, Kraft C, Witt SA, Kimball TR, Sisk EJ, Geiser EA, Gresser CD, Waggoner A. Guidelines for cardiac sonographer education: Recommendations of the American Society of Echocardiography Sonographer Training and Education Committee. J Am Soc Echocardiogr. 2001;14:77–84
18. Breitkreutz R, Uddin S, Steiger H, Ilper H, Steche M, Walcher F, Via G, Price S. Focused echocardiography entry level: new concept of a 1-day training course. Minerva Anestesiol. 2009;75:285–92
19. Jones A, Tayal V, Kline J. Focused training of emergency residents in goal directed echocardiography: a prospective study. Acad Emerg Med. 2003;10:1054–8
20. Issenberg SB, McGaghie WC, Hart IR, Mayer JW, Felner JM, Petrusa ER, Waugh RA, Brown DD, Safford RR, Gessner IH, Gordon DL, Ewy GA. Simulation technology for health care professional skills training and assessment. JAMA. 1999;282:861–6
21. Castanelli DJ. The rise of simulation in technical skills teaching and the implications for training novices in anaesthesia. Anaesth Intensive Care. 2009;37:903–10
22. Niazi AU, Haldipur N, Prasad AG, Chan VW. Ultrasound guided regional anesthesia performance in the early learning period: effect of simulation training. Reg Anesth Pain Med. 2012;37:51–4