iPhones mounted on camera stands served as cameras within the hospital simulation center to facilitate the teledebriefing. Faculty in remote locations of the hospital used iPads to view the simulations live and to provide live debriefing via FaceTime. At the conclusion of each simulation, the simulation technician physically moved the iPhone camera from the simulation center to an adjacent debriefing area, where it was connected, using an audiovisual connection, to a large mobile television or to a computer projecting to an overhead screen to maximize visualization of the teledebriefer, as described by previous scholars (Fig. 1).15
The date, form of debriefing (in person vs teledebriefing), name of the faculty debriefer, and PGY of training were collected from participants with the DASH-SV for each debriefing. The DASH-SV uses an ordinal scale ranging from 1 (extremely ineffective/abysmal) to 7 (extremely effective/outstanding) for all survey measures. This tool has demonstrated preliminary evidence of reliability and validity,24,25 although its full psychometric properties are still under investigation. The residents evaluated the MSF on 23 behaviors with six elements of evaluation (see Appendix, Supplemental Digital Content 1, http://links.lww.com/SIH/A282). The DASH scorers’ names were blinded to the debriefers, and the collection of data and subsequent analysis were performed by other faculty members.
Descriptive statistics were analyzed with SPSS (Version 19; Chicago, Ill). A significance level of P < 0.05 was chosen. A two-tailed t test was used to assess differences between forms of debriefing. The scores from each DASH form were averaged to a single score per evaluation, and the means of those scores were compared. Analysis of variance was used to compare each component of the DASH scale by month. With an α value of 0.05, two-tailed, a power analysis based on the expected means indicated that approximately 31 individuals were needed to achieve a power of 0.80 for the between-groups comparison.
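The analysis described above can be sketched in a few lines of Python. This is a minimal illustration only: the study itself used SPSS (Version 19), and all scores, group sizes, and the effect size below are hypothetical values chosen for the example, not study data.

```python
# Minimal sketch of the analysis pipeline: two-tailed t test between
# debriefing formats, one-way ANOVA across months, and an a priori
# sample-size calculation. All values are hypothetical illustrations.
from scipy import stats

# Hypothetical mean DASH scores per evaluation, one list per condition.
in_person   = [6.8, 6.5, 6.9, 6.4, 6.7, 6.6, 6.3, 6.8]
teledebrief = [6.1, 5.9, 6.3, 6.0, 6.2, 5.8, 6.1, 6.4]

# Two-tailed independent-samples t test between debriefing formats.
t_stat, p_value = stats.ttest_ind(in_person, teledebrief)

# One-way ANOVA across hypothetical monthly groupings of one condition.
aug, sep, oct_ = [6.7, 6.5, 6.8], [6.4, 6.6, 6.3], [6.8, 6.9, 6.6]
f_stat, p_anova = stats.f_oneway(aug, sep, oct_)

# A priori sample size per group via the normal approximation:
#   n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
# where d is Cohen's d (d = 0.73 here is an assumed effect size).
d, alpha, power = 0.73, 0.05, 0.80
z_a = stats.norm.ppf(1 - alpha / 2)
z_b = stats.norm.ppf(power)
n_per_group = 2 * ((z_a + z_b) / d) ** 2

print(round(p_value, 4), round(p_anova, 4), round(n_per_group))
```

With these assumed inputs the sample-size formula lands near 30 per group, in the same range as the study's reported requirement of approximately 31 individuals.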
Thirty EM residents participated in these training sessions.
A total of 44 debriefings (22 teledebriefing and 22 in-person, composed of four groups and 11 scenarios) occurred during the study period, with a total of 246 DASH-SVs completed (132 teledebriefing, 114 in-person). Ten DASH-SVs that did not identify the faculty or the form of debriefing were excluded from the study.
Overall, the data revealed a difference between the effectiveness of traditional in-person debriefing [mean (SD), 6.64 (0.45)] and teledebriefing [6.08 (0.57); P < 0.001]. These differences were also analyzed across PGY, and no effect of PGY was found. Residents regularly evaluated both traditional debriefing and teledebriefing as “consistently effective/very good.”
We further analyzed the data by month to examine any trends over time or with certain scenarios. Interestingly, the first 3 months demonstrated no significant differences between conditions (August: P = 0.21; September: P = 0.34; October: P = 0.10). Starting in January, however, differences between conditions began to emerge. Table 2 displays the values for each component of the DASH scale by group over the course of the study period. The number of completed DASH-SVs for each month and group was as follows: August: 12 in-person, 13 teledebriefing; September: 13 in-person, 14 teledebriefing; October: 10 in-person, 13 teledebriefing; January: 17 in-person, 29 teledebriefing; February: 28 in-person, 26 teledebriefing; March: 26 in-person, 21 teledebriefing; April: 8 in-person, 16 teledebriefing.
Because the demands on simulation centers, faculty, and staff continue to increase, educators must find creative ways to provide learners the simulation-based training opportunities they need and desire. Some institutions have attempted to address the shortage of trained debriefers and content experts by offering faculty development programs for nonsimulation faculty in the form of “train-the-trainer” curricula. The aim of these programs is to give nonsimulation faculty greater independence in the use of their respective simulation centers. Despite these faculty development programs, however, many institutions still rely on their simulation faculty and staff to provide most educational content and debriefing for simulation sessions. This study was undertaken to explore the perceived effectiveness of faculty teledebriefers as a potential solution for programs unable to meet the increased demands of simulation-based medical education.
Our results indicate that learners felt in-person debriefing was more effective than teledebriefing, a finding consistent with studies across other specialties.26,27 We postulate several possible reasons. First, most of the residents had previous experience with the same faculty and simulation center and had received only in-person debriefings before this study. Although we would have expected this difference to appear across all months of data collection, the novelty of teledebriefing in the first months may have accounted for the absence of a difference between the debriefing approaches; in subsequent months, the novelty seems to have worn off, and in-person debriefing was rated higher. Next, although debriefing focuses on the group’s discussion of the events, much can be conveyed by the debriefer through nonverbal cues, and our learners’ perceived effectiveness may be attributed in part to this nonverbal communication. Many of these cues may have been lost during teledebriefing because the debriefer was viewed only from the shoulders up. In addition, the telepresent faculty member does not have the opportunity to demonstrate at the bedside the maneuvers or procedures required during the EM simulations (eg, cricothyrotomy, fasciotomy) that a physically present instructor could. Also, even minute technical issues can affect the residents’ perception of the effectiveness of the debriefing.
Some of these issues include the telepresent faculty member being unable to see or hear subtle forms of communication during the actual simulations (which might have been identified by a physically present faculty member), the rare technology anomaly (eg, volume issues, frozen screen, loss of signal), or, as suggested by some authors, camera placement that fails to capture all the movements and actions of the learners, limiting the ability to fully discuss those actions during the debriefing.26,28
Importantly, however, the higher rating for in-person debriefing, although statistically significant, should be interpreted with caution. The overall rating of teledebriefing was still very high at 6.07 (vs 6.64 for in-person debriefing), representing a score of “consistently effective/very good” on the DASH 1-to-7 rating scale, and this mean was sustained over almost an entire academic year. Despite being rated lower than traditional debriefing, teledebriefing seems to be an effective and practical solution for providing education when other resources or faculty are not physically present. Recent studies have demonstrated effective telepresence training and guidance for EM and surgical procedures.16,18,29,30 Teledebriefing offers a convenient solution for simulation programs struggling to keep up with the growing demands of learners, and for off-site or rural locations, especially when the alternative is providing no education at all. This is especially true given that most of the telecommunication equipment used in this study is widely available and likely already present in most simulation centers.31,32
A score of 4 or higher is considered acceptable for the DASH.23 The debriefing scores given by the residents in both categories were higher than 6.0 for the study period. These atypically high scores may be the result of several factors. Both debriefers were well-respected EM faculty members who had completed fellowships in medical simulation; this may have contributed to a halo effect on the basis of previously positive interactions in the training environment. In addition, no training on use of the DASH tool was provided beyond adequate time to become familiar with the instrument. The developers of the tool, however, report that once raters are “thoroughly familiar with the elements,” they achieve ease of use with reliable and valid scoring without the need for training.23 Another possible reason for the high scores is that the training environment was set up to minimize factors that contribute to lower debriefing scores (abbreviated time to debrief/self-reflect, restricted space in the training environment, and clinical distractions/responsibilities during protected time).23 Assessment of the debriefing sessions by outside faculty members unfamiliar with the debriefers and formally trained in debriefing evaluation might yield lower scores. A further evaluation of teledebriefing could use faculty unknown to the resident evaluators throughout the study period, potentially eliminating the previously mentioned limitation of familiarity with the debriefer.
The simple use of an iPhone, an iPad with FaceTime, and projection equipment provided a viable teleconferencing solution that was both cost-effective and practical. Skype technology has previously been used effectively in similar fashion for telementoring surgeons across a variety of subspecialties.16,33,34 We used our own Wi-Fi hotspot to overcome issues with the hospital’s firewalls and network connectivity; this provided a reliable and inexpensive solution ($80 per month) to connectivity issues identified during initial pilot testing of the program. The equipment required no special expertise and was easy to set up and test. There were instances when a resident unintentionally stood in front of a camera, blocking the faculty’s view of the simulation. This was usually easily addressed by sending a text message to the on-site simulation technician during the simulation, who, as discreetly as possible, reminded the resident of the position of the iPad or iPhone camera. In addition, we had only one camera view of the entire simulated environment (positioned approximately 10 feet back from the foot of the bed and approximately 7 feet high, angled downward), limiting our ability to visualize subtle nonverbal communication and any actions that may have occurred outside of the camera angle.
This study had several limitations. It included EM residents at a single institution, limiting generalizability. In addition, the outcome measures of this study are limited to the T1 level (learners’ perception only) and do not evaluate the effectiveness of the debriefing methods in affecting knowledge and skill. The DASH-SV has not undergone the same psychometric scrutiny as the DASH-Rater Version, and the scores of this study suggest that they may be inflated. We also did not pursue qualitative data from the participants, potentially missing an opportunity to further illuminate perceived barriers or limitations in either form of debriefing. Use of only one system of teleconferencing (equipment, interface) also poses a limitation. An additional limitation was that most residents had extensive experience with the traditional form of in-person debriefing before this study; previous experience with one type of debriefing may have influenced the learners’ perception of a novel form of debriefing. One final limitation was that both expert debriefers in this study were well known to the residents as established members of the EM faculty. However, the faculty in this study were not responsible for supervising or grading the residents.
There are a number of areas in which future research could elucidate the efficacy of the strategies described here. For example, studying the role of learner disposition and experience may be a fruitful endeavor; specifically, examining the effect on learners at various stages of training and on various types of skills (individual skills, communication, faculty’s perceptions of other faculty’s ability to effectively debrief) might be beneficial. In addition, well-accepted learning theories in simulation, such as experiential learning,5 deliberate practice,35 and mastery learning environments,36 should be evaluated in a similar side-by-side comparison. Deliberate practice environments, although frequently discussed, are infrequently available to learners because they are resource (faculty)-intensive and difficult to establish given the increasing demands on faculty. Further exploration should determine whether these environments may be more easily established using these novel technologies.
Teledebriefing was rated lower than on-site debriefing but was still consistently effective. It potentially provides a practical alternative for simulation-based medical education when faculty or staff are unavailable to provide traditional in-person debriefing. Future studies should further explore this application in a variety of settings and with a variety of learning theories.
The authors thank Dr Jenny Rudolph and Dr Robert Simon for their guidance in establishing the methods and refinement of the study design.
1. Levine AI, DeMaria S, Schwartz AD, Sim AJ. The Comprehensive Textbook of Healthcare Simulation. New York: Springer; 2013.
2. Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach
3. Marks M, Sabella MJ, Burke CS, Zaccaro SJ. The impact of cross-training on team effectiveness. J Appl Psychol
4. Fanning RM, Gaba DM. The role of debriefing in simulation-based learning. Simul Healthc
5. Kolb DA. Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice-Hall; 1984.
6. Schon D. The Reflective Practitioner: Toward a New Design for Teaching and Learning in the Professions. San Francisco: Jossey-Bass; 1987.
7. Cannon-Bowers JA, Salas E, Tannenbaum SI, Mathieu JE. Toward theoretically-based principles of training effectiveness: a model and initial empirical investigation. Military Psychol
8. Rudolph JW, Simon R, Dufresne RL, Raemer DB. There’s no such thing as a non-judgmental debriefing: a theory and method for debriefing with good judgment. Simul Healthc
9. Rudolph JW, Simon R, Raemer DB, Eppich WJ. Debriefing as formative assessment: closing the performance gaps in medical education. Acad Emerg Med
15. Ahmed R, King Gardner A, Atkinson SS, Gable B. Teledebriefing: connecting learners to faculty members. Clin Teach
16. Ponsky TA, Bobanga ID, Schwacter M, et al. Transcontinental telementoring with pediatric surgeons: proof of concept and technical considerations. J Laparoendosc Adv Surg Tech A
17. Walker RB, Underwood PK, Bernhagen M, Markin N, Boedeker BH. Virtual intubation training at remote military site. Stud Health Technol Inform
18. Mikrogianakis A, Kam A, Silver S, et al. Telesimulation: an innovative and effective tool for teaching intraosseous insertion techniques in developing countries. Acad Emerg Med
19. Hortos K, Sefcik D, Wilson SG, McDaniel JT, Zemper E. Synchronous videoconferencing: impact on achievement of medical students. Teach Learn Med
20. Markova T, Roth LM, Monsur J. Synchronous distance learning as an effective and feasible method for delivering residency didactics. Fam Med
21. Raemer D, Anderson M, Cheng A, Fanning R, Nadkarni V, Savoldelli G. Research regarding debriefing as part of the learning process. Simul Healthc
24. Brett-Fleegler M, Rudolph J, Eppich W, et al. Debriefing assessment for simulation in healthcare: development and psychometric properties. Simul Healthc
25. Dreifuerst KT. Using debriefing for meaningful learning to foster development of clinical reasoning in simulation. J Nurs Educ
26. Hayden EM, Navedo DD, Gordon JA. Web-conferenced simulation sessions: a satisfaction survey of clinical simulation encounters via remote supervision. Telemed J E Health
27. Kramer NM, Demaerschalk BM. A novel application of teleneurology: robotic telepresence in supervision of neurology trainees. Telemed J E Health
28. Ikeyama T, Shimizu N, Ohta K. Low-cost and ready-to-go remote-facilitated simulation-based learning. Simul Healthc
29. Prescher H, Grover E, Mosier J, et al. Telepresent intubation supervision is as effective as in-person supervision of procedurally naïve operators. Telemed J E Health
30. Ali J, Sorvari A, Camera S, Kinach M, Mohammed S, Pandya A. Telemedicine as a potential medium for teaching the advanced trauma life support (ATLS) course. J Surg Educ
31. Sclafani J, Tirrell TF, Franko OI. Mobile tablet use among academic physicians and trainees. J Med Syst
32. Franko OI, Tirrell TF. Smartphone app use among medical providers in ACGME training programs. J Med Syst
33. Miller JA, Kwon DS, Dkeidek A, et al. Safe introduction of a new surgical technique: remote telementoring for posterior retroperitoneoscopic adrenalectomy. ANZ J Surg
34. Choy I, Fecso A, Kwong J, Jackson T, Okrainec A. Remote evaluation of laparoscopic performance using the global operative assessment of laparoscopic skills. Surg Endosc
35. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med
36. Barsuk JH, McGaghie WC, Cohen ER, O’Leary KJ, Wayne DB. Simulation based mastery learning reduces complications during central venous catheter insertion in a medical intensive care unit. Crit Care Med
Teledebriefing; Debriefing; Telepresence; DASH; Telementoring
Supplemental Digital Content
© 2016 Society for Simulation in Healthcare