Dimensions of Critical Care Nursing: May/June 2014 - Volume 33 - Issue 3
doi: 10.1097/DCC.0000000000000041
Educational DIMENSION

Eye Tracking as a Debriefing Mechanism in the Simulated Setting Improves Patient Safety Practices

Henneman, Elizabeth A. PhD, RN; Cunningham, Helene MS, RN; Fisher, Donald L. PhD; Plotkin, Karen PhD, RN; Nathanson, Brian H. PhD; Roche, Joan P. PhD, RN, GCNS-BC; Marquard, Jenna L. PhD; Reilly, Cheryl A. PhD, RN; Henneman, Philip L. MD

Author Information

Elizabeth A. Henneman, PhD, RN, is an associate professor of nursing at the University of Massachusetts Amherst. Her research focuses on the nurse’s role in the recovery of error and the early recognition of adverse events. She has developed numerous simulation scenarios and evaluation instruments that have been used in both research and educational settings with nursing students, novice nurses, and experienced nurses.

Helene Cunningham, MS, RN, is a clinical assistant professor of nursing at the University of Massachusetts Amherst and serves as the director of the state-of-the-art Nursing Clinical Simulation Laboratory on Amherst’s campus. She is an expert in training nurse educators, staff nurses, and students in using simulation technology.

Donald L. Fisher, PhD, is the department head and professor of mechanical and industrial engineering at the University of Massachusetts Amherst. His research is directed at uncovering why human operators make errors in high-stress environments and evaluating existing interfaces designed both to reduce errors and to improve the performance of human operators.

Karen Plotkin, PhD, RN, is a clinical assistant professor of nursing at the University of Massachusetts. She has expertise in the development and evaluation of simulation scenarios for undergraduate nursing students.

Brian H. Nathanson, PhD, is the cofounder and chief executive officer of OptiStatim, LLC. His research focuses on critical care medicine, sepsis, benchmarking, and patient safety.

Joan P. Roche, PhD, RN, GCNS-BC, is an associate clinical professor of nursing at the University of Massachusetts Amherst. Her program of research is focused on the relationship between the healthcare system and patient outcomes. She is also involved in studies examining the use of human patient simulation in nursing education.

Jenna L. Marquard, PhD, is an associate professor of industrial engineering at the University of Massachusetts Amherst. Her research interests include developing behavioral and cognitive models of physicians’, nurses’, and consumers’ interactions with health information technology.

Cheryl A. Reilly, PhD, RN, is a faculty member in the Nursing Informatics program in the School of Nursing at Walden University. Her research focuses on understanding and improving patient-centered care, healthcare quality, and patient safety by developing and applying health information technologies.

Philip L. Henneman, MD, is a professor in the Department of Emergency Medicine for Tufts University School of Medicine and Baystate Medical Center, Springfield, Massachusetts. His research interests include computer modeling of emergency department and hospital processes as well as patient safety.

This study was supported by NSF Award CCF-0820198 and the University of Massachusetts Academic Technology Grant.

The authors have disclosed that they have no significant relationship with, or financial interest in, any commercial companies pertaining to this article.

Address correspondence and reprint requests to: Elizabeth A. Henneman, PhD, RN, 226 Skinner Hall, 651 North Pleasant Street, Amherst, MA 01003-9299 (Elizabeth.henneman@gmail.com).

Abstract

Introduction:

Human patient simulation has been widely adopted in healthcare education despite little research supporting its efficacy. The debriefing process is central to simulation education, yet alternative evaluation methods for providing optimal feedback to students have not been well explored. Eye tracking technology is an innovative method for providing objective evaluative feedback to students after a simulation experience. The purpose of this study was to compare 3 forms of simulation-based student feedback (verbal debrief only, eye tracking only, and combined verbal debrief and eye tracking) to determine the most effective method for improving student knowledge and performance.

Methods:

An experimental pretest-posttest design was used to compare the effectiveness of 3 types of feedback. The subjects were senior baccalaureate nursing students in their final semester at a large university in the northeast United States. Students were randomly assigned to 1 of the 3 intervention groups.

Results:

All groups performed better in the posttest evaluation than in the pretest. Certain safety practices improved significantly in the eye tracking–only group. These criteria were those that required an auditory and visual comparison of 2 artifacts, such as “Compares patient stated name with name on ID band.”

Conclusions:

Eye tracking offers a unique opportunity to provide students with objective data about their behaviors during simulation experiences, particularly related to safety practices that involve the comparison of patient-stated data to an artifact such as an ID band. Despite the limitations of current eye tracking technology, there is significant potential for the use of this technology as a method for the study and evaluation of patient safety practices.

The use of human patient simulation (HPS) as an educational and evaluation tool has gained unprecedented acceptance in the healthcare arena over the past decade.1-10 Despite the widespread use of HPS, significant challenges remain in teaching and evaluating students in the simulated environment.

The effective use of simulation as an educational technique requires that participants receive timely and appropriate feedback about their performance. In fact, students may be negatively affected by simulation exercises if they do not receive feedback about substandard or error-prone behaviors. Verbal debriefing is a common method of providing feedback to students during or after the simulation experience.11 This debriefing method requires that instructors observe the student closely enough to have sufficient information about his/her performance to give specific feedback.

Current evaluation methods used in simulation have significant limitations. For example, a simulation scenario may require that an instructor observe the student directly, making it a resource-intensive endeavor. Another approach is to videotape the student and then have the instructor and/or student watch the videotape together to provide feedback about the student’s performance. This may take less instructor time, but this method has limitations as well. The videotaped material is restricted to the camera’s field of view, which may prevent a complete picture of the student’s performance. In addition, fine details of the student’s actions may be unobservable, preventing the instructor and student from fully appraising those actions. These drawbacks limit the usefulness of simulation as a teaching and evaluation strategy. New methods are needed to improve simulation-based student evaluation if simulation is to be used to its maximum potential.

Eye tracking technology has primarily been used outside healthcare in areas such as aviation, education, and marketing and has been shown to provide useful safety-related feedback for automobile drivers.12,13 In the healthcare field, eye tracking has been used to teach and evaluate surgeons as they perform procedures such as laparoscopic surgery and has been shown to be useful in measuring the surgeon’s vigilance during an operation.14 Eye tracking has also been used in the emergency department setting to evaluate whether providers verify patient identification during routine procedures.6,15

Eye tracking data allow the educator/evaluator to determine a subject’s field of gaze as he/she performs a simulated procedure. Eye tracking hence offers a novel approach for improving the objectivity of the evaluation process and provides the instructor and student with additional data about the student’s performance.

The eye tracker device consists of goggles that are connected to a video recorder that records the subject’s eye movements. A software program then analyzes data from the computer video. When the steps of interest are not all observable by traditional methods, eye movements can be used to evaluate the scanning behavior of the operator to better infer the likelihood that the operator is in a particular state at a particular point in time.12 The software program used with the eye tracking device places crosshairs over the exact location where the student is looking (Figure). The video recording of the subject’s field of gaze throughout the simulation can be integrated into the evaluation process as a form of feedback on the subject’s performance.
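
The eye tracking software used in the study is not described in implementation detail, but the post-processing step can be illustrated. The following minimal sketch uses the open-source OpenCV library to draw crosshairs onto recorded frames from a stream of gaze coordinates; the function name, file names, and gaze-sample format are hypothetical, not the study’s actual software.

# A minimal sketch, assuming gaze samples arrive as a mapping from frame
# index to (x, y) pixel coordinates; this illustrates the crosshair overlay
# with OpenCV and is not the vendor software used in the study.
import cv2

def overlay_gaze(video_in: str, video_out: str, gaze: dict) -> None:
    """Draw a crosshair at the recorded gaze point on each video frame."""
    cap = cv2.VideoCapture(video_in)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter(video_out, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx in gaze:  # crosshairs mark where the student was looking
            x, y = gaze[frame_idx]
            cv2.drawMarker(frame, (x, y), color=(0, 0, 255),
                           markerType=cv2.MARKER_CROSS, markerSize=30, thickness=2)
        out.write(frame)
        frame_idx += 1

    cap.release()
    out.release()

# Example (hypothetical files): overlay_gaze("sim.mp4", "sim_gaze.mp4", {0: (320, 240)})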

The purpose of this study was to compare 3 forms of simulation-based student feedback (verbal debriefing only, eye tracking only, and a combination of verbal debriefing and eye tracking) to determine the most effective method for improving student knowledge and performance related to patient safety practices.

THEORETICAL FRAMEWORK

A modified near-miss model served as the theoretical framework for the study.16,17 This model describes how organizational, technical, and human failures, alone or in combination, can lead to dangerous situations that ultimately may negatively affect the patient. Human failures may be avoided by addressing skill-, knowledge-, and rule-based errors.18 Structures and processes such as technology and educational methods have the potential to promote safe practice and decrease adverse patient outcomes.

METHODS

Design

A pretest-posttest experimental design was used to compare the effectiveness of 3 types of feedback. The 3 types of feedback were (1) verbal debrief only, (2) eye tracker only, and (3) eye tracker plus verbal debrief (Table 1).

  •  Verbal debrief only: Students assigned to the verbal debrief–only group received a verbal debriefing session immediately after the pretest simulation. Trained research assistants conducted this debriefing. The debriefing process consisted of a one-on-one review with the student of his/her observed actions during the simulation, identifying whether the student successfully performed the set of predetermined safety-related actions. There was no discussion other than whether the student performed the desired action (ie, if the student made an incorrect decision, the rationale for that decision was not investigated or discussed). The verbal debriefing, although possibly less comprehensive than a traditional debrief, was purposely scripted to allow for reproducibility, decrease variation between evaluators, and ultimately improve the rigor of the study.
  •  Eye tracking only: Students assigned to the eye tracking–only group received a DVD of their eye tracker results. The eye tracker DVD was given to the students 4 days after the pretest period, which was the length of time required for the research assistant to process the eye tracking videos and record them to DVDs. Students receiving the eye tracking videos were instructed to watch the video and pay attention to the crosshairs on the screen, which indicated where they had been looking during the simulation.
  •  Eye tracking plus verbal debrief: Students assigned to the eye tracking plus verbal debrief group received the immediate postsimulation verbal debrief and their eye tracking DVD.

Institutional review board approval was obtained before the conduct of this study. All students signed an informed consent before participating and were compensated for their participation.

Subjects/Setting

The subjects were senior baccalaureate nursing students in their final semester enrolled at a large university in the northeast United States. All students had previous experience with HPS and were familiar with the safety practices being used in the study. The study setting was the nursing simulation laboratory at the university.

Patient/Environment/Roles

The human patient simulator, SimMan, was lying on a hospital bed with various equipment and monitoring devices, depending on the scenario. The environment was set up to resemble an emergency department and included the patient, medication cart, oxygen, monitors, and routine and emergency supplies. All SimMan patients had patient identification and allergy bands (as appropriate) on their wrists. A researcher who sat behind a 1-way mirror provided the voice of the patient, physician, and secretary.

Scenarios

Four scenarios were developed specifically for the study by the research team. The scenarios were based on typical emergency department cases. Every scenario included at least 1 embedded actual error related to patient identification and 1 potential error related to a treatment (eg, medication administration) (Table 2).

Procedure

Students were randomly assigned to 1 of the 3 groups. Each student participated in a pretest, intervention, and posttest experience. Simulations were held over a period of 6 days (3 days pretest, 3 days posttest). Each student participated on 2 days for approximately 1 hour per day. Students were assigned 2 different, yet similar, patient scenarios for their pretest and posttest simulations.
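
As an illustration of the randomization step, the sketch below shuffles students into the 3 feedback groups. The article does not describe the allocation scheme used, so this roughly balanced round-robin assignment is an assumption.

# Illustration (Python) of randomly assigning students to the 3 feedback
# groups; the actual allocation scheme is not described in the article, so
# this shuffled round-robin assignment is an assumption.
import random

GROUPS = ["verbal debrief only", "eye tracker only", "eye tracker + verbal debrief"]

def assign_groups(students: list[str], seed: int = 42) -> dict[str, str]:
    """Shuffle the roster, then deal students into the 3 groups in turn."""
    shuffled = students[:]
    random.Random(seed).shuffle(shuffled)
    return {student: GROUPS[i % 3] for i, student in enumerate(shuffled)}

# Example: assign_groups(["s01", "s02", "s03", "s04", "s05", "s06"])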

An investigator-developed tool was used to evaluate student performance. The tool was developed by content experts and determined to have an interrater reliability of 95%. Only evaluators trained in the use of the tool participated in the evaluation process. The tool captured 4 major “safety practices”: student introduction, verifying patient identification, allergy assessment, and verifying the appropriateness of ordered treatments (eg, noting any contraindications). These safety practices translated into a total of 13 criteria, which were measured with the evaluation tool (Table 3).
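
Table 3 (not reproduced here) lists the full 13 criteria. As a hedged sketch of how such a checklist could be represented for scoring, the code below encodes a subset of the criteria quoted in this article; the grouping, the wording of unquoted items, and the scoring helper are our assumptions rather than the study’s actual instrument.

# A minimal sketch (Python) of the evaluation tool's structure; only criteria
# quoted in the article are listed verbatim, and the grouping, the wording of
# the other items, and the scoring helper are assumptions.
from enum import Enum

class Result(Enum):
    PERFORMED = "does perform the task"
    NOT_PERFORMED = "does not perform the task"
    NOT_APPLICABLE = "not applicable"
    UNOBSERVABLE = "unobservable"

SAFETY_PRACTICES = {
    "Student introduction": ["Introduces self to patient"],  # assumed wording
    "Patient identification": [
        "Compares patient stated name with name on ID band",
        "Compares patient date of birth with date on ID band",
    ],
    "Allergy assessment": ["Compares stated allergies to allergy bracelet"],
    "Treatment verification": ["Stops or questions the treatment"],  # assumed wording
}

def correct_actions(results: dict[str, Result]) -> int:
    """Count correct actions: as in the study's analysis, PERFORMED scores 1
    and the other 3 outcomes are collapsed to 0."""
    return sum(r is Result.PERFORMED for r in results.values())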

Before starting the pretest simulation, each student viewed a short video explaining the simulation process and what he/she would find in the patient’s room. After viewing the video, the student received a short written “report” on his/her patient, which was a paragraph describing the patient’s medical history, chief complaint, vital signs, and current treatment plan. After viewing the video and reading the patient report, a research assistant calibrated the eye tracking device to the student’s eyes.

Statistical Analysis

A power analysis with an α of .05, a power (1 − β) of .80, and a moderate effect size suggested that a sample of 8 students per group would be sufficient to detect significant differences between groups. We recruited a greater number of students, based on our previous experience that as many as 27% of student eye tracking data would not be usable.15
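
The recruitment inflation implied by this expected data loss can be worked out directly; the short calculation below is our illustration of the logic, not a formula reported in the study.

# Illustration (Python) of inflating the recruitment target for expected
# unusable eye tracking data; the exact adjustment used is an assumption.
import math

n_required_per_group = 8   # from the power analysis
expected_loss = 0.27       # unusable-data rate observed in prior work [15]

n_recruit_per_group = math.ceil(n_required_per_group / (1 - expected_loss))
print(n_recruit_per_group)  # 11 per group, ie, at least 33 students in total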

Student performance was scored through direct observation by trained observers and was based on the number of correct actions per the evaluation criteria. The number of correct actions made by the students in the posttest simulation was compared with the number made in the pretest simulation to determine the effectiveness of each intervention. Because we used binary data that were paired (pretest vs posttest), we used the McNemar test to calculate exact P values within each group and overall. Analyses comparing the 3 groups’ posttest accuracy were done via the Fisher exact test (an exact alternative to the χ2 test, appropriate for small samples) because of the sample size. Results for each participant on each task were classified as 1 of 4 outcomes: does perform the task, does not perform the task, not applicable, or unobservable. For analysis, these were collapsed into 2 outcomes: does perform the task versus any of the other 3 categories. A P value of <.05 was considered statistically significant.
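
As a hedged sketch of these tests, the code below runs an exact McNemar test on hypothetical paired pre/post results and a Fisher exact test on a 2 × 2 posttest comparison between 2 of the groups. SciPy’s fisher_exact is designed for 2 × 2 tables, so the full 3-group comparison reported in the Results (P = .34) would require an R × C exact test or pairwise comparisons.

# A minimal sketch (Python) of the tests described above; the pre/post
# vectors are hypothetical data, not study results.
from scipy.stats import fisher_exact
from statsmodels.stats.contingency_tables import mcnemar

pre  = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0]  # 1 = performed the task (pretest)
post = [1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1]  # same students, posttest

# The McNemar test works on the 2x2 table of paired pre/post outcomes.
table = [[0, 0], [0, 0]]
for a, b in zip(pre, post):
    table[a][b] += 1

print(f"McNemar exact P = {mcnemar(table, exact=True).pvalue:.4f}")

# Pairwise posttest comparison, eg, verbal debrief only (7/8 performed the
# task) vs eye tracker only (12/12), taken from the Results section.
# Rows = group; columns = (performed, did not perform).
odds_ratio, p = fisher_exact([[7, 1], [12, 0]])
print(f"Fisher exact P = {p:.2f}")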

RESULTS

A total of 42 students participated in the study (Table 4). Data from 11 students (26%) could not be used because of technical difficulties (eg, problems calibrating the eye tracker because of light iris color, presence of eyeglasses, or too much movement during the simulation). Of the 31 remaining students, 8 were enrolled in the verbal debrief–only group (group 1), 12 in the eye tracker–only group (group 2), and 11 in the combined eye tracker/verbal debrief group (group 3).

Several trends were evident in our data, most notably that the postintervention percentage was higher (ie, these nurses were more likely to perform the required task) for every behavior. This is particularly evident when examining whether the nurse stopped or questioned the treatment. Here, the overall postintervention percentage was 90% compared with the preintervention percentage of 23% (P < .001). The 1 question where the overall improvement was not statistically significant in the posttest was “Compares patient date of birth with date on ID band,” but this was also the question with the highest (83%) pretest accuracy, making it mathematically more difficult to attain a statistically significant improvement. The simulation experience was generally quite successful regardless of the type of debriefing. The treatment was stopped or questioned during the postintervention in 88% (7/8) of the verbal debrief–only students, in 100% (12/12) of the eye tracker–only students, and in 82% (9/11) of the eye tracker plus verbal debrief students. There was no significant difference in these outcomes (P = .34). Certain safety practices improved significantly in only the eye tracker–only or eye tracker plus verbal debrief groups, although these 2 groups had slightly larger sample sizes. These criteria were those that required an auditory and visual comparison of 2 artifacts, such as “Compares patient stated name with name on ID band” and “Compares stated allergies to allergy bracelet.”

We also noted that the researchers who were evaluating the students classified some of the patient safety criteria on the evaluation tool as “unobservable,” meaning that they were unable to ascertain the student’s action using direct observation.

DISCUSSION

The results of this study suggest that HPS produces significant improvement in patient safety practices by nursing students regardless of the debriefing mechanism used. We also observed that pretest scores were quite low on these routine, basic safety processes, which highlights the need for more safety-focused educational experiences. Our results are similar to those demonstrated in a pilot study of nursing students that showed improvement in patient identification practices after a simulation experience with a focused debrief.19

Eye tracking technology appears to be useful for improving certain patient safety practices such as those relating to patient identification but does not appear to influence other behaviors such as those related to knowledge of medications or other treatments. This finding is logical in that eye tracking technology allows the student to visualize artifacts but would not necessarily impact nursing knowledge or the decision-making process.

These findings have important implications for educators who use simulation as a means of teaching and evaluating students. Little is known about the most effective means of providing feedback after a simulation experience. The fact that our trained investigators were unable to directly observe behaviors in some instances lends further support to the need for other methods of objective evaluation.

There are many opportunities for educators and researchers to use eye tracking in the simulated setting. In addition to functioning as a teaching and evaluation tool, eye tracking offers the opportunity to study aspects of healthcare practice that to date have not been elucidated. For example, research using eye tracking technology has demonstrated differences in the visual scanning patterns of subjects who do and do not identify errors related to patient identification.20 Future studies in this area are needed to provide insight into optimal scanning patterns that reduce error and adverse events.

LIMITATIONS

The primary limitation of the study is the use of a single site. How student nurses would perform at another institution is not known. A larger, more diverse multisite study would allow for findings that could be generalized to other settings. In addition, we were unable to use data from 11 subjects, although this rate of “unusable” data is typical with the eye tracking device. In the future, an eye tracker that can better handle subjects with lighter irises or frequent quick head movements would make data collection easier. However, we do not believe the subjects excluded from the study biased our results in any way, as there is no plausible correlation between eye tracker failure and performing these safety tasks/behaviors. Finally, in some instances, a student’s actions were unobservable in the pretest. We assumed that the student did not perform the task(s), because most students did not during this stage. In the postintervention stage, we had no unobservable data. Thus, our results are slightly optimistic. However, the inferences drawn from the overall results for each behavior would have been the same had we removed these subjects instead.

CONCLUSION

Simulation-based education using verbal debriefing, eye tracking, or both resulted in an overall improvement in the performance of subjects in our study. Eye tracking as a singular intervention significantly improved behaviors that required comparing information from 2 artifacts (eg, patient-stated data to wristband data). Eye tracking, although more technologically complex than standard methods such as videotaping or verbal debriefing, provides detailed and objective data to the student and does not require an evaluator to monitor the student in real time. The findings of this study suggest that eye tracking technology offers intriguing possibilities for future patient safety, simulation-based studies.

References

1. Gaba DM. The future vision of simulation in healthcare. Qual Saf Health Care. 2004; 13: 2–10.

2. Rosen MA, Salas E, Wilson KA, et al. Measuring team performance in simulation-based training: adopting best practices for healthcare. Simul Healthc. 2008; 3: 33–41.

3. Robertson B, Schumacher L, Gosman G, et al. Simulation-based training for multidisciplinary obstetric providers. Simul Healthc. 2009; 4(2): 203–208.

4. Issenberg SB. The scope of simulation-based healthcare education. Simul Healthc. 2006; 1(4): 203–208.

5. Henneman EA, Roche JP, Fisher DL, et al. Error identification and recovery by student nurses using human patient simulation: opportunity to improve patient safety. Appl Nurs Res. 2010; 23: 11–21.

6. Henneman PL, Fisher DL, Henneman EA, et al. Providers do not verify patient identity during computer order entry. Acad Emerg Med. 2008; 15: 641–648.

7. Henneman EA, Cunningham H. Using clinical simulation to teach patient safety in an acute/critical care nursing course. Nurse Educ. 2005; 30: 172–177.

8. Henneman EA, Cunningham H, Roche JP, et al. Human patient simulation: teaching students to provide safe care. Nurse Educ. 2007; 32: 212–217.

9. Katz GB, Peifer KL, Armstrong G. Assessment of patient simulation use in selected baccalaureate nursing programs in the United States. Simul Healthc. 2010; 5: 46–51.

10. Levett-Jones T, McCoy M, Lapkin S, et al. The development and psychometric testing of the Satisfaction with Simulation Experience Scale [published online ahead of print January 31, 2011]. Nurse Educ Today.

11. Fanning RM, Gaba DM. The role of debriefing in simulation-based learning. Simul Healthc. 2007; 2: 115–125.

12. Fisher DL, Wisher RA, Ranney T. Static and dynamic training strategies: a framework for optimizing training strategies. J Math Psychol. 1996; 40: 30–47.

13. Pradhan AK, Pollatsek A, Knodler M, Fisher DL. Can younger drivers be trained to scan for information that will reduce their risk in roadway traffic scenarios that are hard to identify as hazardous? Ergonomics. 2009; 52: 657–673.

14. Zheng B, Tien G, Atkins SM, et al. Surgeon’s vigilance in the operating room. Am J Surg. 2011; 201: 667–671.

15. Henneman PL, Fisher DL, Henneman EA, et al. Patient identification errors are common in a simulated setting. Ann Emerg Med. 2010; 55: 503–509.

16. Van der Schaaf TW. Near Miss Reporting in the Chemical Process Industry [master’s thesis]. Eindhoven, the Netherlands: Eindhoven University of Technology; 1992.

17. Henneman EA, Gawlinski A. A near-miss model for describing the nurse’s role in the recovery of medical errors. J Prof Nurs. 2004; 20: 196–201.

18. Rasmussen J. The definition of human error and a taxonomy for technical systems design. In: Rasmussen J, Duncan K, Leplat J, eds. New Technology and Human Error. London, England: John Wiley & Sons Ltd; 1987: 23–30.

19. Radhakrishnan K, Roche J, Cunningham H. Measuring clinical practice parameters with human patient simulation: a pilot study. Int J Nurs Educ Scholarsh. 2007; 4(1): 1–11.

20. Marquard JL, Henneman PL, He Z, Jo J, Fisher D, Henneman E. Nurses’ behaviors and visual scanning patterns may reduce patient identification errors. J Exp Psychol Appl. 2011; 17: 247–256.

Keywords:

Debriefing; Eye tracking; Healthcare education; Nursing education; Patient safety; Simulation

Copyright © 2014 Wolters Kluwer Health | Lippincott Williams & Wilkins
