Surgical Perspectives

How Wearable Technology Can Facilitate AI Analysis of Surgical Videos

Pugh, Carla M. MD, PhD*; Ghazi, Ahmed MD, FEBU, MSc; Stefanidis, Dimitrios MD, PhD; Schwaitzberg, Steven D. MD§; Martino, Martin A. MD; Levy, Jeffrey S. MD

doi: 10.1097/AS9.0000000000000011

INTRODUCTION

Instant replays and video review have a long history as part of the training process for professional athletes. However, legal discoverability is a major barrier to the adoption of video-based training and assessment in the surgical profession. Seamless video capture and editing have also been a major barrier. In the early 2000s, in-light cameras were installed in most operating rooms; however, it quickly became apparent that videos captured with this technology were frequently obstructed by the surgeon’s head. Hence, it was not uncommon to miss the most important aspects of the operation when using in-light cameras in the operating room. The broad adoption of minimally invasive surgery has significantly improved the availability of high-quality, unobstructed views of the surgical workflow during an operation. As such, there has been a significant increase in surgical video capture and editing for operative procedures performed using minimally invasive techniques.

The increase in availability of operative video has sparked interest in the use of artificial intelligence (AI) to analyze surgical video.1 In 2017, computer vision engineers at Johns Hopkins released the first public dataset intended to help advance the use of AI for automated task recognition. The dataset contains synchronized video and motion data for three tasks performed on the da Vinci robot: suturing, needle passing, and knot tying.2 Despite the increased interest in using AI to analyze surgical video, accurate and meaningful video segmentation remains a significant hurdle. Early research using wearable technology for surgeons has shown promising results in the ability to tag critical decision points in an operation. When wearable technologies are synchronized with surgical video, the combination offers a potential solution to the vexing problem of video segmentation when using AI. Four examples of wearable technology approaches in surgery are presented below.
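
As a concrete illustration of the synchronization idea, once a wearable sensor stream and the video share a common clock, any sensor-detected event maps directly to a video frame index. The following minimal Python sketch shows this mapping; the function name, frame rate, and event times are illustrative assumptions, not part of the Johns Hopkins dataset’s actual schema.

import numpy as np

def events_to_frames(event_times_s: np.ndarray, fps: float) -> np.ndarray:
    """Convert event timestamps (seconds from recording start) to frame indices."""
    return np.round(event_times_s * fps).astype(int)

# Example: wearable-tagged events at 12.4 s and 87.9 s in a 30 fps recording
print(events_to_frames(np.array([12.4, 87.9]), fps=30.0))  # -> [372 2637]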

AUDIO, EEG, AND MOTION TECHNOLOGY

Figure 1 shows a surgeon performing a simulated bowel repair using porcine intestines. During the repair, the surgeon is wearing an electroencephalogram (EEG) sensor, motion tracking sensors, and audio-recording equipment. Data streams from each of these technologies are synchronized with a video recording of the procedure. Preliminary results show that each data stream provides contextual information about the surgical workflow that could not be gleaned from human observation or AI analysis of surgical video alone. The audio data captures team communication, workflow announcements (procedure steps, anatomy, etc), and instrument/equipment requests. Prior work analyzing audio data has shown that how a surgeon talks during an operation, and what is said, correlates with procedure outcomes.3,4 The EEG sensor is being tested for the first time in surgeons. The company that produces the EEG sensors originally designed the sensor to evaluate cognitive impairment in elderly individuals. Early results from the EEG sensor during a surgical pilot study show clear signal delineations when the data are presented as a spectrogram. The blue areas represent low cognitive processing, whereas orange to red areas represent higher cognitive processing, including memory and executive function. These results are still being evaluated for evidence of validity.
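
As a rough illustration of the spectrogram view described above, the following Python sketch computes a spectrogram from a synthetic single-channel EEG trace. The sampling rate, window length, and the synthetic alpha/beta content are illustrative assumptions, not parameters of the commercial sensor.

import numpy as np
from scipy.signal import spectrogram

fs = 256  # Hz, assumed EEG sampling rate
t = np.arange(0, 60, 1 / fs)
# Synthetic trace: 10 Hz baseline plus a 20 Hz burst between 20 s and 40 s,
# standing in for a period of higher cognitive processing
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
burst = (t > 20) & (t < 40)
eeg[burst] += 1.5 * np.sin(2 * np.pi * 20 * t[burst])

f, seg_t, Sxx = spectrogram(eeg, fs=fs, nperseg=512)  # power per (frequency, time) bin
# Rendering Sxx in dB over seg_t and f yields the blue-to-red heatmap described above.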

FIGURE 1: Images and data from the American College of Surgeons Surgical Metrics Project. A, Surgeon instrumented with multiple sensors (forehead EEG, lapel audio, hand motion) during simulated bowel repair. B, EEG spectrograms for 2 different surgeons; one surgeon was visually and verbally engaged with their operative assistant, and the other was not, hence the mostly blue signal. C, Close-up of motion-tracking wires beneath surgical gloves and motion diagram showing hand positions near the bowel and tool tray, as well as distance away from the bowel when pulling suture through. D, Motion data from the right and left hands synchronized with video.

The motion data capture both movement and lack of movement, as well as the location and velocity of movement. Prior work using motion technology has largely focused on movement data; however, recent work has shown that lack of movement or lower velocity may represent those “slowing down” moments in an operation that tend to be associated with intraoperative decision-making.5,6 Analysis of high-velocity motion outputs versus low- or no-velocity outputs allows for efficient identification of different phases of an operation, thus facilitating video segmentation and enabling instant replays of short video clips of critical intraoperative decisions.
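
A minimal sketch of this velocity-based segmentation follows: hand positions are differentiated to obtain velocity, and contiguous low-velocity spans are flagged as candidate “slowing down” moments. The sampling rate, units, and threshold are illustrative assumptions, not values from the studies cited above.

import numpy as np

def low_velocity_spans(pos_xyz: np.ndarray, fs: float, thresh_mm_s: float = 20.0):
    """pos_xyz: (n_samples, 3) hand positions in mm; returns (start_s, end_s) spans."""
    vel = np.linalg.norm(np.diff(pos_xyz, axis=0), axis=1) * fs  # speed in mm/s
    slow = vel < thresh_mm_s
    edges = np.flatnonzero(np.diff(slow.astype(int)))  # indices where slow/fast flips
    bounds = np.concatenate(([0], edges + 1, [slow.size]))
    # Keep only the runs that are slow; convert sample indices to seconds
    return [(b / fs, e / fs) for b, e in zip(bounds[:-1], bounds[1:]) if slow[b]]

Each returned span can then be cut from the synchronized video as a short replay clip.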

Eye Tracking

Eye-tracking technology allows for the measurement of an observer’s point of gaze based on where their pupil is focused. The use of camera technology to analyze eye motion is a well-established concept dating back to 1950, when the use of picture cameras to study the gaze behavior of pilots was first described.7 In aviation, differences in the gaze behavior of experienced and novice pilots have been clearly described and extensively studied.8 These include differences across a variety of gaze-based metrics, including the number of times pilots look at key areas (fixation count) and the duration of time spent looking at each of these areas (dwell time).
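
Both metrics are straightforward to compute from fixation data. The Python sketch below shows one way, assuming a simple list of (x, y, duration) fixations and a rectangular area of interest; the data format and AOI definition are illustrative assumptions rather than any particular vendor’s output.

from typing import List, Tuple

Fixation = Tuple[float, float, float]  # (x, y, duration in seconds)

def aoi_metrics(fixations: List[Fixation], aoi: Tuple[float, float, float, float]):
    """aoi = (x_min, y_min, x_max, y_max); returns (fixation count, dwell time in s)."""
    x0, y0, x1, y1 = aoi
    inside = [d for x, y, d in fixations if x0 <= x <= x1 and y0 <= y <= y1]
    return len(inside), sum(inside)

# Example: three fixations in normalized coordinates, two falling inside the AOI
print(aoi_metrics([(0.2, 0.3, 0.5), (0.25, 0.35, 1.2), (0.8, 0.9, 0.4)],
                  aoi=(0.0, 0.0, 0.5, 0.5)))  # -> (2, 1.7)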

In recent years, considerable progress in eye-tracking technologies has made eye trackers less intrusive and has increased their usability in the surgical field.9 Studies using wearable eye-tracking systems (eg, mounted onto lightweight eyeglass frames) have been able to quantitatively observe the movements of an individual’s eyes to identify their attentional focus during a task.9 Most commercial eye trackers rely on locating the reflection of an infrared beam of light from the subject’s eyes with an infrared-sensitive camera. These record the corneal reflection of infrared lighting to track pupil position, mapping the subject’s focus of attention onto video recordings of the subject’s field of view (gaze).10 In addition to tracking gaze, software enables the measurement of various eye metrics, including fixation frequency, dwell time, and pupil diameter (a marker of subject effort and concentration).11 The eye tracker used at the Simulation Innovation Laboratory is a computer-attached eye tracker (Pupil Labs glasses; Pupil Labs, Berlin, Germany). The Pupil Labs glasses have 3 cameras: one world camera to record the participant’s view and one eye camera for each eye.

Previous research has suggested that experts, compared with nonexperts, have more focused attention and more elaborate visual representations during performance of a task.12,13 In eye tracking, this is reflected in increased fixation rates and a higher proportion of fixations within an area of interest.12 A higher proportion of fixations in a given area of interest suggests greater focused attention to that particular area. Dwell time is the duration of stay in an area, and more important areas have been found to produce longer dwell times.14 Saccade patterns have been demonstrated to reveal underlying cognitive mechanisms that guide decisions for executing tasks.15 Visual processing incorporates both visually salient points (areas that stand out regardless of importance) and cognitively salient points (areas that are relevant to performing a task or identifying a structure), and a performer’s ratio of attention to these two categories of points can be used as a marker of expertise. Novices tend to spend a greater proportion of time looking at visually salient points, whereas experts should focus on cognitively salient points. Experts also tend to fixate on fewer locations, in contrast to novices, who shift gaze more often and to more locations.16,17 When we compared the gaze behaviors of 13 experts (>500 caseload) and novices (<25 caseload) during a simulated bleeding task (dissection of a bleeding hydrogel vessel within an ellipsoid trench and suturing of the defect; Fig. 2B), our results demonstrated that novices spent a greater proportion of time looking at instruments, assistants, suction, suture, and pooling blood in the most dependent area (visually salient points), whereas experts focused almost exclusively on the defect in the vessel that was the source of the bleeding (cognitively salient point). Experts also focused on fewer locations, whereas novices shifted gaze more often and to more locations.
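
One way to operationalize this attention ratio is sketched below in Python: given dwell time summed by salience category, it returns the share of attention devoted to cognitively salient areas. The labeling of each area as visually or cognitively salient is an illustrative assumption, not the scoring protocol of the study described above.

def salience_ratio(dwell_by_category: dict) -> float:
    """dwell_by_category maps 'cognitive'/'visual' to summed dwell time in seconds."""
    cognitive = dwell_by_category.get("cognitive", 0.0)
    visual = dwell_by_category.get("visual", 0.0)
    total = cognitive + visual
    return cognitive / total if total else 0.0

# An expert fixating mostly on the vessel defect scores near 1.0; a novice
# drawn to pooling blood and instruments scores lower.
print(salience_ratio({"cognitive": 42.0, "visual": 6.5}))   # ~0.87
print(salience_ratio({"cognitive": 15.0, "visual": 33.0}))  # ~0.31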

FIGURE 2: Eye-tracking technology and data. A, Computer-attached eye tracker. B, Simulated hydrogel task. C, Analysis of results utilizing scanpaths: an ordered set of fixation points (depicted by circles) connected by saccades (depicted by lines), with dwell time denoted by a larger blue-circle diameter, from a representative individual in each group. D, Analysis of results utilizing scanpath-ordered sets of fixation points.

Current advanced robotic systems have started to incorporate eye tracker systems as standard instrumentation (eg, ALF–X Robot).18 Gaze-based metrics in robotic systems might represent a truly revolutionary tool for the development of new methodologies that can reliably measure the cognitive demands of surgeons, monitor their learning curve, and facilitate video segmentation.

Mental Imagery

Mental imagery, which is synonymous with mental rehearsal and mental practice, is the creation of a quasisensory experience in the mind in the absence of the physical stimuli that can produce genuine sensory experiences.19,20 Application of this technique in the context of surgery might include imagining performing a surgical task or procedure, or addressing a potential operative complication, remote from the real training or operative environment. If proven effective, the benefits of this technique for skill acquisition and optimization could be immense, as it can be implemented under any condition or environment and at no cost.21 Research has shown that mental imagery can effectively improve surgical skills, confidence, knowledge, and team skills when used as a single modality22,23 and confers even greater benefits when used as an adjunct to physical practice.24,25 Nevertheless, the literature also suggests that, similar to other skills, mental imagery ability improves with increasing surgical experience.26 Consequently, providing training in mental imagery to surgical trainees promises to improve not only their mental imagery but also their surgical performance, as we and others have previously demonstrated.22,25,27

Being able to measure mental imagery ability accurately is therefore important for the provision of performance feedback and the monitoring of individual trainee progress. Mental imagery has traditionally been measured via self-reported assessment tools such as the Vividness of Visual Imagery Questionnaire, the Sport Imagery Questionnaire, and the Mental Imagery Questionnaire, which is more specific to laparoscopic surgery.28–30 To obtain a more objective measure of mental imagery ability, we have utilized a 16-lead EEG headset from OpenBCI (Ultracortex “Mark IV” EEG Headset, OpenBCI, Brooklyn, NY), which is a dry-electrode EEG system (Fig. 3A). Electrodes were placed in standard locations according to the international 10-20 system (Fig. 3B). Open-source software from OpenViBE (v2.1.0, Inria Rennes, Inria, France) was used to capture EEG signals. OpenViBE presented a random series of arrows pointing to the left and right (ie, from center) on a computer monitor that the user viewed and responded to. Users were instructed to imagine holding the needle robotically with whichever hand the OpenViBE arrow was pointing to on the screen and to imagine rotating the needle to the center as if they were driving the needle through tissue. Analysis of our preliminary results, shown in Figure 3, demonstrated significantly higher activation of different areas of the brain in experienced attending surgeons versus residents and students. Our findings have implications for the training and assessment of surgeon mental imagery skill but could also be applied to the analysis of surgical videos for segmentation or for training purposes. For example, monitoring of surgeon EEG activity during an operation may allow for the identification of procedural segments that were more demanding, and thus led to higher EEG activation, allowing for easy segmentation of videos and identification of those segments that need to be reviewed for assessment or training purposes. Finally, a comparison of EEG activity between experienced attendings and inexperienced surgical trainees during similar segments of the same operation might also allow for easier segmentation and a focus on those segments where the largest differences exist for training purposes. Given the large amount of data generated during this process, AI can help automate the EEG assessment and video segmentation process.
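
The signal processing behind such comparisons is not detailed above; one common approach in the motor imagery literature is mu-band (8-12 Hz) power over sensorimotor channels, which typically desynchronizes contralateral to the imagined hand. The Python sketch below is a minimal illustration under that assumption; the channel names, sampling rate, and use of Welch power estimation are ours, not the study’s pipeline.

import numpy as np
from scipy.signal import welch

def mu_band_power(channel: np.ndarray, fs: float) -> float:
    """Average power in the 8-12 Hz band for one EEG channel."""
    f, pxx = welch(channel, fs=fs, nperseg=int(2 * fs))
    band = (f >= 8) & (f <= 12)
    return float(np.mean(pxx[band]))

fs = 250  # Hz, assumed sampling rate
c3 = np.random.randn(10 * fs)  # stand-in for the left sensorimotor channel (C3)
c4 = np.random.randn(10 * fs)  # stand-in for the right sensorimotor channel (C4)
# Lateralization index: negative values suggest right-hand imagery (C3 suppression)
lat = (mu_band_power(c3, fs) - mu_band_power(c4, fs)) / (
    mu_band_power(c3, fs) + mu_band_power(c4, fs))
print(lat)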

FIGURE 3: Full-cap EEG technology and data. A, Study participant wearing the EEG cap. B, EEG electrode locations on the participant’s scalp. C, EEG neural activity differences among students, residents, and attendings during mental imagery of the same robotic suturing task. X-axis labels correspond to the sensor locations shown in B.

FUNCTIONAL NEUROIMAGING

Functional near-infrared spectroscopy (fNIRS) is a noninvasive, portable, low-cost neuroimaging technology that measures changes in near-infrared light. Blood flow in the brain is monitored by specifically measuring changes in the concentration of oxyhemoglobin and deoxyhemoglobin (Hb).

Neurovascular coupling (an increase in oxyhemoglobin with a simultaneous decrease in deoxyhemoglobin) relates neural activation to the vascular response, capturing essentially the same hemodynamic changes as functional magnetic resonance imaging.
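
The conversion from raw optical measurements to the hemoglobin concentration changes described above is typically done with the modified Beer-Lambert law: optical density changes at two near-infrared wavelengths are inverted through a 2 x 2 system of extinction coefficients. The Python sketch below is a minimal illustration; the coefficient values, source-detector distance, and differential pathlength factor are assumed round numbers, not values from the cited studies.

import numpy as np

# Rows: wavelengths (~760 nm, ~850 nm); columns: (HbO2, Hb) extinction
# coefficients in 1/(mM*cm) -- approximate values, for illustration only
E = np.array([[0.59, 1.67],
              [1.06, 0.78]])
d_cm, dpf = 3.0, 6.0  # assumed source-detector distance and pathlength factor

def hb_concentration_changes(delta_od: np.ndarray) -> np.ndarray:
    """delta_od at the two wavelengths -> (delta HbO2, delta Hb) in mM."""
    return np.linalg.solve(E * d_cm * dpf, delta_od)

# Neurovascular coupling signature: oxyhemoglobin up, deoxyhemoglobin down
print(hb_concentration_changes(np.array([0.01, 0.03])))  # -> [ 0.0018, -0.0003]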

Evidence from nonmedical fields has shown that simultaneous execution of motor and cognitive tasks results in low prefrontal activation and deterioration of motor performance. This implies that multitasking using both cognitive and motor resources may disrupt executive centers in the brain and result in loss of attention, task disengagement, and performance deterioration. Early work investigating similar changes in brain function in surgeons shows similar results.

In addition, previous research (Fig. 4) demonstrates changes in the activation of specific areas of the brain as subjects in a proficiency training paradigm, the Fundamentals of Laparoscopic Surgery (FLS) pattern-cutting task, advanced from novice to expert on the measured task.31

FIGURE 4: fNIRS FLS study and data. A, fNIRS physiology diagram and setup during FLS. B, Heatmap data showing that surgical experts generate more intense activation, shown in red.31 FLS indicates Fundamentals of Laparoscopic Surgery; fNIRS, functional near-infrared spectroscopy.

Further research is focused on analyses geared toward correlating classical scoring with brain metric measurement, skills degradation, and the impact of varying surgical environments on brain-based metric performance.32,33 Combined with advances in AI-driven procedural segmentation, we anticipate that neuroimaging will make it possible to reproducibly identify those portions of procedures that have been mastered and those where further training might be beneficial.

CONCLUSIONS

Real-time wearable monitoring technology will provide invaluable data that can interact with AI-driven video analysis to drive surgeon performance metrics and patient safety. Access to a video database of critical decisions could improve information exchange among practicing surgeons and also improve training resources. For the last 6 years, the Institute for Surgical Excellence, a 501(c)(3) public charity, has been developing standards in surgical education, training, assessment, credentialing, feedback, and long-term data collection by conducting several consensus conferences and publishing their results. Its mission is to create lasting solutions for complex healthcare problems related to emerging technologies, with the ultimate goal of improving patient care and surgical outcomes. The most recent consensus conference focused on AI and surgical metrics, bringing together robotic surgeons, educators, computer scientists, government agencies, surgical societies, and the robotic manufacturing industry to discuss the impact of AI on surgical education, training, assessment, and feedback. Transdisciplinary collaborations are necessary to more fully understand the possible ways in which advanced technologies and AI can streamline the video review process for surgeons.

REFERENCES

1. Lalys F, Jannin P. Surgical process modelling: a review. Int J Comput Assist Radiol Surg. 2014; 9:495–511
2. Ahmidi N, Tao L, Sefati S, et al. A Dataset and benchmarks for segmentation and recognition of gestures in robotic surgery. IEEE Trans Biomed Eng. 2017; 64:2025–2041
3. Ruis AR, Rosser AA, Quandt-Walle C, et al. The hands and head of a surgeon: Modeling operative competency with multimodal epistemic network analysis. Am J Surg. 2018; 216:835–840
4. D’Angelo AD, Ruis AR, Collier W, et al. Evaluating how residents talk and what it means for surgical performance in the simulation lab. Am J Surg. 2020; 220:37–43
5. D’Angelo AL, Rutherford DN, Ray RD, et al. Idle time: an underdeveloped performance metric for assessing surgical skill. Am J Surg. 2015; 209:645–651
6. Moulton CA, Regehr G, Mylopoulos M, et al. Slowing down when you should: a new model of expert judgment. Acad Med. 2007; 82(10 Suppl):S109–S116
7. Fitts PM, Jones RE, Milton JL. Eye movements of aircraft pilots during instrument-landing approaches. Aeronautical Eng Rev. 1950; 9:1–6
8. Haslbeck A, Zhang B. I spy with my little eye: analysis of airline pilots’ gaze patterns in a manual instrument flight scenario. Appl Ergon. 2017; 63:62–71
9. Merali N, Veeramootoo D, Singh S. Eye-Tracking Technology in Surgical Training. J Invest Surg. 2019; 32:587–593
10. Duchowski A. Eye Tracking Methodology: Theory and Practice. 2007. London, UK: Springer. Available from: https://www.springer.com/gp/book/9781846286087. Accessed July 20, 2020
11. Thomas LE, Lleras A. Covert shifts of attention function as an implicit aid to insight. Cognition. 2009; 111:168–174
12. Richstone L, Schwartz MJ, Seideman C, et al. Eye metrics as an objective assessment of surgical skill. Ann Surg. 2010; 252:177–182
13. Ericsson KA. An expert-performance perspective of research on medical expertise: the study of clinical performance. Med Educ. 2007; 41:1124–1130
14. Koh RY, Park T, Wickens CD, et al. Differences in attentional strategies by novice and experienced operating theatre scrub nurses. J Exp Psychol Appl. 2011; 17:233–246
15. Law B, Atkins MS, Lomax AJ, et al. Eye trackers in a virtual laparoscopic training environment. Stud Health Technol Inform. 2003; 94:184–186
16. Myles-Worsley M, Johnston WA, Simons MA. The influence of expertise on X-ray image processing. J Exp Psychol Learn Mem Cogn. 1988; 14:553–557
17. Humphrey K, Underwood G. Domain knowledge moderates the influence of visual saliency in scene recognition. Br J Psychol. 2009; 100(Pt 2):377–398
18. Bozzini G, Gidaro S, Taverna G. Robot-assisted laparoscopic partial nephrectomy with the ALF-X robot on pig models. Eur Urol. 2015:776–779
19. Richardson A. Mental Imagery. 1969. London: Routledge & Kegan Paul
20. Murphy SM. Imagery interventions in sport. Med Sci Sports Exerc. 1994; 26:486–494
21. Cocks M, Moulton CA, Luu S, et al. What surgeons can learn from athletes: mental practice in sports and surgery. J Surg Educ. 2014; 71:262–269
22. Anton NE, Bean EA, Hammonds SC, et al. Application of Mental Skills Training in Surgery: A Review of Its Effectiveness and Proposed Next Steps. J Laparoendosc Adv Surg Tech A. 2017; 27:459–469
23. Arora S, Aggarwal R, Sirimanna P, et al. Mental practice enhances surgical technical skills: a randomized controlled study. Ann Surg. 2011; 253:265–270
24. Sanders CW, Sadoski M, Bramson R, et al. Comparing the effects of physical practice and mental imagery rehearsal on learning basic surgical skills by medical students. Am J Obstet Gynecol. 2004; 191:1811–1814
25. Rao A, Tait I, Alijani A. Systematic review and meta-analysis of the role of mental training in the acquisition of technical skills in surgery. Am J Surg. 2015; 210:545–553
26. Korovin LN, Farrell TM, Hsu CH, et al. Surgeons’ expertise during critical event in laparoscopic cholecystectomy: An expert-novice comparison using protocol analysis. Am J Surg. 2020; 219:340–345
27. Stefanidis D, Anton NE, Howley LD, et al. Effectiveness of a comprehensive mental skills curriculum in enhancing surgical performance: results of a randomized controlled trial. Am J Surg. 2017; 213:318–324
28. Marks DF. Visual imagery differences in the recall of pictures. Br J Psychol. 1973; 64:17–24
29. Hall CR, Stevens DE, Paivio A. Sport Imagery Questionnaire: Test Manual. 2005. Morgantown, WV: Fitness Information Technology
30. Arora S, Aggarwal R, Sevdalis N, et al. Development and validation of mental practice as a training strategy for laparoscopic surgery. Surg Endosc. 2010; 24:179–187
31. Nemani A, Yücel MA, Kruger U, et al. Assessing bimanual motor skills with optical neuroimaging. Sci Adv. 2018; 4:eaat3807
32. Nemani A, Kruger U, Cooper CA, et al. Objective assessment of surgical skill transfer using non-invasive brain imaging. Surg Endosc. 2019; 33:2485–2494
33. Gao Y, Kruger U, Intes X, et al. A machine learning approach to predict surgical learning curves. Surgery. 2020; 167:321–327
Keywords:

Artificial intelligence; Performance assessment; Objective metrics; Wearable technology

Copyright © 2020 The Author(s). Published by Wolters Kluwer Health, Inc.