
Computer Vision Analysis of Intraoperative Video

Automated Recognition of Operative Steps in Laparoscopic Sleeve Gastrectomy

Hashimoto, Daniel A. MD, MS*,†,⊠; Rosman, Guy PhD*,‡; Witkowski, Elan R. MD, MPH*,†; Stafford, Caitlin BS*; Navarette-Welton, Allison J. BA*; Rattner, David W. MD; Lillemoe, Keith D. MD; Rus, Daniela L. PhD; Meireles, Ozanan R. MD*,†

doi: 10.1097/SLA.0000000000003460
PAPERS OF THE 139TH ASA ANNUAL MEETING

Objective(s): To develop and assess AI algorithms to identify operative steps in laparoscopic sleeve gastrectomy (LSG).

Background: Computer vision, a form of artificial intelligence (AI), allows for quantitative analysis of video by computers for identification of objects and patterns, such as in autonomous driving.

Methods: Intraoperative video from LSG cases at an academic institution was annotated by 2 fellowship-trained, board-certified bariatric surgeons. Videos were segmented into the following steps: 1) port placement, 2) liver retraction, 3) liver biopsy, 4) gastrocolic ligament dissection, 5) stapling of the stomach, 6) bagging the specimen, and 7) final inspection of the staple line. Deep neural networks were used to analyze the videos. Accuracy of operative step identification by the AI was determined by comparison with the surgeons' annotations.
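As an illustration of the evaluation described above (not the authors' actual code), per-frame step predictions from a model can be scored against surgeon annotations as the fraction of frames in agreement; the function and example labels below are hypothetical, with steps numbered 1–7 as in the Methods.

```python
def step_accuracy(predicted, annotated):
    """Fraction of video frames whose predicted operative step
    matches the surgeon's annotation (a hypothetical sketch)."""
    assert len(predicted) == len(annotated), "label sequences must align"
    correct = sum(p == a for p, a in zip(predicted, annotated))
    return correct / len(annotated)

# Example: 4 of 5 frames labeled correctly -> accuracy 0.8
print(step_accuracy([1, 1, 2, 4, 5], [1, 1, 2, 3, 5]))  # 0.8
```

In practice such evaluation is typically computed over many thousands of frames per case rather than a toy sequence like this.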

Results: Eighty-eight cases of LSG were analyzed. A random 70% sample of the annotated video clips was used to train the AI and the remaining 30% to test its performance. The mean concordance correlation coefficient among human annotators was 0.862, indicating excellent agreement. Mean (±SD) accuracy of the AI in identifying operative steps in the test set was 82% ± 4%, with a maximum of 85.6%.
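For reference, the concordance correlation coefficient reported for the human annotators can be computed with Lin's formula, which penalizes both poor correlation and systematic offset between two raters. This is a minimal sketch of the standard metric, not the authors' implementation; inputs are paired numeric ratings (e.g., step boundary timestamps) from the two annotators.

```python
def concordance_correlation(x, y):
    """Lin's concordance correlation coefficient between two
    equal-length sequences of paired ratings (illustrative sketch)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    var_x = sum((v - mean_x) ** 2 for v in x) / n
    var_y = sum((v - mean_y) ** 2 for v in y) / n
    cov_xy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / n
    # CCC = 2*cov / (var_x + var_y + (mean_x - mean_y)^2)
    return 2 * cov_xy / (var_x + var_y + (mean_x - mean_y) ** 2)

# Identical ratings yield perfect concordance of 1.0
print(concordance_correlation([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
```

Unlike the Pearson correlation, the CCC drops below 1 when one rater's values are shifted or scaled relative to the other's, which is why it is preferred for inter-rater agreement.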

Conclusions: AI can extract quantitative surgical data from video with up to 85.6% accuracy. This suggests operative video could be used as a quantitative data source for research in intraoperative clinical decision support, risk prediction, or outcomes studies.

*Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Boston, MA

†Department of Surgery, Massachusetts General Hospital, Boston, MA

‡Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA.

⊠Correspondence: dahashimoto@mgh.harvard.edu.

DAH and GR are co-first authors.

DAH was partly funded for this work by NIH grant T32DK007754-16A1 and the MGH Edward D. Churchill Research Fellowship.

DAH, GR, ORM, and DLR have a patent pending on technology derived from the work presented in this manuscript.

DAH, GR, ORM, AJN-W, CS, and DLR receive research support from Olympus Corporation. DAH is a consultant for Verily Life Sciences, Johnson & Johnson Institute, and Gerson Lehrman Group. ORM and DWR are consultants for Olympus Corporation. DLR and GR receive research support from Toyota Research Institute (TRI). No commercial funding or support was received for this project. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH, TRI, or any other entity.

DAH, MD, MS (co-first author): study conception and design, data collection, data analysis and interpretation, drafting and critical revision of manuscript.

GR, PhD (co-first author): study conception and design, data collection, data analysis and interpretation, drafting and critical revision of manuscript.

ERW, MD, MPH: study design, data collection, data analysis, critical revision of manuscript.

CS: data collection, data analysis, critical revision of manuscript.

AJN-W: data collection, data analysis, critical revision of manuscript.

DWR, MD: study conception, critical revision of manuscript.

KDL, MD: analysis and interpretation of data, critical revision of manuscript.

DLR, PhD: study design, data interpretation, critical revision of manuscript.

ORM, MD: study conception, data collection, data interpretation, critical revision of manuscript.

The authors report no conflicts of interest.

Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.