ROBOTIC SURGERY IN UROLOGY: FACTS AND REALITY: Edited by Firas Abdollah and Alexandre Mottrie

Artificial intelligence and robotic surgery: current perspective and future directions

Bhandari, Mahendra; Zeffiro, Trevor; Reddiboina, Madhu

Author Information
doi: 10.1097/MOU.0000000000000692

Abstract

A primer comparing the existing technology demonstrated in autonomous vehicles to the future of robotic surgery is available in the supplementary materials.


INTRODUCTION

Current status: Robotic surgery is a well-established modality that is preferred by surgeons and patients alike. Since the introduction of the da Vinci surgical system, the first surgical robot to succeed in clinical application, in 2000, as many as 1,037,000 procedures have been performed across 67 countries [1].

Recently, several newer robotic platforms have been introduced. Of these, the ALF-X (TransEnterix) [2], the single-port (SP) system [3] and the Ion endoluminal system [4] by Intuitive Surgical, and the Monarch robotic endoscopy system by Auris [5] have been approved by the FDA. The Monarch and Ion platforms compete for pulmonary use and are still in early clinical trials. PROCEPT is an FDA-approved aquablation robotic system developed for resection of the benign prostate gland [6]. Revo-I (Korea) [7], the Single Port Orifice Robotic Technology (SPORT) platform (Titan) [8], Medicaroid (Kawasaki and Sysmex) [9], Versius (Cambridge Medical Robotics) [9], and AVRA (German Aerospace Center) [10] are being developed in different countries. These companies are likely to jostle for market space against the da Vinci of Intuitive Surgical. All these robotic platforms differ from one another in the kind of instruments used (flexible or rigid), the number of ports (multiport or single-port), and the availability of haptic feedback. These robots are still at different developmental stages, and none appears to have a compelling disruptive feature that would pose an imminent threat to the monopoly of Intuitive Surgical Inc.

Programmable robotic systems such as Accuray's CyberKnife implement a predefined treatment plan, focusing energy from a point source to destroy tumors at a specific location [11], whereas the Mako (Stryker) [12] robot used for joint replacements and the Yomi (Neocis) [13] used for dental implants function by implementing a plan defined preoperatively from 3D reconstructions of computed tomography (CT) images. Mazor robotic guidance for spine surgery has been gaining popularity, with 54 installations and 4000 procedures to its credit [14]. This kind of robot ensures precise implementation of the preplanned steps of an operation, staying out of harm's way by avoiding deviation and thereby achieving improved patient outcomes. Despite the overall growth of intelligent robotics, artificial intelligence technology has taken longer to permeate the world of surgery, partly owing to the complex nature of interaction with human tissue but also because of a lack of perceived necessity. The full potential of interaction between intelligent systems, the surgeon, and the patient has yet to be exploited [15▪].

THE FUTURE SURGICAL ROBOT

Harnessing its true potential, we are fast approaching an era of robotic surgery in which a robot could either perform preprogrammed tasks or learn from its own experience through a feedback pipeline of good and not-so-good outcomes (reinforcement learning) [16▪]. In these robots, automation would be driven by deep-learning models (DLM) that are designed, defined, and continuously evolved by the application of artificial neural networks (ANN). ANNs are the digital equivalent of the biological nervous system. DLM built with ANN are the intermediate stage in building autonomous robots. An intelligent robot will recognize organs, tissues, and surgical targets to execute a task, either supervised by a surgeon or automatically, thereby complementing human performance. To build DLM, large amounts of high-quality annotated data are required; ideally, these data would be sourced from multiple centers following uniform standards (Fig. 1). It has been observed that DLM, when deployed for clinical use, learn on their own, and far faster than the human brain ever could. DLM have a voracious appetite for data before their performance plateaus and the law of diminishing returns comes into force. A driverless car continuously captures and processes data from multiple sources and thereby constantly improves its own performance. Similarly, it is feasible to collect surgical data through intraoperative sensors, external and internal videos, and a direct feed from the machines used to monitor the patient during anesthesia [17▪▪]. These sensors could also potentially highlight blood vessels, nerves, tumor margins, or other important structures that are hard to visualize [18]. The massive data obtained through these sources are profoundly rich and hold immense potential to underline indicators of surgical performance. DLM built with this big data would be able to anticipate unexpected events and correspondingly lend the surgeon an opportunity to preempt, intervene, and prevent potential complications. Futuristic robotic systems would recognize the specific surgeon sitting at the console and provide him or her instant access to an analysis of his or her own performance data, displayed in real time against the backdrop of the global data relevant to the procedure, for instant and smarter surgical decision-making.
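
To make these terms concrete, the following minimal sketch (our illustration, not any vendor's pipeline) shows the supervised feedback loop at the heart of a DLM, assuming the PyTorch library; the feature vectors, labels, and network are hypothetical stand-ins for annotated multicenter data and a production model.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for annotated data: each sample is a feature
# vector (e.g., derived from intraoperative sensors) with a binary
# annotation (e.g., "adverse event" vs. "no event").
inputs = torch.randn(256, 32)              # 256 samples, 32 features
labels = torch.randint(0, 2, (256,))       # expert-provided annotations

model = nn.Sequential(                     # a small ANN: two layers
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):                    # the feedback loop: predict,
    optimizer.zero_grad()                  # compare with the annotation,
    loss = loss_fn(model(inputs), labels)  # and correct the weights
    loss.backward()
    optimizer.step()
```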

FIGURE 1: A schematic for data capture and the construction of deep-learning models: the process of multicenter data capture and of building, testing, and validating deep-learning models, the precursors of autonomous robots. (a) Individual center data capture; (b) merged data engineering and model building in a secured cloud environment; and (c) feedback loop for model deployment and validation.

With the advent of cloud services and low-latency 5G internet, it has become possible to exchange information instantly between machine and machine, machine and human, and human and human [19▪▪]. Large image repositories of big data and libraries of past case information, along with the experience of master surgeons, are rich ingredients for building robust DLM. At the simplest level, a surgeon could view the data, animations, videos, and simulations for real-time interaction and, accordingly, harness their immense potential for much improved surgical decision-making [20].

Validated DLM would be stored in the cloud for on-demand access. They would not only overlay clearly delineated blood vessels in relation to a tumor but also offer ‘pearls of wisdom’ on how an expert surgeon would negotiate tricky bends in troubled waters. Furthermore, intelligent robots would be capable of selecting appropriate instruments and providing high-quality support for the surgeon's decision-making.

‘Digital Surgery.’ This health technology start-up based in London launched the first dynamic artificial intelligence system as a live operating tool [21▪]. The reference tool supports surgical teams through complex medical procedures and has been described as akin to a ‘Google Maps for surgery.’ Digital Surgery's system is aimed at the 5 billion people around the world who do not have access to well-tolerated surgical care. The platform leverages cameras and computer vision to recognize what is happening during surgery while cross-referencing a vast library of surgical guides, thereby helping to predict difficult situations and choose the correct approach. Surgical teams receive real-time analysis and feedback via audio and visual cues and can step through the guidance using a wireless pedal. This is a true intersection of technology and surgery.

Verb Surgical (Verb Surgical Inc., J&J/Alphabet, Mountain View, CA, USA). Verb is a digitally enabled surgical platform combining advanced instrumentation, low-latency connectivity, data analytics, advanced robotics, advanced visualization, simulation, and machine learning. The company projects its goal as democratizing surgery and increasing the information given to the surgeon during a procedure [16▪].

IRIS 1.0 system, Intuitive Surgical. Recently, Intuitive Surgical obtained FDA 510(k) clearance for the IRIS 1.0 system, which processes medical images and delivers personalized segmented image studies (3D anatomical models) to surgeons as a road map to the patient's surgery. The surgeon can manipulate the labeled multiplanar reconstructions on an iOS device to develop a surgical plan. It is also possible to display the 3D models on the da Vinci surgical system's high-resolution stereo viewer through the TilePro input, via a hardwired connection from the iOS device. This tool allows image processing, review, analysis, communication, and media interchange of multidimensional digital images acquired from CT [22].

DEVELOPING AUTONOMOUS ROBOTS

The primary prerequisite for developing autonomous robots is the availability of reliable, relevant, and robust data. It is from here that additional building blocks, generally reliant upon computer vision, may be laid. Computer vision is a deep-learning technique for understanding image data, dealing with tasks such as object detection, classification, and segmentation. Convolutional neural networks are a type of deep-learning algorithm designed to process data that exhibit natural spatial invariance (images whose meaning does not change under translation). Object detection and segmentation algorithms identify the specific parts of an image that correspond to objects.
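
As a minimal sketch of these ideas (assuming PyTorch; the network and input are illustrative, not a model we deployed), the fragment below shows how stacked convolutions turn an image into a per-pixel class map: the same filters are applied at every location, so the output respects the spatial invariance described above.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 3  # e.g., kidney, tumor, background

# A toy fully convolutional segmenter: class scores for every pixel.
segmenter = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, NUM_CLASSES, kernel_size=1),  # per-pixel class scores
)

frame = torch.randn(1, 3, 256, 256)        # one RGB surgical frame
scores = segmenter(frame)                  # (1, NUM_CLASSES, 256, 256)
mask = scores.argmax(dim=1)                # predicted class per pixel
```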

Currently, some of the tools required to build autonomous robots are thought to be 2D surgical scene segmentation [23], depth-map reconstruction [23], surgical skill evaluation [23], and surgical simulation and planning [24]. Owing to the limited availability of high-quality data, many of these building blocks are being built from the data of a select few competitions. In our excitement at the public release of data, we competed in two separate competitions [23,24]: one for 3D segmentation and the other for 2D segmentation. The former was geared toward presurgical planning and diagnostics, the latter toward real-time object detection.

PRESURGICAL PLANNING

3D virtual and printed reconstructions and 2D cross-sectional imaging have been increasingly adopted to facilitate optimal surgical planning and effective navigation during surgery [25]. In yet another approach, indocyanine green, a fluorescent dye, has emerged as a well-tolerated technology for identifying vascular and other anatomical structures during surgery [26].

Recently, DLM have been used as tools for preoperative imaging reconstruction to serve as surgical guides. However, the lack of annotated data is the biggest hurdle for researchers building DLM as precursors to autonomous robots. We circumvented this problem by participating in two challenge competitions.

Challenge 1: KiTS 2019, sponsored by the University of Minnesota, Intuitive Surgical Inc., the National Institutes of Health, and Climb 4 Kidney Cancer. The organizers provided 300 CT images, of which 210 were annotated. As one of 107 contestants, we built a deep-learning model to identify the kidney and the kidney tumor from the bulk CT; the 90 nonannotated images were used as the test data set [27]. The images are used for pattern recognition: objects are segmented in 3D space, where individual voxels (unit volumes) are allocated to one of a set of possible groups. In the case of the KiTS19 challenge [27], the voxels were associated with kidney, tumor, or background. From here, a surface of the kidney can be reconstructed so that its position within the body may more easily be visualized in three dimensions (Fig. 2). It is this 3D rendering of a kidney that allows surgeons to plan their surgical approach, minimizing the chance of unexpected outcomes, or to practice that approach in a low-risk environment. We built a model with a Dice coefficient of 0.824, which performed well when benchmarked against the top 35 competitors.
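
For reference, the Dice coefficient reported above measures voxel overlap between prediction and ground truth: twice the intersection divided by the total size of the two masks, so 1.0 means perfect agreement. The sketch below (with toy volumes in place of real CT segmentations) shows the computation.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Sørensen-Dice coefficient between two binary voxel masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0

# Toy 3D volumes standing in for segmented kidney voxels.
truth = np.zeros((64, 64, 64), dtype=bool)
truth[20:40, 20:40, 20:40] = True
pred = np.zeros_like(truth)
pred[22:42, 20:40, 20:40] = True          # slightly shifted prediction
print(f"Dice = {dice(pred, truth):.3f}")  # 0.900 for this toy overlap
```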

FIGURE 2: 3D segmentation and reconstruction of the kidney surface and tumor from a CT scan [27]. Our model's performance is a Dice coefficient of 0.824, benchmarked against the winning team's score of 0.912. (a) A coronal slice of subject 069 demonstrating the ground-truth segmentation; kidneys are maroon and the right upper polar tumor is pink. (b) The ground truth as provided by the organizers. (c) Our model's inference for the kidney and tumor.

Challenge 2: 2D surgical segmentation. Under this challenge, hosted by the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society in 2018 [28], we were provided with 16 videos of porcine kidney surgery and were required to build a two-dimensional segmentation model identifying 11 distinct objects (kidney, covered kidney, clasper, wrist of the clasper, shaft of the instrument, thread, ultrasound probe, vascular clamp, small intestine, large intestine, and background).

Image segmentation is the process by which individual pixels are associated with a finite set of objects. Currently, state-of-the-art models produce good results only under near-ideal circumstances; in our experience, the mere presence of smoke, blood, or an out-of-focus object dramatically reduces segmentation performance. Such results only highlight the need to develop DLM further on environments that include these aberrations. Once robust 2D segmentation is achieved, the next step of interest would be the reconstruction of 3D scenes from a binocular surgical view: depth-map reconstruction. Such a step would allow comparison of the expected location and view of the interior, as collected from a presurgical scan, with the current view and location from an intrasurgical perspective. Examples of our prior work in this arena are shown in Fig. 3.
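
The geometry behind depth-map reconstruction is worth a brief illustration: for a rectified stereo pair, per-pixel depth is inversely proportional to disparity (depth = focal length × baseline / disparity). The sketch below assumes hypothetical camera parameters and a randomly generated disparity map standing in for the output of a stereo-matching model; it is not calibration data from any real endoscope.

```python
import numpy as np

focal_length_px = 700.0   # focal length in pixels (hypothetical)
baseline_m = 0.004        # distance between the stereo lenses, metres

# Disparity map as would be produced by a stereo-matching model;
# random positive values stand in for real output here.
disparity = np.random.uniform(5.0, 50.0, size=(256, 256))

depth_m = focal_length_px * baseline_m / disparity  # per-pixel depth
print(f"depth range: {depth_m.min():.3f}-{depth_m.max():.3f} m")
```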

FIGURE 3: Comparison of semantic segmentation of surgical tools and kidney. Model built from our entry into the Robotic Scene Segmentation sub-challenge of the 2018 EndoVis competition, held under the aegis of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2018). From left to right: (a) input, (b) ground truth, (c) RediMinds model output. The top row corresponds to frame 83 of sequence 2 and the bottom row to frame 59 of sequence 12.

SURGICAL SKILL EVALUATION

Ershad et al.[29] studied the stylistic behavioral traits of trainees in robotic surgery in 14 trainees at four different levels of expertise, ranging from novice to expert. The authors applied machine learning and data-driven feature-extraction methods to assess tasks performed on the da Vinci simulator. The proposed automatic evaluation of a trainee's movement style could provide the trainee with online, personalized feedback on areas of improvement for specific surgical tasks [29]. Fard et al.[30] proposed a predictive framework for objective skill assessment based on kinematic data collected by the da Vinci robot. Their model could differentiate between expert and novice surgeons with accuracies of 82.3 and 89.9% on the tasks of knot tying and suturing, respectively.
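
A minimal sketch of this style of skill assessment follows (not Ershad et al.'s or Fard et al.'s actual frameworks): hand-crafted kinematic features such as path length, mean speed, and task time feed a standard classifier that labels a trial as novice or expert. The features and labels below are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 120
# Columns: path length (m), mean speed (m/s), task time (s) - all
# hypothetical kinematic features, not data from the cited studies.
X = rng.normal(loc=[1.5, 0.02, 90.0], scale=[0.5, 0.005, 20.0],
               size=(n_trials, 3))
y = (X[:, 2] < np.median(X[:, 2])).astype(int)  # 1 = "expert": faster

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)       # accuracy, 5-fold CV
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```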

Evaluating a surgeon's skill from an example of their work has more applications than merely ranking surgeons by some criterion. It also allows autonomous robots to simulate surgeries beforehand and provide an outlook on the expected results and path of the surgery. Thus, when combined with the other building blocks, an ecosystem in which to train machine-learning models to perform surgeries goes from a glimmer in an engineer's eye to a practicable path for advancing surgery.

BIG-DATA CAPTURE AND THE LOGISTICS OF DATA SHARING

Healthcare in the United States is expected to generate 2314 exabytes (an exabyte is one billion gigabytes) of data by 2020, a volume growing at 48% annually (Stanford report); yet most of these data remain uncaptured or unutilized. Currently, the available data are unstructured and heterogeneous, sitting in silos and not ready for meaningful use. Organizing and cleaning data is a labor-intensive, immensely costly, and time-consuming task [16▪].

In the current medicolegal climate and data-conscious society, organizing such a repository is an endeavor of astronomical complexity and cost. As the balance between investing in scientific innovation and remaining business-wise viable is often difficult to achieve, even for technology companies with deep pockets, it is difficult to build the repositories that are integral to developing autonomous surgical robots. Gaining the consent of patients (current and past) would be an uphill task with the potential to become a legal and ethical minefield.

As a corollary, world-class artificial intelligence researchers have extremely limited access to high-quality surgical data for their research, a huge missed opportunity for the growth of autonomous robotic surgery.

It is worth highlighting that legal agreements between hospitals and the companies investing in innovation have been challenged by regulatory bodies guarding the interests of patients; for example, the data-sharing agreement between Google and the University of Chicago Medical Center has been challenged legally over privacy concerns [31].

LIMITATIONS AND CHALLENGES

Despite all the promise of artificial intelligence technology, there are formidable challenges and pitfalls. An overriding issue for the future of artificial intelligence in medicine rests with assuring the privacy and security of data. A hacked algorithm carries the risk of harming people at large scale. New models of health-data ownership that grant rights to the individual, high-security platforms, and potential governmental intervention would create a natural tension between the advancement of science and the rights of the individual patient [32▪▪].

CONCLUSION

Though DLM have been successfully deployed in robots used outside the field of surgery, their entry into the operating room remains elusive. The major hurdles include the lack of a real-time big-data collection culture in the surgical sciences and the issues related to data sharing and utilization. For actionable deployment of DLM into surgical practice, there is a dire need for data sharing between hospitals, surgeons, and research groups that follows high ethical and legal standards and remains viable from the investment point of view.

Acknowledgements

The authors thank Intuitive Surgical for the MICCAI 2018 Challenge and for access to data; KiTS 2019 for data access; Dave Meinhard, Vattikuti Foundation, for videography; and Ajay Sharma, Anubhav Reddy Nallabasannagari, and Hamid Ali for professional support and guidance.

Financial support and sponsorship

None.

Conflicts of interest

There are no conflicts of interest.

REFERENCES AND RECOMMENDED READING

Papers of particular interest, published within the annual period of review, have been highlighted as:

  • ▪ of special interest
  • ▪▪ of outstanding interest

REFERENCES

1. Intuitive Surgical. Annual report 2018.
2. Samalavicius NE, Janusonis V, Siaulys R, et al. Robotic surgery using Senhance® robotic platform: single center experience with first 100 cases. J Robot Surg 2019.
3. Chan JY, Tsang RK, Holsinger FC, et al. Prospective clinical trial to evaluate safety and feasibility of using a single port flexible robotic system for transoral head and neck surgery. Oral Oncol 2019; 94:101–105.
4. Ion lung biopsy system from Intuitive Surgical wins FDA approval. Available from: https://www.therobotreport.com/ion-lung-biopsy-intuitive-surgical-fda/.
5. J&J's Auris touts Monarch robotic bronchoscopy feasibility study: MassDevice. Available from: https://www.massdevice.com/jjs-auris-touts-monarch-robotic-bronchoscopy-feasibility-study/.
6. Misrai V, Rijo E, Zorn KC, et al. Waterjet ablation therapy for treating benign prostatic obstruction in patients with small- to medium-size glands: 12-month results of the first French Aquablation Clinical Registry. Eur Urol 2019; S0302283819305147.
7. Kang CM, Chong JU, Lim JH, et al. Robotic cholecystectomy using the newly developed Korean Robotic Surgical System, Revo-I: a preclinical experiment in a porcine model. Yonsei Med J 2017; 58:1075.
8. Seeliger B, Diana M, Ruurda JP, et al. Enabling single-site laparoscopy: the SPORT platform. Surg Endosc 2019; 33:3696–3703.
9. Rassweiler JJ, Autorino R, Klein J, et al. Future of robotic surgery in urology. BJU Int 2017; 120:822–841.
10. Home-AVRA Medical Robotics. Available from: https://www.avramedicalrobotics.com/.
11. Alexander R, Schwartz C, Ladisich B, et al. CyberKnife radiosurgery in recurrent brain metastases: do the benefits outweigh the risks? Cureus 2018.
12. Robinson PG, Clement ND, Hamilton D, et al. A systematic review of robotic-assisted unicompartmental knee arthroplasty: prosthesis design and type should be reported. Bone Joint J 2019; 101-B:838–847.
13. Wu Y, Wang F, Fan S, Chow JK. Robotics in dental implantology. Oral Maxillofac Surg Clin N Am 2019; 31:513–518.
14. Khan A, Meyers JE, Siasios I, Pollina J. Next-generation robotic spine surgery: first report on feasibility, safety, and learning curve. Oper Neurosurg 2019; 17:61–69.
15▪. Mirnezami R, Ahmed A. Surgery 3.0, artificial intelligence and the next-generation surgeon. Br J Surg 2018; 105:463–465.

This article discusses the confluence of pressures on doctors, ethical frameworks by which robotic surgical devices may be implemented, and highlights certain philosophical questions critical to the responsible deployment of autonomous robotic devices.

16▪. Peters BS, Armijo PR, Krause C, et al. Review of emerging surgical robotic technology. Surg Endosc 2018; 32:1636–1655.

This article highlights the features and limitations of various modern robotic surgery platforms.

17▪▪. Chand M, Ramachandran N, Stoyanov D, Lovat L. Robotics, artificial intelligence and distributed ledgers in surgery: data is key!. Tech Coloproctol 2018; 22:645–648.

This article compares the requirements and potential avenues for the application of emerging technologies to transition into truly robotic surgery.

18. Rai S. Cognitive computing and artificial intelligence systems market in healthcare. BCC Research.
19▪▪. Kim SS, Dohler M, Dasgupta P. The Internet of Skills: use of fifth-generation telecommunications, haptics and artificial intelligence in robotic surgery. BJU Int 2018; 122:356–358.

This article highlights futuristic technological advances such as the low-latency fifth-generation (5G) network, the Internet of Things (the ultrasensitive miniaturized sensors needed for real-time sensing), haptics, and artificial intelligence in robotic surgery.

20. How robots and AI are creating the 21st-century surgeon: robotics business review. Available from: https://www.roboticsbusinessreview.com/health-medical/how-robots-and-ai-are-creating-the-21st-century-surgeon/.
21▪. Digital Surgery's AI platform guides surgical teams through complex procedures | VentureBeat. Available from: https://venturebeat.com/2018/07/16/digital-surgerys-ai-platform-guides-surgical-teams-through-complex-procedures/.

This article highlights the current use of machine learning in surgery.

22. Intuitive's IRIS 1.0 Medical Image Processing Software. Available from: http://surgrob.blogspot.com/2019/04/intuitives-iris-10-medical-image.html.
23. EndoVisSub2019-SCARED: Home. Available from: https://endovissub2019-scared.grand-challenge.org/.
24. KiTS19: Home. Available from: https://kits19.grand-challenge.org/.
25. Porpiglia F, Amparore D, Checcucci E, et al. Current use of three-dimensional model technology in urology: a road map for personalised surgical planning. Eur Urol Focus 2018; 4:652–656.
26. Veccia A, Antonelli A, Hampton LJ, et al. Near-infrared fluorescence imaging with indocyanine green in robot-assisted partial nephrectomy: pooled analysis of comparative studies. Eur Urol Focus 2019; S240545691930080X.
27. Heller N, Sathianathen N, Kalapara A, et al. The KiTS19 challenge data: 300 kidney tumor cases with clinical context, CT semantic segmentations, and surgical outcomes. arXiv:1904.00445 [cs, q-bio, stat]; 2019. Available from: http://arxiv.org/abs/1904.00445.
28. EndoVisSub 2018: Robotic scene segmentation: Home. Available from: https://endovissub2018-roboticscenesegmentation.grand-challenge.org/Home/.
29. Ershad M, Rege R, Majewicz Fey A. Automatic and near real-time stylistic behavior assessment in robotic surgery. Int J Comput-Assist Radiol Surg 2019; 14:635–643.
30. Fard MJ, Ameri S, Darin Ellis R, et al. Automated robot-assisted surgical skill evaluation: predictive analytics approach. Int J Med Robot Comput-Assist Surg 2018; 14:e1850.
31. Google and the University of Chicago are sued over data sharing, The New York Times. Available from: https://www.nytimes.com/2019/06/26/technology/googleuniversity-chicago-data-sharing-lawsuit.html.
32▪▪. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019; 25:44–56.

This is a complete and up-to-date review of the overall role of the convergence of artificial and human intelligence. Dr. Topol is an authority on the subject.

Keywords:

artificial intelligence; big data; deep-learning models; machine learning; robotic surgery

Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.