
SURGICAL PERSPECTIVES

Artificial Intelligence and the Future of Surgical Robotics

Panesar, Sandip MD, MSc; Cagle, Yvonne MD†,§; Chander, Divya MD, PhD‡,§; Morey, Jose MD§,||,¶,#,∗∗,††; Fernandez-Miranda, Juan MD; Kliot, Michel MD

doi: 10.1097/SLA.0000000000003262

In 2016, Shademan et al reported complete in vivo, autonomous robotic anastomosis of porcine intestine using the Smart Tissue Autonomous Robot (STAR).1,2 Although conducted in a highly controlled experimental setting, STAR quantitatively outperformed human surgeons in a series of ex vivo and in vivo surgical tasks. These trials demonstrated, for the first time, the nascent clinical viability of an autonomous soft-tissue surgical robot. Unlike conventional surgical robots, which are controlled in real time by humans and have become commonplace in particular subspecialties, STAR was controlled by artificial intelligence (AI) algorithms and received input from an array of visual and haptic sensors.

Applications of AI to clinical data for diagnostic purposes have already begun to demonstrate capability approximating that of specialist physicians.3,4 Consequently, clinical AI has received much attention from within and outside the medical community.5 The STAR trials give clinical AI a surgical context and provide a glimpse into the future, should autonomous surgical devices be further developed. Nevertheless, their development must be rationalized and, for widespread utilization, they must confer either technical or financial advantages over conventional surgical techniques. We expand below upon how this may unfold.

DEFINITIONS OF AUTONOMY

The International Organization for Standardization (ISO 8373:2012) defines autonomy as “an ability to perform intended tasks based on current state and sensing without human intervention.” However, “autonomy” is not a singular state, but rather a scale on which the degree of human intervention is traded against full independence (Fig. 1). Examples of robotic surgical devices of variable autonomy include the da Vinci (Intuitive Surgical, Sunnyvale, CA), a “master-slave” robot completely dependent upon human control; the TSolution-One (previously ROBODOC; THINK Surgical, Fremont, CA) orthopedic robot; and the Mazor X (Mazor Robotics, Caesarea, Israel) spinal robot. The latter 2 offer reduced levels of human input for a limited range of surgical tasks. Partially autonomous robotic devices such as the CyberKnife (Accuray, Sunnyvale, CA) are already in clinical use; however, as the CyberKnife uses external radiation beams, it cannot truly be considered a “surgical robot” in the context of this piece.

FIGURE 1:
A comparison between the evolution of autonomous vehicles and the autonomization of surgery, adapted from Topol (2019), Figure 5. The concept distinguishes levels (0–5) of autonomy according to the technology and requirements involved, with analogies drawn between driving and the performance of surgery. Level 0 encompasses the traditional practice of surgery as it exists today: a human surgeon performs all aspects of the operation using hand-held tools. At level 1, intraoperative image guidance may be performed in real time, for example, intraoperative fluoroscopy or stereotactic navigation, but humans still perform all aspects of physical intervention. At level 2, robotics combined with image guidance may assist in the surgical procedure, for example, the TSolution-One or Mazor X robots. These permit a reduced level of human input by automating critical components of the procedure (such as guiding trajectories of instruments) to reduce errors. At level 3, the device is capable of both navigating and performing limited surgery. The real-life analogy to this level of automation is the CyberKnife stereotactic oncology robot, which plans and conducts “surgery” autonomously; as it uses external radiation beams, it is not strictly performing surgery, and its clinical versatility is limited. At level 4, the robot is capable of performing a wide range of surgical procedures largely unaided. Humans may be required for the most complex portions of the procedure, or alternatively solely for supervision (or for legal purposes) and for assistance should the robot require it. Level 5 automation is unlikely in the near future and would require a surgical device to be extensively trialed and proven efficacious. In theory, this device would be able to perform all components of a range of surgical procedures effectively and safely, and human monitoring or intervention would not be necessary. Interestingly, Topol (2019) stated that level 4 and 5 autonomy were undesirable for medical AI because they exclude human input. Nevertheless, full autonomy may be beneficial for surgical devices in situations where human physicians are unavailable, such as on space missions or in conflict zones.
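For readers who think in code, the caption's taxonomy can be summarized as a simple enumeration. This is a minimal illustrative sketch; the level names are our own shorthand, not terminology from the figure.

```python
from enum import IntEnum

class SurgicalAutonomy(IntEnum):
    """Levels of surgical autonomy paraphrased from Figure 1 (names are shorthand)."""
    NO_AUTOMATION = 0    # human surgeon, hand-held tools only
    IMAGE_GUIDANCE = 1   # real-time guidance (fluoroscopy, stereotaxy); human operates
    TASK_ASSISTANCE = 2  # robot automates critical sub-tasks (eg, instrument trajectories)
    CONDITIONAL = 3      # device navigates and performs limited "surgery" (eg, CyberKnife)
    HIGH = 4             # robot operates largely unaided; human supervises or assists
    FULL = 5             # all steps performed autonomously; no human monitoring needed
```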

RATIONALE FOR AUTONOMOUS SURGICAL DEVICES

Human surgical performance is dictated by numerous physical, mental, and technical variables, meaning that surgical consistency is difficult both to quantify and to achieve. These factors may contribute to the high variability in functional outcomes, complication rates, and survival observed across institutions and geographies. Conventional surgical robots possess certain advantages over humans (insusceptibility to fatigue, tremor resistance, scalable motion, greater range of axial movement6), which have been shown to produce enhanced margins and lower morbidity rates7 for certain procedures. Combining AI control algorithms with the inherent advantages of surgical robots may therefore benefit surgical practice by reducing technical errors and operative times, enhancing access to hard-to-reach body areas, and improving outcomes by removing (or reducing) the potential for human error.2

Sociopolitical issues may provide a catalyst for further development and refinement of autonomous surgical robots. A device controlled by AI-based algorithms may permit rapid dissemination of surgical skills via the Internet or mobile platforms, potentially democratizing surgical care and standardizing surgical outcomes independent of geographic or economic constraints. A clinically capable robot may also be able to provide surgical care in environments where care provision is lacking, for example, aboard a spacecraft in deep space,8 where access to surgical care will be severely restricted, and following environmental disasters or in war zones, where healthcare infrastructure has sustained damage or is unavailable.

PROPOSED FRAMEWORK FOR AUTONOMOUS SURGICAL ROBOTS

Future autonomous surgical robots will have the ability to “see,” “think,” and “act” without active human intervention to achieve a predetermined surgical goal safely and effectively. Three parameters define the task of an autonomous surgical robot: mission complexity, environmental difficulty, and human independence9 (Fig. 2A). To enable this, the autonomous robot possesses visual and physical sensors that perceive the environment, a central processor that receives sensory input and calculates outputs, and mechanical actuators that permit physical task completion. Given the highly deformable nature of soft-tissue environments, the presence of hollow organs susceptible to rupture, and the delicacy of tissues, achieving a clinically viable, versatile autonomous surgical device will require considerable development and integration of control algorithms, robotics, computer vision, and smart sensor technology, in addition to extensive trial periods.

FIGURE 2:
(A) Diagram representing the operational framework for autonomous surgical devices, adapted from the Autonomy Levels For Unmanned Systems (ALFUS)9 framework. Three factors define the operation of the device. Surgical tasks may be of variable complexity (eg, retraction vs suturing), and a total procedure may be composed of a series of simple and complex tasks that must be completed in sequence (mission complexity). The environment in which the robot must operate may be variable, for example, a cavity containing soft-tissue structures versus the bones of a limb (environmental difficulty). These environments must be monitored in real time by the robot's sensors, and appropriate action must be taken to modify actions to avoid danger while simultaneously advancing the robot toward the ultimate surgical goal. Finally, the device's level of autonomy is determined by its independence level. For example, the presently available TSolution-One system partially automates parts of an arthroplasty, reducing the requirement for human input, whereas the STAR trials offered near-complete independence from human surgeons, albeit in highly controlled experimental settings. (B) A simplified schematic demonstrating how a robot's innate ML algorithms may be trained to perform surgery: the untrained robot (1) is trained using multimodal sources of data (2) and tested using various forms of skill analysis (3) (see Kassahun et al, 2016)10 to yield an appropriately trained surgical robot (4). (C) The surgical robot possesses an array of sensors to provide a real-time stream of multimodal sensory data. The robot's processors and algorithms integrate these data sources, in addition to data from the environment (eg, patient vitals), to produce the surgical output via the robot's actuators. These physical outputs allow the robot to achieve its surgical goal within its environment, which is subsequently modified physically by its actions. The robot's sensory apparatus thus monitors all subsequent changes in real time to modify its future actions.
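The sense-integrate-act cycle in panel C maps naturally onto a control loop. The sketch below is a minimal illustration only; `robot`, its methods, and the loop rate are hypothetical placeholders, not an existing device API.

```python
import time

def control_loop(robot, goal, hz=100):
    """Closed-loop sketch of Fig. 2C: sense -> integrate -> act, repeated in real time.
    The `robot` interface is hypothetical; no method here refers to a real API."""
    period = 1.0 / hz
    while not robot.goal_reached(goal):
        obs = robot.read_sensors()               # multimodal stream: vision, haptics, vitals
        state = robot.estimate_state(obs)        # fuse observations into an environment estimate
        action = robot.plan_action(state, goal)  # choose the next motion toward the goal
        if robot.is_unsafe(action, state):       # check safety before any physical output
            robot.halt_and_alert()               # hand control back to the human supervisor
            break
        robot.actuate(action)                    # actuators modify the surgical field...
        time.sleep(period)                       # ...which the sensors observe on the next tick
```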

CONTROLLING AND TEACHING THE ROBOT

The robot must perform 2 intrinsic functions: first, executing the preprogrammed goal of the procedure it has been tasked with (its mission), and second, dynamically responding to the ever-changing surgical environment. The robot's “surgical skill” consists of its ability to first map its perception (ie, sensory inputs) to an estimated environmental state, and then map that estimate to a future action (ie, robotic outputs) in the most efficient way possible. Machine learning (ML), a form of AI, is the ability of a machine to learn from prior experiences, and has been proposed as a means to control the actions of autonomous devices. Appropriately trained algorithms can therefore enable a robot presented with novel yet similar data (Figs. 2B and 3) to predict an outcome10,11 and thus achieve its tasks in real time based upon its “experience.”
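The two mappings described above (perception to state estimate, state estimate to action) can be expressed as a composed pair of learned functions. The sketch below is purely illustrative: the linear-plus-tanh form, the dimensions, and the random weights stand in for whatever models training would actually produce.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 64))  # perception -> estimated environmental state
W2 = rng.normal(size=(6, 16))   # estimated state -> robotic output (eg, 6-DOF motion)

def perceive(sensor_inputs: np.ndarray) -> np.ndarray:
    """Map raw multimodal sensory inputs to an estimated environmental state."""
    return np.tanh(W1 @ sensor_inputs)

def act(state_estimate: np.ndarray) -> np.ndarray:
    """Map the state estimate to the next action (actuator commands)."""
    return W2 @ state_estimate

# A trained robot presented with novel but similar data predicts an action:
action = act(perceive(rng.normal(size=64)))
```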

FIGURE 3:
A schematic demonstrating the types of machine learning relevant to training autonomous surgical devices. Both unsupervised and supervised learning may be applied to continuous or categorical data sources, for each of which there are specialized techniques. Some ML techniques are rooted in traditional statistical principles (eg, regression), whereas others draw on decision mathematics and computer science. Reinforcement learning is rooted in psychological principles: the agent (robot) performs actions within its environment to achieve an end goal, and each action that brings it closer to (or further from) that goal yields a positive (or negative) reward. Positive reinforcement strengthens the algorithm by rewarding task-positive behavior; an outside observer (surgeon) may also monitor the actions and offer rewards. It is likely that combinations of supervised, unsupervised, and reinforcement techniques will be required to produce an autonomous device with the versatility required to perform a range of soft-tissue surgical procedures.
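To make the reward mechanism concrete, the sketch below shows tabular Q-learning, one of the simplest reinforcement learning algorithms. It is a toy illustration under an assumed gym-style environment interface (`reset`, `step`, `actions`), not a surgical controller.

```python
import random

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning: actions that move the agent toward its goal earn reward,
    which strengthens the corresponding state-action values over many episodes."""
    Q = {}  # maps (state, action) -> learned value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy: mostly exploit rewarded behavior, occasionally explore.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q.get((state, a), 0.0))
            next_state, reward, done = env.step(action)
            best_next = max(Q.get((next_state, a), 0.0) for a in env.actions)
            # The reward (positive or negative) updates this state-action value.
            q = Q.get((state, action), 0.0)
            Q[(state, action)] = q + alpha * (reward + gamma * best_next - q)
            state = next_state
    return Q
```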

ML is most beneficial when applied to large, unwieldy datasets that are otherwise uninterpretable by humans. The robot's sensory apparatus produces a continuous stream of quantifiable data, to which ML algorithms will be applied in real time, so that its processors can modify actions in synchrony with environmental changes and based upon its training (Fig. 2C). If the sensory stream is of comparable fidelity to human senses, such analytical algorithms may, at some point, demonstrate superiority over human perception. AI algorithms may therefore be able to delineate “occult” information in the sensory data that is otherwise imperceptible to humans, thereby predicting or detecting adverse events at a level exceeding human ability. The demonstrated performance of AI in detecting pathologies from radiological data is an early example of this phenomenon. Nevertheless, collating and analyzing multimodal sensory information to mimic a human surgeon's perception in real time is markedly more complex than applying AI algorithms to a single type of radiological scan for diagnostic purposes.
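As a toy illustration of how “occult” deviations in a sensory stream might be flagged before a human would notice them, the sketch below applies a rolling z-score to a single signal (eg, one vital sign). The window, threshold, and warm-up length are arbitrary assumptions; a real device would use far richer multimodal models.

```python
from collections import deque
import statistics

def anomaly_monitor(stream, window=200, threshold=4.0):
    """Yield (index, value) for samples deviating sharply from the recent baseline
    of a sensor stream. A toy rolling z-score, not a clinical algorithm."""
    history = deque(maxlen=window)
    for t, value in enumerate(stream):
        if len(history) >= 30:  # require a baseline before judging deviations
            mean = statistics.fmean(history)
            sd = statistics.pstdev(history) or 1e-9  # avoid division by zero
            if abs(value - mean) / sd > threshold:
                yield t, value  # candidate adverse event for the controller to act on
        history.append(value)
```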

The robot must also be taught how to perform surgery. Proposed methods to “teach” the robot include directly programming it (explicit learning) and having it directly observe a surgeon or video (implicit learning); the robot may even train in virtual reality. Nevertheless, effectively mimicking a human surgeon requires not only the ability to judge all relevant sensory inputs (ie, visual and tactile features of the surgical field) and positional information, but also a database of explicit knowledge on how to proceed safely toward the surgical goal. Consequently, it is unlikely that implicit or explicit techniques can be used exclusively, and a combination of both, with continuous reinforcement and modification by domain experts (ie, human surgeons), will be required (see review by Kassahun et al, 2016).10 In theory, however, the rate of learning of a robot provided with a suitable training database of procedures and teaching examples would be limited only by its hardware and software capabilities. This is in contrast to humans, whose learning ability is limited by mental, physical, and time constraints.
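One way the combination might look in practice is behavioral cloning (implicit: supervised learning from recorded surgeon demonstrations) wrapped in hand-written safety rules (explicit). The sketch below uses random placeholder data and an illustrative actuator limit; it is one possible pattern, not the method proposed in the cited literature.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Implicit learning: placeholder demonstration data standing in for real recordings
# of surgical-field features paired with the surgeon's corresponding tool motions.
observations = rng.normal(size=(1000, 32))
surgeon_actions = rng.normal(size=(1000, 6))

policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
policy.fit(observations, surgeon_actions)  # clone the demonstrated behavior

MAX_SPEED = 1.0  # explicit knowledge: an illustrative hard actuator limit

def safe_action(observation: np.ndarray) -> np.ndarray:
    """Learned (implicit) policy output, constrained by programmed (explicit) rules."""
    action = policy.predict(observation.reshape(1, -1))[0]
    return np.clip(action, -MAX_SPEED, MAX_SPEED)
```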

AUTONOMOUS ROBOTIC SURGERY IN ADVERSE ENVIRONMENTS

In space, in war zones, or in conditions of environmental disaster, communication latencies and bandwidth restrictions may render human-controlled telemedical solutions unfeasible. For example, on a proposed Mars exploration mission, which is intended to last 2.5 years and will take humans over 50 million miles from Earth, one-way communication latency is anywhere between 4 and 24 minutes. Crew medics may also not have appropriate medical training to counter the full range of potential pathologies explorers may face.8 A fully autonomous surgical approach may therefore be utilized for an entire procedure if the technology is available. Otherwise, a partially autonomous approach may be used, mimicking the STAR trials: crew with minimal medical training may be able to perform the access and closure portions of the procedure, whereas the robot performs the more complex parts, for example, bowel anastomosis.
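The quoted latency is simply light travel time over the Earth-Mars distance, which varies across the synodic cycle; a quick check (the three distances below are representative values, roughly minimum, intermediate, and maximum):

```python
C_KM_S = 299_792  # speed of light in km/s

for dist_million_km in (55, 225, 400):  # approximate Earth-Mars distance range
    one_way_min = dist_million_km * 1e6 / C_KM_S / 60
    print(f"{dist_million_km} million km -> one-way delay of {one_way_min:.1f} min")

# Prints roughly 3.1, 12.5, and 22.2 minutes, consistent with the quoted 4 to 24
# minute range; a round-trip exchange therefore takes tens of minutes, ruling out
# real-time teleoperation from Earth.
```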

INCREASING GLOBAL ACCESS TO SURGICAL CARE

The da Vinci robot (Intuitive Surgical, Sunnyvale, CA), first introduced in 2000, is the predominant commercially available robotic surgery system. Despite almost 20 years of commercial availability, its acquisition (US$1–2.5 million per device) and upkeep costs remain high.12 Alternatives to the da Vinci are anticipated in the near future, and these competitors are expected to drive down costs while advancing robotic technology.13 If cost reductions and concurrent technological advances (eg, Moore's Law14) occur in concert with widespread adoption by clinical communities and public markets, a clinically feasible autonomous surgical device may become commercially available. As with all new technologies, early designs are likely to be costly, with a small number of initial users. Nevertheless, if their clinical efficacy is proven and their technology continuously refined, economies of scale may make them affordable for emerging economies and underserved medical environments.

CONCLUSIONS

The realization of clinically feasible autonomous surgical robots will likely occur before the end of the 21st century. The combination of AI with surgical robotics may permit the augmentation of surgical capability, optimizing outcomes and increasing access to care.

REFERENCES

1. Leonard S, Wu KL, Kim Y, et al. Smart tissue anastomosis robot (STAR): a vision-guided robotics system for laparoscopic suturing. IEEE Trans Biomed Eng 2014; 61:1305–1317.
2. Shademan A, Decker RS, Opfermann JD, et al. Supervised autonomous robotic soft tissue surgery. Sci Transl Med 2016; 8:337ra64.
3. Rajpurkar P, Irvin J, Zhu K, et al. CheXNet: radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv preprint arXiv:1711.05225; 2017.
4. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017; 542:115–118.
5. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019; 25:44–56.
6. Lanfranco AR, Castellanos AE, Desai JP, et al. Robotic surgery. Ann Surg 2004; 239:14–21.
7. Ramsay C, Pickard R, Robertson C, et al. Systematic review and economic modelling of the relative clinical benefit and cost-effectiveness of laparoscopic surgery and robotic surgery for removal of the prostate in men with localised prostate cancer. Health Technol Assess 2012; 16:1–313.
8. Panesar SS, Ashkan K. Surgery in space. Br J Surg 2018; 105:1234–1243.
9. Huang HM. The autonomy levels for unmanned systems (ALFUS) framework: interim results. In Performance Metrics for Intelligent Systems (PerMIS) Workshop; August 2006.
10. Kassahun Y, Yu B, Tibebu AT, et al. Surgical robotics beyond enhanced dexterity instrumentation: a survey of machine learning techniques and their role in intelligent and autonomous surgical actions. Int J Comput Assist Radiol Surg 2016; 11:553–568.
11. Moustris GP, Hiridis SC, Deliparaschos KM, et al. Evolution of autonomous and semi-autonomous robotic surgical systems: a review of the literature. Int J Med Robot 2011; 7:375–392.
12. Barbash GI, Glied SA. New technology and health care costs—the case of robot-assisted surgery. N Engl J Med 2010; 363:701–704.
13. Novellis P, Alloisio M, Vanni E, et al. Robotic lung cancer surgery: review of experience and costs. J Vis Surg 2017; 3:39.
14. Moore GE. Cramming more components onto integrated circuits. Proc IEEE 1998; 86:82–85.
Keywords:

artificial intelligence; autonomous robotic surgery; future of surgery; machine learning; surgical robotics

Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.