In 2016, Shademan et al reported complete in vivo, autonomous robotic anastomosis of porcine intestine using the Smart Tissue Autonomous Robot (STAR).1,2 Although conducted in a highly controlled experimental setting, STAR quantitatively outperformed human surgeons in a series of ex vivo and in vivo surgical tasks. These trials demonstrated, for the first time, the nascent clinical viability of an autonomous soft-tissue surgical robot. Unlike conventional surgical robots, which are controlled in real time by humans and have become commonplace in particular subspecialties, STAR was controlled by artificial intelligence (AI) algorithms and received input from an array of visual and haptic sensors.
Applications of AI to clinical data for diagnostic purposes have already begun to demonstrate capability approximating that of specialist physicians.3,4 Consequently, clinical AI has received much attention from within and outside the medical community.5 The STAR trials give clinical AI a surgical context and provide a glimpse into the future, should autonomous surgical devices be further developed. Nevertheless, their development must be rationalized and, for widespread utilization, they must confer either technical or financial advantages over conventional surgical techniques. Below, we expand upon how this may unfold.
DEFINITIONS OF AUTONOMY
The International Organization for Standardization (ISO 8373:2012) defines autonomy as “an ability to perform intended tasks based on current state and sensing without human intervention.” However, “autonomy” is not a singular state, but rather a scale on which the degree of human intervention is traded against full independence (Fig. 1). Examples of robotic surgical devices of variable autonomy include the da Vinci (Intuitive Surgical, Sunnyvale, CA), a “master-slave” robot completely dependent upon human control; the TSolution-One (previously ROBODOC; THINK Surgical, Fremont, CA) orthopedic robot; and the Mazor X (Mazor Robotics, Caesarea, Israel) spinal robot. The latter 2 offer reduced levels of human input for a limited range of surgical tasks. Partially autonomous robotic devices such as the CyberKnife (Accuray, Sunnyvale, CA) are already in clinical use; however, as it uses external radiation beams, the CyberKnife cannot truly be considered a “surgical robot” in the context of this piece.
RATIONALE FOR AUTONOMOUS SURGICAL DEVICES
Human surgical performance is dictated by numerous physical, mental, and technical variables, meaning that surgical consistency is difficult both to quantify and to achieve. These factors may contribute to the high variability in functional outcomes, complication rates, and survival observed across institutions and geographies. Conventional surgical robots possess certain advantages over humans (insusceptibility to fatigue, tremor resistance, scalable motion, greater range of axial movement6), which have been shown to produce enhanced margins and lower morbidity rates7 for certain procedures. Combining AI control algorithms with the inherent advantages of surgical robots may therefore benefit surgical practice by reducing technical errors and operative times, enhancing access to hard-to-reach body areas, and improving outcomes by removing (or reducing) the potential for human error.2
Sociopolitical issues may provide a catalyst for further development and refinement of autonomous surgical robots. A device controlled by AI-based algorithms may permit rapid dissemination of surgical skills via the Internet or mobile platforms, potentially democratizing surgical care and standardizing surgical outcomes independent of geographic or economic constraints. A clinically capable robot may also be able to provide surgical care in environments where care provision is lacking, for example, aboard a spacecraft in deep space,8 where access to surgical care will be severely restricted, and following environmental disasters or in war zones, where healthcare infrastructure has sustained damage or is unavailable.
PROPOSED FRAMEWORK FOR AUTONOMOUS SURGICAL ROBOTS
Future autonomous surgical robots will have the ability to “see,” “think,” and “act” without active human intervention to achieve a predetermined surgical goal safely and effectively. Three parameters define the task of an autonomous surgical robot: mission complexity, environmental difficulty, and human independence9 (Fig. 2A). To enable this, an autonomous robot requires visual and physical sensors that perceive the environment, a central processor that receives sensory input and calculates outputs, and mechanical actuators that permit physical task completion. Given the highly deformable nature of soft-tissue environments, the presence of hollow organs susceptible to rupture, and the delicacy of tissues, achieving a clinically viable, versatile autonomous surgical device will require considerable development and integration of control algorithms, robotics, computer vision, and smart sensor technology, in addition to extensive trial periods.
CONTROLLING AND TEACHING THE ROBOT
The robot must perform 2 intrinsic functions: first, the preprogrammed goal of the procedure it has been tasked with (its mission), and second, the ability to respond dynamically to the ever-changing surgical environment. The robot's “surgical skill” consists of its ability to first map its perception (ie, sensory inputs) to an estimated environmental state, and then map that estimate to a future action (ie, robotic outputs) in the most efficient way possible. Machine learning (ML), a form of AI, is the ability of a machine to learn from prior experiences, and has been proposed as a means to control the actions of autonomous devices. Appropriately trained algorithms can therefore enable a robot presented with novel yet similar data (Figs. 2B and 3) to predict an outcome10,11 and thus achieve its tasks in real time based upon its “experience.”
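The perception-to-action mapping described above can be illustrated with a deliberately minimal sketch: a nearest-neighbor "policy" that, given a novel sensory input, selects the action recorded in the most similar prior experience. The feature vectors and action names here are entirely hypothetical, and real surgical control would involve far richer models; this only illustrates generalization from "experience" to similar novel data.

```python
# Minimal, illustrative sketch of a learned perception -> action mapping.
# Feature vectors and action labels are hypothetical inventions for this example.
import math

# Training "experience": (sensory feature vector, action demonstrated).
# Features might hypothetically encode tissue tension and needle depth.
experience = [
    ((0.1, 0.9), "advance_needle"),
    ((0.2, 0.8), "advance_needle"),
    ((0.9, 0.2), "release_tissue"),
    ((0.8, 0.1), "release_tissue"),
]

def predict_action(sensors):
    """Nearest-neighbor prediction: return the action whose recorded
    sensory context most closely resembles the novel input."""
    _, action = min(experience, key=lambda ex: math.dist(ex[0], sensors))
    return action

print(predict_action((0.15, 0.85)))  # resembles the first two examples
```

A novel input close to previously seen contexts elicits the action demonstrated in those contexts, which is the essence of prediction "based upon experience," however simple the model.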
ML is most beneficial when applied to large, unwieldy datasets that are otherwise uninterpretable by humans. The robot's sensory apparatus produces a continuous stream of quantifiable data, to which ML algorithms will be applied in real time, so its processors can modify actions in synchrony with environmental changes and based upon its training (Fig. 2C). If the sensory stream is of comparable fidelity to human senses, such analytical algorithms may, at some point, surpass human perception. AI algorithms may therefore be able to delineate “occult” information in the sensory data that is otherwise imperceptible to humans, thereby predicting or detecting adverse events at a level exceeding human ability. The demonstrated performance of AI in detecting pathologies from radiological data is an early example of this phenomenon. Nevertheless, collating and analyzing multimodal sensory information to mimic a human surgeon's perception in real time is markedly more complex than applying AI algorithms to a single type of radiological scan for diagnostic purposes.
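As a toy illustration of detecting "occult" deviations in a continuous sensor stream, the sketch below maintains running statistics of a single hypothetical sensor channel (using Welford's online algorithm) and flags readings that deviate sharply from the established baseline. The readings, threshold, and warm-up period are invented for demonstration; a clinical system would use far more sophisticated multimodal models.

```python
# Illustrative sketch: flagging anomalous readings in a streaming sensor
# channel via running mean/variance (Welford's online algorithm).
# All numbers here are hypothetical.

class StreamMonitor:
    """Tracks running statistics of one sensor channel and flags readings
    deviating more than k standard deviations from the running mean."""
    def __init__(self, k=3.0):
        self.k, self.n, self.mean, self.m2 = k, 0, 0.0, 0.0

    def update(self, x):
        flagged = False
        if self.n >= 10:  # only judge once a baseline has accumulated
            std = (self.m2 / (self.n - 1)) ** 0.5
            flagged = abs(x - self.mean) > self.k * std
        # Welford's online update for mean and sum of squared deviations
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return flagged  # True => possible adverse event

monitor = StreamMonitor()
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 5.0]
flags = [monitor.update(r) for r in readings]
print(flags[-1])  # the abrupt final reading is flagged
```

The design choice of checking a reading against the baseline *before* folding it into the statistics prevents an extreme outlier from masking itself.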
The robot must also be taught how to perform surgery. Proposed methods to “teach” the robot include directly programming it (explicit learning) or having it observe a surgeon or video (implicit learning); the robot may even train in virtual reality. Nevertheless, mimicking a human surgeon effectively requires not only the ability to judge all relevant sensory inputs (ie, visual and tactile features of the surgical field) and positional information, but also a database of explicit knowledge on how to safely proceed to achieve the surgical goal. Consequently, it is unlikely that implicit or explicit techniques can be used exclusively, and a combination of both, with continuous reinforcement and modification by domain experts (ie, human surgeons), will be required (see review by Kassahun et al, 2016).10 In theory, however, the rate of learning of a robot provided with a suitable training database of procedures and teaching examples would be limited only by its hardware and software capabilities. This is in contrast to humans, whose learning ability is limited by mental, physical, and time constraints.
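The combination of implicit and explicit teaching can be caricatured in a few lines: a policy inferred from observed demonstrations (implicit), overridden where necessary by a hard-coded rule supplied by a domain expert (explicit). The states, actions, and safety threshold below are hypothetical placeholders, not a real surgical vocabulary.

```python
# Illustrative sketch: implicit learning (a policy inferred from observed
# demonstrations) combined with explicit knowledge (a hard-coded safety rule).
# States, actions, and thresholds are hypothetical.
from collections import Counter

# Implicit component: (state, action) pairs "observed" while watching a surgeon.
demonstrations = [
    ("tissue_grasped", "insert_needle"),
    ("tissue_grasped", "insert_needle"),
    ("needle_through", "pull_suture"),
    ("needle_through", "pull_suture"),
    ("needle_through", "tie_knot"),
]

def learned_action(state):
    """Implicit learning: the action most often demonstrated in this state."""
    actions = Counter(a for s, a in demonstrations if s == state)
    return actions.most_common(1)[0][0] if actions else None

def safe_action(state, tissue_tension):
    """Explicit knowledge overrides the learned policy when a rule fires."""
    if tissue_tension > 0.8:  # hypothetical expert-supplied safety threshold
        return "release_and_alert_human"
    return learned_action(state)

print(safe_action("needle_through", tissue_tension=0.2))
print(safe_action("needle_through", tissue_tension=0.9))
```

Under normal conditions the demonstrated behavior is reproduced; when the explicit rule fires, control defers to a human, mirroring the article's argument that neither teaching method suffices alone.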
AUTONOMOUS ROBOTIC SURGERY IN ADVERSE ENVIRONMENTS
In space, war zones, or in conditions of environmental disaster, communication latencies and bandwidth restrictions may render human-controlled telemedical solutions unfeasible. For example, on a proposed Mars exploration mission, which is intended to last 2.5 years and will take humans over 50 million miles from Earth, one-way communication latency is anywhere between 4 and 24 minutes. Crew medics may also lack the medical training needed to manage the full range of pathologies explorers may face.8 A fully autonomous surgical approach may therefore be utilized for an entire procedure if the technology is available. Otherwise, a partially autonomous approach may be used, mimicking the STAR trials: crew with minimal medical training may be able to perform the access and closure portions of the procedure, whereas the robot performs the more complex parts, for example, bowel anastomosis.
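The latency figures follow directly from the speed of light. A short worked example, using the commonly cited Earth-Mars separation range of roughly 34 million miles (closest approach) to about 250 million miles (opposite sides of the Sun), shows why real-time teleoperation across such distances is physically unfeasible:

```python
# Worked example: one-way light-speed communication latency across the
# Earth-Mars distance range, illustrating why real-time teleoperation fails.
C_MILES_PER_SEC = 186_282  # speed of light in miles per second

def one_way_latency_min(distance_miles):
    """One-way signal travel time, in minutes, at the speed of light."""
    return distance_miles / C_MILES_PER_SEC / 60

print(round(one_way_latency_min(34e6), 1))   # closest approach: ~3.0 minutes
print(round(one_way_latency_min(250e6), 1))  # maximum separation: ~22.4 minutes
```

A round-trip command-and-confirm cycle would double these figures, so even at closest approach a teleoperating surgeon would wait minutes to see the effect of each action.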
INCREASING GLOBAL ACCESS TO SURGICAL CARE
The da Vinci robot (Intuitive Surgical, Sunnyvale, CA), first introduced in 2000, is the predominant commercially available robotic surgery system. Despite almost 20 years of commercial availability, its acquisition (US$1–2.5 million per device) and upkeep costs remain high.12 Alternative models to the da Vinci are anticipated in the near future. These competitors are expected to drive down costs while advancing robotic technology.13 If cost reductions combined with concurrent technological advances (ie, Moore's Law14) occur in concert with widespread adoption by clinical communities and public markets, a clinically feasible autonomous surgical device may become commercially available in the future. As with all new technologies, early designs are likely to be costly, with a small number of initial users. Nevertheless, if their clinical efficacy is proven and their technology continuously refined, economies of scale may make them affordable for emerging economies and underserved medical environments.
The realization of clinically feasible autonomous surgical robots will likely occur by the end of the 21st century. The combination of AI with surgical robotics may permit the augmentation of surgical capability to optimize outcomes and increase access to care.
1. Leonard S, Wu KL, Kim Y, et al. Smart tissue anastomosis robot (STAR): a vision-guided robotics system for laparoscopic suturing. IEEE Trans Biomed Eng
2. Shademan A, Decker RS, Opfermann JD, et al. Supervised autonomous robotic soft tissue surgery. Sci Transl Med
3. Rajpurkar P, Irvin J, Zhu K, et al. CheXNet: radiologist-level pneumonia detection on chest x-rays with deep learning. ArXiv Prepr ArXiv171105225
4. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature
5. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med
6. Lanfranco AR, Castellanos AE, Desai JP, et al. Robotic Surgery. Ann Surg
7. Ramsay C, Pickard R, Robertson C, et al. Systematic review and economic modelling of the relative clinical benefit and cost-effectiveness of laparoscopic surgery and robotic surgery for removal of the prostate in men with localised prostate cancer. Health Technol Assess Winch Engl
8. Panesar SS, Ashkan K. Surgery in space. Br J Surg
9. Huang HM. The autonomy levels for unmanned systems (ALFUS) framework: interim results. In Performance Metrics for Intelligent Systems (PerMIS) Workshop; August 2006.
10. Kassahun Y, Yu B, Tibebu AT, et al. Surgical robotics beyond enhanced dexterity instrumentation: a survey of machine learning techniques and their role in intelligent and autonomous surgical actions. Int J Comput Assist Radiol Surg
11. Moustris GP, Hiridis SC, Deliparaschos KM, et al. Evolution of autonomous and semi-autonomous robotic surgical systems: a review of the literature. Int J Med Robot
12. Barbash GI, Glied SA. New technology and health care costs—the case of robot-assisted surgery. N Engl J Med
13. Novellis P, Alloisio M, Vanni E, et al. Robotic lung cancer surgery: review of experience and costs. J Vis Surg
14. Moore GE. Cramming more components onto integrated circuits. Proc IEEE