Review Articles

The present and future role of artificial intelligence and machine learning in anesthesiology

Alexander, John C. MD, MBAa; Romito, Bryan T. MD, MBAa; Çobanoğlu, Murat Can PhDb

Author Information
International Anesthesiology Clinics: Fall 2020 - Volume 58 - Issue 4 - p 7-16
doi: 10.1097/AIA.0000000000000294

Artificial intelligence (AI) has recently become a ubiquitous term, appearing in media as diverse as newspapers, political debates, and medical articles. However, the term is often used without a clear understanding of its exact definition. Here, we will first seek to clarify some of the confusion surrounding AI and then consider its potential use in the specialty of anesthesiology, based on an evaluation of its use in other specialties and industries.

AI is a rather ambitious name. Human intelligence enables an individual to drive a car, speak in multiple languages, perform complex manual manipulations (such as surgery), and make long-term plans, possibly all within the same day. Humans perform these various tasks using the same, unitary intelligence of their consciousness. The term artificial intelligence can lead to the perception that AI recreates a similar intelligence, albeit in silico. We argue that this perception is fundamentally flawed. Instead, we distinguish between 2 kinds of AI: narrow and full. Briefly, narrow AI remains restricted to the performance of a single, well-defined task. Common examples include a program that can only translate text between 2 languages, or another that solely recognizes faces in images. Full, or strong, AI, in contrast, would be capable of performing multiple tasks using a unitary intelligence. Such a system is nonexistent and arguably remains beyond the scope of any credible modern AI research.

The illusion of strong AI can sometimes be created by chaining together isolated programs, each capable of one task, to complete a series of tasks. However, there is a fundamental difference between a series of programs interfacing with each other to solve a series of disparate problems and one human mind that can both drive a car and perform surgery. The human mind remains unmatched in its ability to perform a series of intelligent tasks.

To best describe the existing set of technologies that are collectively termed AI, we should define their nature. Work in this field often centers on using an existing set of data to deduce wider knowledge. For instance, consider a program that recognizes faces. Even a large training dataset is finite, whereas the space of all possible images that contain faces is infinite. Therefore, the goal of the software is to deduce the general concept of a face from a limited data set. Extended, this deduction can operate on more latent concepts. For instance, software that analyzes all user preferences on an online streaming platform to recommend new items must deduce the latent preferences of every individual and the characteristics of the items, with no explicit human labeling of either.1 The unifying theme is that a limited set of data is utilized to deduce knowledge that is more widely applicable.
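The latent-preference deduction described above can be caricatured in a few lines: infer shared taste from overlapping ratings, then recommend an unseen item. The users, items, ratings, and similarity rule below are all invented for illustration; real recommenders use latent-factor models over millions of users and items.

```python
# Toy recommendation sketch: deduce shared latent taste from overlapping
# ratings, then suggest an item the target user has not yet seen.
# All users, items, and ratings here are hypothetical.

def similarity(a, b):
    """Fraction of items rated by both users on which they roughly agree."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(1.0 for i in shared if abs(a[i] - b[i]) <= 1) / len(shared)

def recommend(target, others, catalog):
    """Suggest the unseen item best liked by the most similar other user."""
    peer = max(others, key=lambda u: similarity(target, u))
    unseen = [i for i in catalog if i not in target and i in peer]
    return max(unseen, key=lambda i: peer[i])

# Ratings on a 1-5 scale; alice has not yet seen item3 or item4.
alice = {"item1": 5, "item2": 4}
bob   = {"item1": 5, "item2": 5, "item3": 5, "item4": 2}
carol = {"item1": 1, "item2": 2, "item3": 1, "item4": 5}

print(recommend(alice, [bob, carol], ["item1", "item2", "item3", "item4"]))
```

Because alice's ratings track bob's rather than carol's, the sketch deduces a shared latent preference and recommends bob's favorite unseen item, without any explicit labeling of what the items have in common.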

The research field that focuses on extracting knowledge from data using computing is called “machine learning” (ML). The field can be defined in broad terms as algorithms for enabling a machine (ie, the computer) to learn knowledge from data. Most of the modern technological advances that stimulate conversation fit this pattern. Software that automatically detects cancer in pathology or radiology images essentially learns to recognize a prelabeled pattern. Predicting patient outcomes based on different features also falls within this paradigm, where a limited data set is used to extract broader knowledge. Research on AI outside of ML remains limited and has yet to achieve major practical utility.

ML is closely related to statistical inference, which focuses on using data to estimate the unknown parameters of a probabilistic model. Most working applications of AI in medicine utilize one specific branch of this statistical inference/ML field, namely, supervised learning. Supervised learning is the specific field of ML that focuses on learning to associate a set of descriptive features with a set of outcome variables. These outcome variables can be discrete or continuous, in which case the task is, respectively, classification or regression. The overwhelming majority of AI applications in medicine, as reviewed later in this work, falls under supervised learning. There is often a preexisting dataset where some digital data containing descriptors (eg, pathology images) with relevant signals are matched with an annotated outcome (eg, pathologist’s diagnosis). In this example, the labels provide the supervision, and the learning task consists of associating the signal in the input data with the prescribed outcome.
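As a deliberately simple illustration of this supervision, the sketch below learns a classifier from a handful of invented feature/label pairs. The data, the "benign"/"malignant" labels, and the nearest-centroid rule are all hypothetical stand-ins for a real supervised pipeline, chosen only to make the feature-to-label association concrete.

```python
# Toy supervised classification: learn one mean feature vector (centroid)
# per class from labeled examples, then assign new cases to the nearest
# centroid. Features and labels are invented for illustration.

def fit_centroids(X, y):
    """Learn the per-class mean of the training features."""
    centroids = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def predict(centroids, x):
    """Classify x as the class whose centroid is closest (squared distance)."""
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda lab: sq_dist(centroids[lab]))

# Descriptive features (eg, two image-derived measurements) matched with
# annotated outcomes: the labels supply the "supervision".
X_train = [[1.0, 1.2], [0.9, 1.0], [3.0, 3.1], [3.2, 2.9]]
y_train = ["benign", "benign", "malignant", "malignant"]

model = fit_centroids(X_train, y_train)
print(predict(model, [1.1, 0.9]))   # falls near the "benign" cluster
print(predict(model, [3.1, 3.0]))   # falls near the "malignant" cluster
```

Replacing the discrete labels with continuous outcomes would turn the same setup into a regression task, the other half of the supervised-learning dichotomy described above.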

The challenges in supervised learning revolve around extracting the information that best generalizes from the limited set of data at hand to all the data that have not yet been seen. The key pitfall to avoid is “overfitting,” in which the model fits the intricacies of the limited data set it has been provided (which do not generalize) rather than the underlying principles (which do). The fundamental idea here is the bias-variance tradeoff. A flexible model that can fit any input data and describe its input data with minimal error (low bias) will vary dramatically (high variance) when the input data are changed (possibly by subsampling just a small portion of the training data). This concept is highly useful when thinking critically about supervised learning work.
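The tradeoff can be made concrete with two extreme toy models: a maximally flexible one-nearest-neighbor predictor, which memorizes its training data, and a maximally rigid global-mean predictor. All numbers below are invented for illustration.

```python
# Bias-variance sketch: a flexible model (1-nearest-neighbor) reproduces
# its training data exactly (low bias) but its predictions swing when the
# training sample changes (high variance); a rigid model (global mean)
# fits the training data poorly but barely moves. Data are invented.

def one_nn(train, x):
    """Predict the outcome of the nearest training point (very flexible)."""
    return min(train, key=lambda pt: abs(pt[0] - x))[1]

def global_mean(train, x):
    """Predict the mean outcome regardless of x (very rigid)."""
    return sum(y for _, y in train) / len(train)

data = [(1, 1.0), (2, 4.2), (3, 8.8), (4, 16.1)]
subsample = [(1, 1.0), (2, 4.2), (4, 16.1)]  # one training point removed

x = 2.6
print(one_nn(data, x), one_nn(subsample, x))          # jumps from 8.8 to 4.2
print(global_mean(data, x), global_mean(subsample, x))  # shifts only slightly
```

Dropping a single training point swings the flexible model's prediction dramatically while the rigid model hardly changes, which is exactly the variance that overfitting buys at the price of its low bias.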

One of the most common approaches in supervised learning in contemporary applied ML research is deep learning (DL). In DL models, the data are processed through a series of layers, each of which performs a specific mathematical operation on the input. The combined activity of a series of such layers generates a nonlinear model that typically learns intricate, high-level representations relevant to the prescribed label. These models are very expressive and, as such, sit on the low-bias, high-variance side of the aforementioned tradeoff. This leads to some subtle yet potentially significant effects. For instance, if all the data available to the original researchers were collected under one set of conditions (say, for a pathology application, the same labels, the same technician, and the same imaging device), the model might fail catastrophically when used on new data from a different institution, even if its performance looked stellar in a hold-out evaluation. This is a direct result of the low-bias, high-variance nature of such models.
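The layered computation described above can be sketched in a few lines: each layer applies a linear map followed by a nonlinearity, and stacking layers yields a nonlinear overall function. The weights below are arbitrary illustrative numbers, not a trained network.

```python
# Minimal forward pass of a two-layer network: affine transform, then a
# ReLU nonlinearity, repeated per layer. Weights are arbitrary numbers
# chosen for illustration, not learned from data.

def relu(v):
    """Elementwise nonlinearity: negative values become zero."""
    return [max(0.0, x) for x in v]

def linear(W, b, v):
    """One layer's affine transform: W @ v + b."""
    return [sum(w * x for w, x in zip(row, v)) + bi
            for row, bi in zip(W, b)]

def forward(layers, v):
    """Pass the input through each (W, b) layer with a ReLU after each."""
    for W, b in layers:
        v = relu(linear(W, b, v))
    return v

layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1]),  # layer 1: 2 inputs -> 2 units
    ([[2.0, 1.0]], [-0.5]),                   # layer 2: 2 units  -> 1 output
]
print(forward(layers, [1.0, 2.0]))
```

Training would adjust the weight matrices to match prescribed labels; the sketch only shows why composing such layers produces a nonlinear, highly expressive model.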

The first and foremost idea to enable generalization is regularization. Regularization involves imposing a series of restrictions on the model to limit overfitting. These restrictions are designed to sacrifice performance on the training set (ie, higher bias) to reduce the variability of the model (ie, lower variance), with the intent of better generalization. Regularization comes in many forms, some with theoretical guarantees and others without, and is applicable to all ML models, including DL. One practice commonly used to improve the generalization of DL models is data augmentation. The motivation here is to take the limited set of existing training data and apply many small transformations (such as rotations, scaling, and translations) to create a much larger set of training instances.
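A minimal sketch of regularization is the closed-form one-dimensional ridge solution, in which a penalty term shrinks the fitted slope toward zero. The data and penalty value below are invented; the point is only that the penalty trades training fit for stability across resampled data.

```python
# Regularization sketch: 1-D ridge regression with no intercept.
# The penalty lam shrinks the fitted slope, worsening the training fit
# (higher bias) but making the estimate less sensitive to which sample
# was drawn (lower variance). Data are invented.

def ridge_slope(xs, ys, lam):
    """Closed-form minimizer of sum((y - w*x)^2) + lam * w^2 for y ~ w*x."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

xs = [1.0, 2.0, 3.0]
ys = [1.1, 1.9, 3.2]        # one noisy sample
ys_alt = [0.9, 2.2, 2.9]    # another draw from the same process

for lam in (0.0, 10.0):
    w, w_alt = ridge_slope(xs, ys, lam), ridge_slope(xs, ys_alt, lam)
    # larger lam: smaller slope, and a smaller gap between the two draws
    print(lam, round(w, 4), round(abs(w - w_alt), 4))
```

Data augmentation pursues the same goal by a different route: instead of penalizing the model, it enlarges the training set with transformed copies so that the flexible model has less idiosyncratic detail to memorize.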

The key change that has enabled the rapid proliferation of ML is computing technology. A fast graphical processing unit today would have been the world’s best supercomputer in 2004.2,3 The central ideas behind most of the modern applications of ML in supervised learning, such as convolutional neural networks (CNNs) and DL, have actually existed since the 1980s. Rather, it is the advent of significantly cheaper computing, coupled with modern ML libraries (TensorFlow, Torch, Caffe, etc.) that enable easy and efficient ML model training on these architectures, that has created the proliferation we observe.

Indeed, this observation underlines how much room for future improvement exists. Specifically, there is the future opportunity of deploying ideas in ML that are currently cost-prohibitive (due to computation costs, data costs, etc.) but may not be so in the future. To elaborate, consider that the excitement around AI in today’s medical circles stems from deploying mostly just one domain of ML (supervised learning), itself just one field of the broader discipline of AI. We can only imagine the implications of such future applications. From here, we will consider the present state of AI across multiple industries, including health care, and use these examples to inform a consideration of the possible future state of AI within anesthesiology.

Current uses of AI outside of the health care industry

AI impacts almost every business or research domain. These range from agriculture4,5 or retail,6 which are as old as human civilization, to transformative new industries that only emerged recently thanks to AI, such as search engines7 or self-driving cars.8

The impact that AI can achieve in a domain depends on the nature of the tasks in that domain. AI often helps by performing a combination of automation, detection, and decision-making. AI is impervious to repetition fatigue. Therefore, domains that benefit most from the deployment of AI tend to have repetitive tasks that require a limited degree of adaptation in each iteration.9–11 Almost all existing AI algorithms require training data to be useful, and the more data, the better. Thus, areas with vast amounts of digitized, cheap, unrestricted-access data stand to benefit the most from AI. Similarly, the new businesses that AI creates (eg, web search, self-driving vehicles) will require a combination of perception, decision, and big data-handling capabilities.12,13 Although AI tools have distinct advantages, they also have potential weaknesses, such as a vulnerability to hacking.14 Therefore, the exact impact of AI in any field depends on domain-specific advantages and disadvantages.

Search engines are one of the applications of AI with the highest impact. The objective in these algorithms is to match a query text that the user inputs to a set of relevant items.7 There is generally no universal definition of “relevance”; therefore, the data on the behavior of past users are often crucial. Another application of ML with major real-world impact is advertising.15 The revenues of almost all the big platform technology companies (such as Google, Twitter, Facebook, etc.) are driven by advertisement. Therefore, effectively matching users to advertisements remains key to the financial stability of these companies. Self-driving cars were essentially created by ML. Classification of objects detected by cameras and sensors, prediction of their future states (will that pedestrian cross the road or remain on the sidewalk?), and planning a series of responses require the integrated solution of a number of challenging problems. Furthermore, these must be done in real time, which means that efficiency is, literally, vital. The defense sector is another major area with potential to benefit from ML. Automatically detecting, classifying, and predicting the future actions of potential combatants based on sensor data such as those in drones are clearly useful. However, there are other applications that are perhaps rather more elusive, but potentially even more relevant. Recent work in gaming shows that AI can compete with the best humans in the world in Chess, Go, and even StarCraft.16–18 In the future, if simulations demonstrate that an AI system automatically commanding actions can reliably outperform humans, this would create interesting dilemmas.

These represent only a small cross section of some of the most important impacts that AI can achieve or has already achieved. Perhaps the most important aspect of AI is that it can enable entirely novel approaches. To illustrate, when methods for generating and using electricity were first invented, it took decades before the impact of this technology was felt dramatically. Initial attempts to substitute electric power directly for steam power were not useful. The impact of electric power materialized only after it was used to design entirely new industrial complexes that were not otherwise possible. Similarly, computing has only recently achieved an impact in many arenas of the modern world such as communication or media, despite decades of availability of the fundamental technologies.19 This is the perspective from which we argue that the biggest potential impacts of AI are yet to materialize. The potential impacts of AI and ML will only be fully realized after AI is used to design new systems that were impossible without it and that were never built, or even conceived of, previously.

Current uses of AI in non-anesthesiology health care

The initial uses of AI in health care have taken advantage of the increased digitization, and therefore volume and availability, of medical images as a source of data. Thus, health care tasks involving image analysis and interpretation have been the initial targets of development of AI. Radiology and pathology in particular are specialties predicated on the extraction of health care information from images.20 Radiologists have utilized computer-aided detection and computer-aided diagnosis since the 1990s to flag concerning mammography images for closer review by radiologists. Despite mixed early results,21–23 a recent study showed that an AI system outperformed radiologists in both the United States and United Kingdom on breast cancer identification in screening mammography images, and was noninferior in performance to and substantially reduced the workload of a second reader.24 Two other recent studies showed algorithm parity with radiologists in screening for lung cancer with low-dose computed tomography (CT)25 and with detection of pneumothorax, nodule or mass, airspace opacity, and fracture on chest radiographs.26

The attributes of ML algorithms that allow them to best humans in games with structured rules and well-defined winning conditions do not carry over to many health care tasks, as there is substantial nuance in image analysis and “winning” is not an easily defined state.27 Furthermore, the current iterations of ML exemplify the “narrow AI” moniker. An algorithm may far exceed the capabilities of a human in a specific cognitive task, but to date, there is minimal ability to carry that expertise over to another task. This means that a separate narrow AI algorithm must be developed for each of the myriad cognitive tasks that a radiologist performs before AI could outperform the radiologist at the level needed to serve as a potential replacement.27,28 This highlights another barrier in the development of such narrow AI systems: the process of creating labeled training data sets for supervised ML is time and labor intensive, and becomes more so when there is substantial ambiguity in the task in question. Within the specialty, the dominant prediction seems to be that, as more narrow AI algorithms are developed and validated, they will become tools incorporated into the workflow of radiologists to augment their productivity and range of skills, allowing for greater contextualization and integration of image-based information into the health care value chain to improve patient outcomes.27,29

Pathologists too face challenges in the future due to AI-driven automation of image analysis and interpretation. Their specialty has more recently become highly digitized with the development of high-throughput scanning and whole-slide imaging platforms.30 Before this, though, many common tasks, such as cell counts and blood typing, became more automated, which freed pathologists to pursue more cognitively complex tasks.20 Analysis of tissue biomarkers and cancer diagnosis are the most commonly pursued use cases, but other active areas of research within the specialty include clinical decision support, automation of testing/treatment algorithms, and analysis of utilization trends.30,31 For instance, one study showed substantial improvement over a pathologist in detection of breast cancer metastasis in lymph nodes, which may improve detection and reduce false-negative rates.32 For prostate cancer, a DL system in one recent study outperformed pathologists in Gleason scoring of whole-slide images from prostatectomies that could improve subsequent therapeutic decisions.33 Another recent study showed that an unsupervised ML algorithm identified novel predictors of prostate cancer recurrence on the basis of histopathological tissue evaluation, which outperformed standard clinical criteria developed by pathologists and, furthermore, that the combination of pathologist and AI predictions improved accuracy even more.34,35 In both radiology and pathology contexts, we see examples where the combination of artificial and human intelligences together exceeds the performance of either one alone.

Another way in which AI is transforming the health care landscape is with innovative predictive analytics. Predictive modeling is not new to the health care field. Clinicians have historically generated forecasts for the trajectory of a disease or the response to a treatment intervention. Traditionally, these estimates were based on previous experience, personal knowledge, or expert opinion.36 The advent and widespread adoption of the electronic health record (EHR) have created a vast repository of demographic and clinically relevant information ripe for decoding and processing.

Traditional prediction models based on regression analysis depend on human involvement and subject-matter knowledge for accurate model refinement.37 Conversely, ML techniques (eg, artificial neural networks, support vector machines, random forests) can be used to identify complex patterns within large data sets and solve problems without explicit programming.37,38 Theoretically, these approaches can more precisely capture multifaceted, nonlinear relationships between predictors compared with older regression-based prediction models.39 In this manner, ML-derived prognostic algorithms represent the link between the raw, unfiltered data housed in the EHR and the promise of accurately predicting future events. For example, humans innately prefer to communicate narratively rather than via a rigidly structured format, but computers excel with structured data and struggle to derive meaning or knowledge from unstructured data, such as free-text notes within the medical record. One subfield of AI, natural language processing (NLP), seeks to make these types of unstructured data easier for computers to parse. Progress here has proven more difficult to achieve than the gains realized in fields such as machine vision.40
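A toy illustration of the first step such NLP pipelines perform is converting free text into a structured, fixed-length representation (here, a bag-of-words count vector). Real NLP systems are far more sophisticated, and the clinical vocabulary below is invented for illustration.

```python
# Toy bag-of-words featurization: turn an unstructured free-text note
# into a structured count vector over a fixed vocabulary. Real clinical
# NLP is far more sophisticated; the vocabulary here is invented.

def bag_of_words(note, vocabulary):
    """Count each vocabulary term in the note -> fixed-length vector."""
    tokens = note.lower().replace(".", " ").replace(",", " ").split()
    return [tokens.count(term) for term in vocabulary]

vocab = ["fever", "hypotension", "cough"]
note = "Patient febrile overnight. Fever persisted, new hypotension noted."

print(bag_of_words(note, vocab))  # structured vector a model can consume
```

Note what the naive representation misses: "febrile" is not counted as "fever," word order is discarded, and negations would be invisible, which hints at why unstructured text has proven harder than machine vision.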

The possibilities arising from the integration of health care and advanced predictive analytics are enormous. In addition to forecasting individual patient-related outcomes, these ML tools can be applied to translational medicine, pharmaceutical development, and other areas across the continuum of care. Recognition of AI’s potential has generated an outpouring of diverse health care initiatives using predictive modeling.

In April 2018, the United States Food and Drug Administration (FDA) approved the use of IDx-DR (IDx, Coralville, IA), a software program that uses an ML-derived algorithm to screen for diabetic retinopathy, without the need for a clinician to also interpret the image or confirm the result.41 ML approaches have also been used to accurately predict clinical deterioration resulting in unplanned transfers to the intensive care unit (ICU) in pediatric patients,42 up to 16 hours in advance. Such a tool could allow for earlier interventions by care team members with subsequent avoidance of ICU admissions in an especially vulnerable patient population. In another application, support vector ML techniques applied to CT scans better predicted which patients would develop symptomatic intracranial hemorrhage after tissue plasminogen activator administration compared with conventional, radiology-based methods.43,44

ML techniques have also found significant application within the field of cardiology. An AI-enabled electrocardiogram (ECG) obtained during normal sinus rhythm was able to accurately identify individuals with a high likelihood of atrial fibrillation.45 In a separate study applied to routine 12-lead ECG, asymptomatic left ventricular dysfunction could be detected with excellent accuracy, sensitivity, and specificity.46 These augmentations to an inexpensive, near-ubiquitous, point-of-care screening test such as the ECG illustrate the power of ML techniques to vastly improve our clinical prediction ability.

ML-based analytics have been developed to aid with improving the accuracy of future diagnosis formulation. Using recurrent neural networks applied to historical data in the EHR, a predictive model named “Doctor AI” was able to accurately make predictions about subsequent diagnoses or medications in the future based on previously input codes.47 Similarly, a deep feature learning algorithm named “deep patient” developed from historical EHR data was able to better predict the probability of patients developing various diseases compared with alternative learning strategies.48 This program did not require supervision from care providers and may serve to augment clinical decision-making in the inpatient setting.

The pharmaceutical and device development/approval process is another area that can be streamlined by the integration of predictive analytics. Using historical drug development and clinical trial data, ML techniques were able to predict transitions from phase 2 to approval and phase 3 to approval with high levels of probability.49 Furthermore, ML has been used to predict the pharmaceutical properties of compounds and targets for drug discovery.50 Forecasting tools that allow researchers to more effectively navigate the translational medicine landscape have the potential to generate significant cost savings for the health care system at large.51

Current uses of AI in anesthesiology

Unlike radiology and pathology, in which machine vision is the dominant domain of study, in anesthesiology, we have seen substantial scholarship in the field of predictive analytics. Although there has been a diverse application of ML principles within the field (mostly in the intraoperative and intensive care settings), a common theme is the attempt to create real-time clinical support tools that can not only aid in decision-making but that can allow anesthesiologists to address problems in a proactive, rather than reactive, manner.


Deep neural networks trained on intraoperative features were able to predict in-hospital mortality based on automatically extractable data.52 Although the model developed by the authors was not able to outperform all current risk prediction models, it was comparable in accuracy to logistic regression, and it may offer the advantage of efficiency by virtue of its automaticity.

An ensemble-model-based ML tool, named “Prescience,” was able not only to predict intraoperative hypoxemia during anesthesia but also to delineate the risk factors that contributed to each prediction.53 It formulated the prediction by integrating a large data set gleaned from the hospital’s anesthesia information management system, including real-time patient monitor data, the anesthesia machine, medications, fluids, laboratory studies, and baseline demographic data.53 When provided with information generated by Prescience, anesthesiologists were able to significantly improve their ability to predict intraoperative hypoxemia. This represents yet another example where the symbiotic relationship between humans and machines can outperform either one alone.

Predicting intraoperative blood pressure patterns has been a recent target of ML approaches in the intraoperative setting. Analyzing data from thousands of arterial pressure waveforms, an ML-derived algorithm was able to identify an episode of intraoperative hypotension 15 minutes before its occurrence with high sensitivity and specificity.54 In a similar study, multiple prediction models were evaluated for their accuracy in predicting postinduction hypotension. Analyzing over 13,300 cases of general anesthesia, the authors examined 8 different models and quantified the predictive ability of the approaches by measuring area under the receiver operating characteristic curves (AUCs). Several of the tested ML models outperformed logistic regression, and the results of this study may prove useful for future trials.55 Another study used only vital sign-related and anesthesia-related data collected from 102 patients during anesthesia induction. A recurrent neural network was able to generate real-time predictions of blood pressure values before surgical incision.56 Although the precision of the blood pressure predictions could have been improved, the results generated by this study may lay the groundwork for the development of accurate, real-time blood pressure prediction via ML.


The ICU environment generates a mountain of data that can be used for predictive analytics. Given the acuity of the patient population and the need for rapid disease identification and treatment, tools that allow providers to act quickly can have a significant impact on patient outcome. Examples of predictive analytics in the critical care domain include early recognition and treatment of sepsis, mortality prediction, and individual patient-related clinical events.


Sepsis represents a worldwide public health crisis. Approximately one third of patients who die in a hospital have sepsis.57 Early and appropriate antimicrobial administration is the cornerstone of sepsis management. Patient survival decreases an average of 7.6% for each additional hour to effective antimicrobial initiation in the first 6 hours after hypotension develops.58 Systems that could more quickly alert providers to the heralding of sepsis or even predict its future development would transform sepsis management.

In a retrospective study using archived ICU data, an ML-based sepsis prediction program named InSight was able to predict the onset of sepsis up to 3 hours before a sustained SIRS episode.59 In a separate study, this model predicted sepsis onset more accurately than commonly used ICU scores.60 In a randomized controlled trial, use of this algorithm was associated with significant reductions in average length of stay and in-hospital mortality compared with rules-based sepsis surveillance systems.61 Such a tool can have far-reaching applications, as it does not require comprehensive laboratory testing or imaging studies as inputs. Instead, it analyzes only patient age and easily obtainable vital sign data and can be integrated autonomously into an EHR. In a retrospective observational cohort study of ICU patients, an AI Sepsis Expert (AISE) algorithm based on ML principles was able to predict the onset of sepsis 4 to 12 hours before clinical recognition.62 The model used a combination of EHR data and high-resolution time-series dynamics of blood pressure and heart rate not only to provide real-time predictions of sepsis onset but also to produce a list of the most significant contributing factors. Separate from early identification of sepsis, ML tools have been developed to improve sepsis treatment. Using a combination of kernel-based and deep reinforcement learning techniques, one group developed a model to learn personalized, more efficient fluid and vasopressor administration strategies for patients with sepsis.63

Risk prediction and other patient-specific outcomes

Several ICU mortality and severity scores have been developed to assist providers in allocating resources and guiding treatment interventions. Most of these scores rely on a logistic regression model and may not accurately predict the actual probability of patient death. The use of an ensemble ML technique, called the Super Learner, better predicted hospital mortality than both SAPS II and APACHE II Scores.64 In a separate study, ML methods were used to develop an automated ICU risk adjustment algorithm with excellent discrimination and calibration elements.65 The model requires only basic administrative and EHR data, and unlike many of the currently used risk prediction tools, it does not require manual data collection or licensing fees.
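The stacking idea behind ensembles such as the Super Learner (combining base models with weights chosen on held-out data rather than trusting any single model) can be sketched as follows. The base-model predictions and outcomes are invented, and this simplified grid search is a stand-in for, not a reproduction of, the published cross-validated algorithm.

```python
# Hedged stacking sketch: pick the convex combination of two base models
# that minimizes squared error on held-out data. Models, predictions,
# and outcomes are invented toy numbers.

def mse(preds, truth):
    """Mean squared error between predictions and observed outcomes."""
    return sum((p - t) ** 2 for p, t in zip(preds, truth)) / len(truth)

# Held-out predicted risks (0-1 scale) from two hypothetical base models
# for 4 patients, plus the observed outcomes.
model_a = [0.2, 0.8, 0.4, 0.9]
model_b = [0.4, 0.6, 0.5, 0.7]
observed = [0.25, 0.75, 0.45, 0.85]

# Grid-search the blend w*A + (1-w)*B on the held-out set.
best_w = min((w / 100 for w in range(101)),
             key=lambda w: mse([w * a + (1 - w) * b
                                for a, b in zip(model_a, model_b)], observed))
print(best_w)  # weight on model A that minimizes held-out error
```

Because the weights are chosen on data neither base model was fit to, the ensemble tends to calibrate better than either constituent, which is the property that made the ensemble approach outperform single logistic regression-based scores.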

There have been other examples of the successful use of ML-based analytics to predict the development of various ICU-related outcomes. Prediction models have been created for the identification of patients at risk for prolonged mechanical ventilation and tracheostomy,66 central line-associated blood stream infection,67 acute kidney injury,68 and the development of pressure injuries.69


The earliest attempts to incorporate automation within the field of anesthesiology date back to 1950 with Bickford’s description of an apparatus to automate the maintenance of general anesthesia using summated electroencephalography (EEG) signals.70 His apparatus implemented a closed-loop system wherein a target variable is measured (EEG in Bickford’s case) and a rule-based algorithm maintains the target within a prescribed range of values by manipulating a drug-delivery system (sodium pentothal). In the intervening decades, rule-based closed-loop systems have been the source of substantial research and innovation, and some modern iterations have consistently outperformed humans in maintaining tight control of target variables.71–73 To date, the only FDA-approved device related to the automation of anesthesia is Sedasys, which was approved in 2013 for minimal to moderate sedation of healthy adults undergoing colonoscopy or esophagogastroduodenoscopy.74 The device failed to establish product-market fit for a variety of reasons, both medical and commercial,75 and was subsequently pulled from the market in 2016.76 Another closed-loop autonomous anesthesia device, dubbed a pharmacologic robot and named “McSleepy” by its creators, automated several anesthesia-related tasks from induction through emergence, controlling several domains of anesthesia including hypnosis, analgesia, and muscle relaxation.77 In 2016, Zaouter et al78 reported a closed-loop system integrating the hypnosis, analgesia, and neuromuscular blockade domains of anesthesia maintenance in cardiac surgery requiring cardiopulmonary bypass. The same year, Restoux et al79 reported a similar system for use in orthotopic liver transplantation.
More recently, a randomized-controlled trial utilized multiple closed-loop systems to automate the anesthetic, analgesic, fluid delivery, and ventilation domains of anesthesia and was found to result in improved postoperative cognitive recovery in comparison with manual (ie, human) control.80
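The closed-loop principle these systems share (measure a target variable, compare it with a setpoint, and adjust drug delivery by a fixed rule) can be sketched with a toy simulation. The "plant" model, gain, and all constants below are invented for illustration; this is in no way a clinical algorithm.

```python
# Toy rule-based closed loop in the spirit of Bickford's apparatus:
# measure a processed "depth" signal, compare it with a setpoint, and
# nudge an infusion rate proportionally to the error each cycle.
# The drug-effect model and every constant are invented.

def simulate(setpoint=50.0, steps=200, gain=0.05):
    depth = 20.0       # measured target variable (eg, an EEG-derived index)
    infusion = 0.0     # drug-delivery rate the controller manipulates
    for _ in range(steps):
        error = setpoint - depth                 # rule: compare to setpoint
        infusion = max(0.0, infusion + gain * error)  # adjust, never negative
        depth += 0.5 * infusion - 0.1 * depth    # toy drug effect + washout
    return depth

print(round(simulate(), 2))  # settles near the setpoint
```

The sketch also exposes the limitation discussed below: the rule is fixed by hand, so changing the target behavior means rewriting the rule; nothing in the loop learns from the data it processes.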

Closed-loop systems, considered a very basic form of AI,81 undoubtedly hold great potential, as they have been consistently shown to outperform humans in the specific task of maintaining an output variable within a specified range of values. The strength of closed-loop models is that they never get distracted and are reliable and precise in performing the task that they are designed to do. There is an immense business case for a fully formed closed-loop system capable of automatically controlling all the myriad variables needed to deliver a general anesthetic and programmed to adhere to best practices. Although such a fully mature system remains theoretical, there has been consistent improvement in performance and scope of capabilities; thus, a fully autonomous closed-loop pharmacologic robot may be in the future for the specialty of anesthesiology.

A major limitation of closed-loop systems is that each one is bespoke and must be handcrafted for a particular purpose. Beyond the labor-intensive development process, if the need arises to change any part of the system (perhaps in response to updated guidelines or best practices), then the entire algorithm must be updated to reflect this. This may introduce new regulatory burdens requiring postmarket revalidation of algorithms with each update. In addition, there are concerns that the rule-based algorithms that power closed-loop systems may not be able to outperform humans on more complex tasks, or that multiple closed-loop systems integrated in series may have unanticipated effects on one another.82,83 Another limitation of closed-loop automation is that it will do only what it is told to do and cannot gain new skills or insights; it is not a learning system. As we have discussed, one benefit of ML is that it allows not only for improvement in function but also for new “insights,” wherein the algorithm finds some previously hidden relationship that humans did not (or could not) see, which may at first be mistaken for an error rather than recognized as a discovery.84,85 A closed-loop system is incapable of gaining such insights; it simply performs, tirelessly, the task that it was designed to do. Despite these shortcomings, closed-loop systems have been widely studied and will likely play a role in future automation efforts in the field of anesthesiology because of their demonstrated ability to perform certain tasks with greater accuracy and consistency than a human.86

The ability of ML systems to gain new insights and skills through their adaptive nature is their greatest strength and source of potential value within health care. All subfields of ML rely on access to immense amounts of data; thus, the digitization of health care data has greatly fueled interest in deploying AI in health care settings. Having explored the predictive value of AI platforms above, we will now consider their potential role in the automation of anesthesia.

One of the oldest uses of AI in anesthesiology was monitoring the depth of anesthesia using electroencephalography signals. Work spanning decades showed increasing accuracy in distinguishing awake from anesthetized patients using neural network evaluation of EEG signals.87–90 A more recent study by Lee et al90 used a DL model to more accurately predict the Bispectral Index during total intravenous anesthesia with target-controlled infusions of propofol and remifentanil.

Another important domain is the assessment of nociception. Closed-loop systems require a measurable target variable to act on, but no “gold standard” nociception monitor exists. Several potential options exist, including the Surgical Pleth Index, pupillometry monitoring, the Analgesia Nociception Index, skin conductance measures, spinal withdrawal reflex measurement, and the Nociception Level. Reviews have shown improvement with some of these methods in some settings compared with traditional clinical practice, but none is felt to have reached a sufficient degree of accuracy or precision to serve as a gold-standard monitor of nociception.91–94 Of particular note, the Nociception Level monitor is a multiparameter evaluation that combines linear regression with nonlinear regression using a random forest approach (an ML method).95 Future attempts at automation will require nociception monitoring; thus, research continues to identify a viable candidate to serve as the gold-standard monitor.
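The multiparameter, random-forest idea behind such a monitor can be illustrated with a toy regression. The feature names and data below are entirely synthetic stand-ins, not the actual inputs of any commercial monitor:

```python
# Toy illustration of combining several physiologic inputs into a single
# index with a random forest regressor (the ML method cited for the
# Nociception Level monitor). Features and data are synthetic/hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
# Hypothetical inputs, e.g., heart rate variability, skin conductance, pupil size
X = rng.normal(size=(n, 3))
# Synthetic "nociception index": a nonlinear mix of the inputs plus noise
y = 50 + 10 * X[:, 0] + 5 * X[:, 1] ** 2 + rng.normal(scale=2.0, size=n)

# The forest learns the nonlinear mapping from raw measures to the index
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
index = forest.predict(X[:1])  # composite index for one "patient"
```

The appeal of a tree-ensemble here is that it captures nonlinear interactions between physiologic signals without requiring them to be specified by hand, which is what a single linear regression would demand.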

Closed-loop automation of neuromuscular blockade (NMB) has been described since the 1980s,96 and scholarship since then has used ML techniques to refine prediction models for NMB. For example, Laffey et al97 developed an artificial neural network to predict residual postoperative NMB. Other studies have used ML techniques to predict and control NMB.98–100 However, pharmacologic innovation has made it possible to rapidly reverse even profound NMB, potentially obviating the need for further technical innovation for automation in this domain.96,101 Ventilator management is another important domain of anesthesia maintenance that is likewise based on the evaluation of multiple parameters. Closed-loop systems have long been utilized to assist in this area,96 and research utilizing AI techniques102–104 continues to refine the control of, and weaning from, mechanical ventilation.

Accurate prediction in these and other domains of anesthesia management is imperative, but it is only the first step in automation. Prediction must be coupled with action; thus, ML systems must be linked to control systems. As the field matures, ML algorithms will need to be embedded into control systems, potentially replacing the human-designed, rule-based algorithms of closed-loop systems to create more robust systems capable of performing in situations they were not explicitly designed for. The high-fidelity, low-variability benefits of closed-loop systems would be enhanced by the learning capabilities of ML algorithms, which gain new insights with repetition and further improve task-related abilities. Linking prediction to action, though, raises ethical considerations, as it would effectively remove humans from the decision loop if such systems are ever deployed.

The role of AI in the evaluation of image data is well established in health care, as discussed previously. Although image recognition is less fundamental to anesthesiologists, there is ongoing research into its application in airway assessment and management. Cuendet et al105 created an automated system using a random forest approach to select morphologic criteria from preoperative facial photographs of 970 patients to predict a subsequent difficult airway. In this scenario, preoperative photographs were linked with the intraoperative airway management for each patient, which was characterized as easy, intermediate, or difficult. Cuendet’s algorithm showed a 77.9% positive predictive value. An earlier study by Connor and Segal106 also developed an ML algorithm to predict difficult intubation based on facial photographs, and it outperformed clinical predictive tests. A more recent study by Matava et al107 utilized a CNN in conjunction with bronchoscopy video to accurately identify, classify, and label both vocal cords and tracheal rings in real time.

Another avenue of research for machine vision in anesthesiology is the assessment of ultrasound images. Smistad and Lovstakken108 developed a deep CNN to detect blood vessels in real time. One interesting feature of this study was that although the algorithm was trained on femoral artery ultrasound images, it was able to generalize its detection to carotid artery images as well. A deep CNN developed by Hetherington et al109 was able to identify lumbar vertebral levels and intervertebral gaps in transverse ultrasound images, whereas an ML system developed by Pesteie et al110 could localize the epidural space on paramedian ultrasound images of the lumbar spine. One small study of 49 patients attempted to identify both nerves and blood vessels for the axillary brachial plexus block; it identified blood vessels easily but showed significantly less ability to identify the relevant nerves.111 A larger study of 562 ultrasound images of the femoral nerve block region showed satisfactory segmentation of the nerve region.112 Both studies concluded that larger data sets would be necessary before these systems could have clinical applications.

Ethics, legal concerns, barriers, and limitations

Substantial limitations and barriers exist that may delay or even prevent AI from achieving its potential benefits in health care. One promise of automated anesthesia systems is that they will allow best practices and standards of care to be applied uniformly everywhere.83 Although this could greatly improve access to high-quality care across the world, it may only guarantee such access to countries or institutions that can pay the licensing fees to AI developers. In such a scenario, we could see even wider inequality in access to quality care. The ability to generalize from the training set to a larger population is another source of concern, because there is potential for introducing bias into AI algorithms. In one of the most egregious instances, ProPublica studied a criminal risk assessment tool that was used to make probation, incarceration, and parole decisions in courtrooms throughout the United States and was found to erroneously predict black defendants as higher risk for recidivism more than twice as frequently as white defendants.113 There are multiple strategies to combat bias in AI, including inserting humans back into the decision loop or testing for “counterfactual fairness,” in which sensitive attributes (eg, race, sex, sexual orientation) are altered or removed to observe for changes in outputs.114 Elimination of bias remains a significant barrier to the widespread utilization of AI.
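A counterfactual-style bias check of the kind described can be sketched as follows. The data, the sensitive attribute, and the model below are synthetic and purely illustrative: the idea is simply to alter only the sensitive attribute and measure how much the model's predictions move.

```python
# Minimal sketch of a counterfactual bias check: flip only the sensitive
# attribute and observe whether predictions change. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
sensitive = rng.integers(0, 2, size=n).astype(float)  # hypothetical sensitive attribute
clinical = rng.normal(size=(n, 2))                    # legitimate predictors
# The outcome depends only on the clinical features, not on the attribute
y = (clinical[:, 0] + clinical[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.column_stack([sensitive, clinical])
model = LogisticRegression().fit(X, y)

X_cf = X.copy()
X_cf[:, 0] = 1.0 - X_cf[:, 0]  # counterfactual: flip the sensitive attribute
shift = np.abs(model.predict_proba(X)[:, 1] - model.predict_proba(X_cf)[:, 1])
# Near-zero shifts suggest the outputs do not hinge on the attribute
```

In a real audit, large shifts for some individuals would flag a model whose decisions depend on the protected attribute, either directly or through proxies, and would prompt further investigation.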

These risks point to a fundamental limitation of AI systems: trust. If humans cannot discern how AI algorithms make decisions, especially when a decision does not make intuitive sense, then they are apt to distrust the machine. This has been called the “black box” problem, or the explainability problem. As the stakes rise in the types of decisions that we entrust to AI systems, so does the need for insight into how those decisions are made. This need has given rise to yet another subfield of AI, so-called Explainable AI (XAI), which seeks to create tools and strategies to improve trust and transparency in AI systems.115

A concern specific to health care is the question of liability and reimbursement if AI systems begin to automate aspects of patient care. The culture of medicine is fairly conservative, and there could be substantial (and justifiable) pushback if inscrutable health care decisions are made or recommended by a machine while a physician remains medicolegally liable. Alternatively, if a machine makes an inscrutable recommendation or prediction but cannot explain how it was generated, should a physician be liable for not following the recommendation? In the same way, reimbursement paradigms may need to change; it seems intuitive that an insurer may balk at paying a physician for care provided by a machine. Both liability and reimbursement concerns must be addressed by the relevant stakeholders (patients, physicians, insurance companies, legislators, etc.) to define the “new normal” before widespread implementation of AI can occur. In addition, AI algorithms require massive quantities of data to function, necessitating interoperability between health care systems. Interoperability would allow data sharing and the creation of larger networked data sets, improving the ability to create AI algorithms that generalize beyond their narrow training data sets. Unfortunately, large electronic medical record companies are actively working to prevent interoperability,116 which will stifle innovation and development in AI systems. In fact, lack of alignment and cooperation between stakeholders is considered to be one of the greatest barriers to the widespread implementation of AI in health care.117

One final limitation is regulation. The ability of an AI system to learn with exposure to more data creates a substantial regulatory problem: the device continuously changes in response to new data, so validation cannot be a single event. The current FDA paradigm for medical device approval and regulation does not anticipate such a continuously learning system. AI systems intended for clinical use fall into the category of medical devices denoted Software as a Medical Device (SaMD), in which a software program functions as a medical device independent of any hardware. This is itself a fairly recent regulatory category, intended for standalone programs, such as smartphone applications, whose intended function is to “treat, diagnose, cure, mitigate, or prevent disease.” To date, approved SaMD using AI algorithms have required the algorithm to be “locked” so that repeated exposure to the same input produces a consistent output. This, of course, neutralizes the most powerful capability of AI programs (ie, adaptation), but it also assures consistent, safe, and effective function, which is the fundamental role of the FDA. The FDA has proposed a more flexible paradigm for AI-based SaMD utilizing a total product lifecycle approach, which envisions a transparent partnership between the FDA and the device manufacturer extending from premarket development through postmarket surveillance and testing, along with assurances of a culture of quality and excellence in the manufacturer.118 Although this framework is still just a proposal, at some point this or a similar regulatory paradigm will emerge to allow AI systems to be approved while still ensuring safety and efficacy for patients. The timeline for such a framework is unclear, but it is entirely possible that lack of regulatory innovation, not technological innovation, will delay the entry of “unlocked” AI algorithms into the health care market.

How will AI change the practice of anesthesiology?

Despite substantial active research, as summarized above, there remain significant technical, ethical, regulatory, and administrative barriers to overcome before widespread implementation of AI can proceed within anesthesiology. Assuming these barriers can be overcome, we can look to colleagues in pathology and radiology, where the dominant opinion is that although individual tasks within the specialty may be automated, no foreseeable technology will be able to automate a sufficient number of tasks to fully replace a physician.27 Looking wider, outside of health care, we can take lessons from other businesses and industries. A McKinsey report estimates that although almost half of all tasks within occupations have the potential to be automated by 2055, <5% of occupations are at risk of full automation.119 Other business strategists estimate that widespread AI-enabled automation will fuel a doubling of economic growth by 2035 and boost labor productivity by 40% through efficiencies from offloading routine cognitive work.120,121 Previous waves of automation, dating back to the 19th century, actually created more jobs than they displaced, but the new jobs will likely require workers to continuously gain new skills throughout their working lives to keep up with the pace of change.121

Overall, the future appears bright for predictive analytics within anesthesiology and perioperative medicine. Despite rapid innovation in the other areas of perioperative medicine discussed previously, the preoperative evaluation phase remains a relatively uncharted area for ML-based predictive tools. For example, the American Society of Anesthesiologists (ASA) physical status classification is often used to predict operative risk,122,123 but it has only moderate ability to predict in-hospital mortality and cardiac complications, with moderate inter-rater reliability in clinical practice.124 A small proof-of-concept study evaluating the use of ML algorithms in preoperative assessments showed promising results; however, larger prospective trials validating this work have yet to be performed.125 Given the amount of historical patient data available in the electronic health record, including documentation of previous anesthetic tolerance, the development of accurate and personalized preoperative risk prediction models seems conceivable. Implementation of robust predictive analytic platforms would allow the transition from reactive to proactive interventions, fundamentally changing how anesthesiology is practiced. Real-time clinical support tools will be at the forefront of this transformation, alerting providers to impending clinical decompensation before its actual occurrence; interventions could thus preempt clinical deterioration, not just treat it.

In terms of automation within the operating room (OR), rule-based closed-loop systems are the most likely candidates for a commercially available machine capable of automating the maintenance of general anesthesia. ML algorithms will likely lag behind closed-loop systems for a time, but if regulation ever allows for “unlocked” algorithms, they may well surpass rule-based closed-loop systems. It is unclear, however, whether automation will ever allow for the full replacement of humans in the management of general anesthesia, given the myriad tasks that must be attended to, which require a mix of cognitive and highly dexterous work. Advancements in machine vision may provide exciting new opportunities in airway management and ultrasound image interpretation, such as in regional anesthesia or point-of-care ultrasound use in the ICU or perioperative setting. At this time, however, machine vision of ultrasound images is much better at evaluating vascular structures than nerves.


Anesthesiologists should consider AI yet another tool with which to augment their unique skill sets. Fundamentally, anesthesiologists excel at using technology to monitor patient status in real time and intervening to maintain normal physiologic function or return a patient to such a state. There is clear evidence that aberrant vital signs often precede clinical deterioration,126 but current vital sign monitoring methods miss almost all of these episodes.127,128 Miniaturization of vital sign technology allows monitors to be embedded throughout the hospital system and, as outlined by Sessler and Saugel,129 it is now possible for all inpatients to have continuous, real-time vital sign monitoring. Aviation is a commonly used analogy for the practice of anesthesiology. In the future, the anesthesiologist may go from being the pilot in the cockpit (or OR) to the air traffic controller, monitoring all the planes in the sky (or patients in the hospital). By incorporating AI-enabled predictive analytics systems to improve workflows, eliminate tedious tasks, and reduce the cognitive workload of vigilantly monitoring so many patients simultaneously,38 the anesthesiologist would be empowered to switch roles as needed from air traffic controller to pilot, stepping into the OR, ICU, or ward to intervene before the moment of clinical deterioration. Rather than facing job displacement, the AI-augmented anesthesiologists of the future will be able to take their unique skills outside the OR and ICU to care for many more patients, thereby bringing value to hospitals and health systems by efficiently improving perioperative outcomes.

Conflict of interest disclosure

The authors declare that they have nothing to disclose.


1. Johnson CC. Logistic matrix factorization for implicit feedback data. Available at: Accessed July 9, 2020.
2. NVIDIA Titan RTX. Available at: Accessed July 9, 2020.
3. Supercomputer TOP500 November List; 2004. Available at: Accessed July 9, 2020.
4. Jha K, Doshi A, Patel P, et al. A comprehensive review on automation in agriculture using artificial intelligence. Artif Intell Agri. 2019;2:1–12.
5. Patricio DI, Rieder R. Computer vision and artificial intelligence in precision agriculture for grain crops: a systematic review. Comput Electron Agr. 2018;153:69–81.
6. Poyry E, Hietaniemi N, Parvinen P, et al. Personalized product recommendations: evidence from the field. Proceedings of the 50th Hawaii International Conference on System Sciences 2017. Available at: Accessed July 9, 2020.
7. Brin S, Page L. The anatomy of a large-scale hypertextual web search engine. Comput Netw ISDN Syst. 1998;30:107–117.
8. Sadat A, Ren M, Pokrovsky A, et al. Jointly learnable behavior and trajectory planning for self-driving vehicles. Available at: Accessed July 9, 2020.
9. Jean N, Burke M, Xie M, et al. Combining satellite imagery and machine learning to predict poverty. Science. 2016;353:790–794.
10. Chalkidis I, Androutsopoulos I. A deep learning approach to contract element extraction. Available at: Accessed July 9, 2020.
11. Amin A, Al-Obeidat F, Shah B, et al. Customer churn prediction in telecommunication industry using data certainty. J Bus Res. 2019;94:290–301.
12. Wong K, Wang S, Ren M, et al. Identifying unknown instances for autonomous driving. Available at: Accessed July 9, 2020.
13. Xiong Y, Liao R, Zhao H, et al. UPSNet: a unified panoptic segmentation network. Available at: Accessed July 9, 2020.
14. Yan C, Xu W, Liu J. Can you trust autonomous vehicles: contactless attacks against sensors of self-driving vehicle. Available at: Accessed July 9, 2020.
15. Zhou G, Mou N, Fan Y, et al. Deep interest evolution network for click-through rate prediction. Available at: Accessed July 9, 2020.
16. Silver D, Huang A, Maddison CJ, et al. Mastering the game of Go with deep neural networks and tree search. Nature. 2016;529:354–359.
17. Silver D, Hubert T, Schrittwieser J, et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science. 2018;362:1140–1144.
18. Vinyals O, Babuschkin I, Czarnecki WM, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature. 2019;575:350–354.
19. David PA. The dynamo and the computer: a historical perspective of the modern productivity paradox. Am Econ Rev. 1990;80:335–361.
20. Jha S, Topol EJ. Adapting to artificial intelligence: radiologists and pathologists as information specialists. JAMA. 2016;316:2253–2254.
21. Lehman CD, Wellman RD, Buist DS, et al. Diagnostic accuracy of digital screening mammography with and without computer-aided detection. JAMA Intern Med. 2015;175:1828–1837.
22. Kohli A, Jha S. Why CAD failed in mammography. J Am Coll Radiol. 2018;15:533–537.
23. Paiva OA, Prevedello LM. The potential impact of artificial intelligence in radiology. Radiol Bras. 2017;50:V–VI.
24. McKinney SM, Sieniek M, Godbole V, et al. International evaluation of an AI system for breast cancer screening. Nature. 2020;577:89–94.
25. Ardila D, Kiraly AP, Bharadwaj S, et al. End-to-end lung cancer screening with three dimensional deep learning on low-dose chest computed tomography. Nat Med. 2019;25:954–961.
26. Majkowska A, Mittal S, Steiner DF, et al. Chest radiograph interpretation with deep-learning models: assessment with radiologist-adjudicated reference standards and population-adjusted evaluation. Radiology. 2020;294:421–431.
27. Chan S, Siegel EL. Will machine learning end the viability of radiology as a thriving medical specialty? Br J Radiol. 2019;91:20180416.
28. Martin Noguerol T, Paulano-Godino F, Martin-Valdivia MT, et al. Strengths, weaknesses, opportunities, and threats analysis of artificial intelligence and machine learning applications in radiology. J Am Coll Radiol. 2019;16:1239–1247.
29. Liew CJ, Krishnaswamy P, Cheng LT, et al. Artificial intelligence and radiology in Singapore: championing a new age of augmented imaging for unsurpassed patient care. Ann Acad Med Singapore. 2019;48:16–24.
30. Serag A, Ion-Margineanu A, Qureshi H, et al. Translational AI and deep learning in diagnostic pathology. Front Med (Lausanne). 2019;6:185.
31. Rashidi HH, Tran NK, Betts EV, et al. Artificial intelligence and machine learning in pathology: the present landscape of supervised methods. Acad Pathol. 2019;6:2374289519873088.
32. Liu Y, Gadepalli K, Norouzi M, et al. Detecting cancer metastases on gigapixel pathology images. Available at: Accessed July 9, 2020.
33. Nagpal K, Foote D, Liu Y, et al. Development and validation of a deep learning algorithm for improving Gleason scoring of prostate cancer. NPJ Digit Med. 2019;2:48.
34. Yamamoto Y, Tsuzuki T, Akatsuka J, et al. Automated acquisition of explainable knowledge from unannotated histopathology images. Nat Commun. 2019;10:5642.
35. RIKEN. Artificial intelligence identifies previously unknown features associated with cancer recurrence. Available at: Accessed July 9, 2020.
36. Vogenberg FR. Predictive and prognostic models: implications for healthcare decision-making in a modern recession. Am Health Drug Benefits. 2009;2:218–222.
37. Christodoulou E, Ma J, Collins GS, et al. A systematic review shows no performance benefit of machine learning over logistic regression for clinical prediction models. J Clin Epidemiol. 2019;110:12–22.
38. Ngiam KY, Khor IW. Big data and machine learning algorithms for health-care delivery. Lancet Oncol. 2019;20:e262–e273.
39. Goto T, Camargo CA Jr, Faridi MK, et al. Machine learning-based prediction of clinical outcomes for children during emergency department triage. JAMA Netw Open. 2019;2:e186937.
40. Wong A, Plasek JM, Montecalvo SP, et al. Natural language processing and its implications for the future of medication safety: a narrative review of recent advances and challenges. Pharmacotherapy. 2018;38:822–841.
41. FDA News Release. FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems. Available at: Accessed July 9, 2020.
42. Wellner B, Grand J, Canzone E, et al. Predicting unplanned transfers to the intensive care unit: a machine learning approach leveraging diverse clinical elements. JMIR Med Inform. 2017;5:e45.
43. Bentley P, Ganesalingam J, Carlton Jones AL, et al. Prediction of stroke thrombolysis outcome using CT brain machine learning. Neuroimage Clin. 2014;4:635–640.
44. Jiang F, Jiang Y, Zhi H, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2:230–243.
45. Attia ZI, Noseworthy PA, Lopez-Jimenez F, et al. An artificial intelligence-enabled ECG algorithm for the identification of patients with atrial fibrillation during sinus rhythm: a retrospective analysis of outcome prediction. Lancet. 2019;394:861–867.
46. Attia ZI, Kapa S, Lopez-Jimenez F, et al. Screening for cardiac contractile dysfunction using an artificial intelligence-enabled electrocardiogram. Nat Med. 2019;25:70–74.
47. Choi E, Bahadori MT, Schuetz A, et al. Doctor AI: predicting clinical events via recurrent neural networks. JMLR Workshop Conf Proc. 2016;56:301–318.
48. Miotto R, Li L, Kidd BA, et al. Deep patient: an unsupervised representation to predict the future of patients from the electronic health records. Sci Rep. 2016;6:26094.
49. Lo AW, Siah KW, Wong CH. Machine learning with statistical imputation for predicting drug approvals. Harvard Data Science Review 2019. Available at: Accessed July 9, 2020.
50. Shah P, Kendall F, Khozin S, et al. Artificial intelligence and machine learning in clinical development: a translational perspective. NPJ Digit Med. 2019;2:69.
51. Van Norman GA. Drugs, devices, and the FDA: part 1: an overview of approval processes for drugs. JACC Basic Transl Sci. 2016;1:170–179.
52. Lee CK, Hofer I, Gabel E, et al. Development and validation of a deep neural network model for prediction of postoperative in-hospital mortality. Anesthesiology. 2018;129:649–662.
53. Lundberg SM, Nair B, Vavilala MS, et al. Explainable machine-learning predictions for the prevention of hypoxaemia during surgery. Nat Biomed Eng. 2018;2:749–760.
54. Hatib F, Jian Z, Buddi S, et al. Machine-learning algorithm to predict hypotension based on high-fidelity arterial pressure waveform analysis. Anesthesiology. 2018;129:663–674.
55. Kendale S, Kulkarni P, Rosenberg AD, et al. Supervised machine-learning predictive analytics for prediction of postinduction hypotension. Anesthesiology. 2018;129:675–688.
56. Jeong YS, Kang AR, Jung W, et al. Prediction of blood pressure after induction of anesthesia using deep learning: a feasibility study. Appl Sci. 2019;9:5135.
57. Centers for Disease Control & Prevention. Data Reports. Available at: Accessed July 9, 2020.
58. Kumar A, Roberts D, Wood KE, et al. Duration of hypotension before initiation of effective antimicrobial therapy is the critical determinant of survival in human septic shock. Crit Care Med. 2006;34:1589–1596.
59. Calvert JS, Price DA, Chettipally UK, et al. A computational approach to early sepsis detection. Comput Biol Med. 2016;74:69–73.
60. Desautels T, Calvert J, Hoffman J, et al. Prediction of sepsis in the intensive care unit with minimal electronic health record data: a machine learning approach. JMIR Med Inform. 2016;4:e28.
61. Shimabukuro DW, Barton CW, Feldman MD, et al. Effect of a machine learning-based severe sepsis prediction algorithm on patient survival and hospital length of stay: a randomised clinical trial. BMJ Open Respir Res. 2017;4:e000234.
62. Nemati S, Holder A, Razmi F, et al. An interpretable machine learning model for accurate prediction of sepsis in the ICU. Crit Care Med. 2018;46:547–553.
63. Peng X, Ding Y, Wihl D, et al. Improving sepsis treatment strategies by combining deep and kernel-based reinforcement learning. AMIA Annu Symp Proc. 2018;2018:887–896.
64. Pirracchio R, Petersen ML, Carone M, et al. Mortality prediction in intensive care units with the super ICU learner algorithm (SICULA): a population-based study. Lancet Respir Med. 2015;3:42–52.
65. Delahanty RJ, Kaufman D, Jones SS. Development and evaluation of an automated machine learning algorithm for in-hospital mortality risk adjustment among critical care patients. Crit Care Med. 2018;46:e481–e488.
66. Parreco J, Hidalgo A, Parks JJ, et al. Using artificial intelligence to predict prolonged mechanical ventilation and tracheostomy placement. J Surg Res. 2018;228:179–187.
67. Parreco JP, Hidalgo AE, Badilla AD, et al. Predicting central line-associated bloodstream infections and mortality using supervised machine learning. J Crit Care. 2018;45:156–162.
68. Tomašev N, Glorot X, Rae JW, et al. A clinically applicable approach to continuous prediction of future acute kidney injury. Nature. 2019;572:116–119.
69. Alderden J, Pepper GA, Wilson A, et al. Predicting pressure injury in critical care patients: a machine-learning model. Am J Crit Care. 2018;27:461–468.
70. Bickford RG. Automatic electroencephalographic control of general anesthesia. Electroencephalogr Clin Neurophysiol. 1950;2:93–96.
71. Brogi E, Cyr S, Kazan R, et al. Clinical performance and safety of closed-loop systems: a systematic review and meta-analysis of randomized controlled trials. Anesth Analg. 2017;124:446–455.
72. Pasin L, Nardelli P, Pintaudi M, et al. Closed-loop delivery systems versus manually controlled administration of total IV anesthesia: a meta-analysis of randomized clinical trials. Anesth Analg. 2017;124:456–464.
73. Puri GD, Mathew PJ, Biswas I, et al. A multicenter evaluation of a closed-loop anesthesia delivery system: a randomized controlled trial. Anesth Analg. 2016;122:106–114.
74. Summary of safety and effectiveness data. United States Food & Drug Administration. Available at: Accessed July 9, 2020.
75. Goudra B, Singh PM. Failure of Sedasys: destiny or poor design? Anesth Analg. 2017;124:686–688.
76. J&J to stop selling automated sedation system Sedasys. Available at: Accessed July 9, 2020.
77. Hemmerling TM, Zaouter C, Tang L, et al. McSleepy—a novel completely automatic anesthesia delivery system: performance evaluation in comparison to manual control in Abstracts of 2010 CAS Meeting. Can J Anesth. 2010;57:116.
78. Zaouter C, Hemmerling TM, Lanchon R, et al. The feasibility of a completely automated total IV anesthesia drug delivery system for cardiac surgery. Anesth Analg. 2016;123:885–893.
79. Restoux A, Grassin-Delyle S, Liu N, et al. Pilot study of closed-loop anaesthesia for liver transplantation. Br J Anaesth. 2016;117:332–340.
80. Joosten A, Rinehart J, Bardaji A, et al. Anesthetic management using multiple closed-loop systems and delayed neurocognitive recovery: a randomized controlled trial. Anesthesiology. 2020;132:253–266.
81. Connor CW. Artificial intelligence and machine learning in anesthesiology. Anesthesiology. 2019;131:1346–1359.
82. Alexander JC, Joshi GP. Anesthesiology, automation, and artificial intelligence. Proc (Bayl Univ Med Cent). 2017;31:117–119.
83. Hemmerling TM. Robots will perform anesthesia in the near future. Anesthesiology. 2020;132:219–220.
84. Metz C. Google’s AI wins pivotal second game in match with Go grandmaster. Wired Magazine. 2016. Available at: Accessed July 9, 2020.
85. Metz C. In two moves, AlphaGo and Lee Sedol redefined the future. Wired Magazine. 2016. Available at: Accessed July 9, 2020.
86. Dumont GA, Ansermino JM. Closed-loop control of anesthesia: a primer for anesthesiologists. Anesth Analg. 2013;117:1130–1138.
87. Veselis RA, Reinsel R, Wronski M. Analytical methods to differentiate similar electroencephalographic spectra: neural network and discriminant analysis. J Clin Monit. 1993;9:257–267.
88. Ortolani O, Conti A, Di Filippo A, et al. EEG signal processing in anaesthesia: use of a neural network technique for monitoring depth of anaesthesia. Br J Anaesth. 2002;88:644–648.
89. Mirsadeghi M, Behnam H, Shalbaf R, et al. Characterizing awake and anesthetized states using a dimensionality reduction method. J Med Syst. 2016;40:13.
90. Lee HC, Ryu HG, Chung EJ, et al. Prediction of bispectral index during target-controlled infusion of propofol and remifentanil: a deep learning approach. Anesthesiology. 2018;128:492–501.
91. Jiao Y, He B, Tong X, et al. Intraoperative monitoring of nociception for opioid administration: a meta-analysis of randomized controlled trials. Minerva Anestesiol. 2019;85:522–530.
92. Meijer FS, Niesters M, van Velzen M, et al. Does nociception monitor-guided anesthesia affect opioid consumption? A systematic review of randomized controlled trials. J Clin Monit Comput. 2019;34:629–641.
93. Ledowski T. Objective monitoring of nociception: a review of current commercial solutions. Br J Anaesth. 2019;123:e312–e321.
94. Funcke S, Pinnschmidt HO, Wesseler S, et al. Guiding opioid administration by 3 different analgesia nociception monitoring indices during general anesthesia alters intraoperative sufentanil consumption and stress hormone release: a randomized controlled pilot study. Anesth Analg. 2020;130:1264–1273.
95. Ben-Israel N, Kliger M, Zuckerman G, et al. Monitoring the nociception level: a multi-parameter approach. J Clin Monit Comput. 2013;27:659–668.
96. Rinehart J, Liu N, Alexander B, et al. Review article: closed-loop systems in anesthesia: is there a potential for closed-loop fluid management and hemodynamic optimization? Anesth Analg. 2012;114:130–143.
97. Laffey JG, Tobin E, Boylan JF, et al. Assessment of a simple artificial neural network for predicting residual neuromuscular block. Br J Anaesth. 2003;90:48–52.
98. Lendl M, Schwarz H, Romeiser HJ, et al. Nonlinear model-based predictive control of non-depolarizing muscle relaxants using neural networks. J Clin Monit Comput. 1999;15:271–278.
99. Shieh JS, Fan SZ, Chang LW, et al. Hierarchical rule–based monitoring and fuzzy logic control for neuromuscular block. J Clin Monit Comput. 2000;15:583–592.
100. Santanen OA, Svartling N, Haasio J, et al. Neural nets and prediction of the recovery rate from neuromuscular block. Eur J Anaesthesiol. 2003;20:87–92.
101. Le Guen M, Liu N, Chazot T, et al. Closed-loop anesthesia. Minerva Anestesiol. 2016;82:573–581.
102. Schaublin J, Derighetti M, Feigenwinter P, et al. Fuzzy logic control of mechanical ventilation during anaesthesia. Br J Anaesth. 1996;77:636–641.
103. Martinoni EP, Pfister CHA, Stadler KS, et al. Model-based control of mechanical ventilation: design and clinical validation. Br J Anaesth. 2004;92:800–807.
104. Gottschalk A, Hyzer MC, Geer RT. A comparison of human and machine-based predictions of successful weaning from mechanical ventilation. Med Decis Making. 2000;20:160–169.
105. Cuendet GL, Schoettker P, Yuce A, et al. Facial image analysis for fully automatic prediction of difficult endotracheal intubation. IEEE Trans Biomed Eng. 2016;63:328–339.
106. Connor CW, Segal S. Accurate classification of difficult intubation by computerized facial analysis. Anesth Analg. 2011;112:84–93.
107. Matava C, Pankiv E, Raisbeck S, et al. A convolutional neural network for real time classification, identification, and labelling of vocal cord and tracheal using laryngoscopy and bronchoscopy video. J Med Syst. 2020;44:44.
108. Smistad E, Lovstakken L. Vessel detection in ultrasound images using deep convolutional neural networks. DLMIA. Lecture Notes in Computer Science, vol 10008; 2016:30–38.
109. Hetherington J, Lessoway V, Gunka V, et al. SLIDE: automatic spine level identification system using a deep convolutional neural network. Int J CARS. 2017;12:1189–1198.
110. Pesteie M, Lessoway V, Abolmaesumi P, et al. Automatic localization of the needle target for ultrasound-guided epidural injections. IEEE Trans Med Imaging. 2018;37:81–92.
111. Smistad E, Johansen KF, Iversen DH, et al. Highlighting nerves and blood vessels for ultrasound-guided axillary nerve block procedures using neural networks. J Med Imaging (Bellingham). 2018;5:044004.
112. Huang C, Zhou Y, Tan W, et al. Applying deep learning in recognizing the femoral nerve block region on ultrasound images. Ann Transl Med. 2019;7:453.
113. Angwin J, Larson J, Mattu S, et al. Machine bias. ProPublica 2016. Available at: Accessed July 9, 2020.
114. Manyika J, Silberg J, Presten B. What do we do about the biases in AI? Harvard Business Review 2019. Available at: Accessed July 9, 2020.
115. Adadi A, Berrada M. Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access. 2018;6:52138–52160.
116. Farr C. Epic’s CEO is urging hospital customers to oppose rules that would make it easier to share medical info. CNBC 2020. Available at: Accessed July 9, 2020.
117. Shaw J, Rudzicz F, Jamieson T, et al. Artificial intelligence and the implementation challenge. J Med Internet Res. 2019;21:e13659.
118. Proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD): discussion paper and request for feedback. U.S. Food & Drug Administration. Available at: Accessed July 9, 2020.
119. Manyika J, Chui M, Miremadi M, et al. A future that works: automation, employment, and productivity. McKinsey & Company, McKinsey Global Institute 2017. Available at: Accessed July 9, 2020.
120. Purdy M, Daugherty P. Why artificial intelligence is the future of growth. Accenture 2016. Available at: Accessed July 9, 2020.
121. The Economist. Return of the Machinery Question. The Economist 2016. Available at: Accessed July 9, 2020.
122. Saklad M. Grading of patients for surgical procedures. Anesthesiology. 1941;2:281–284.
123. ASA House of Delegates/Executive Committee. ASA Physical Status Classification System. Schaumburg, IL: American Society of Anesthesiologists; 2014. Available at: Accessed July 9, 2020.
124. Sankar A, Johnson SR, Beattie WS, et al. Reliability of the American Society of Anesthesiologists physical status scale in clinical practice. Br J Anaesth. 2014;113:424–432.
125. Karpagavalli S, Jamuna KS, Vijaya MS. Machine learning approach for preoperative anaesthetic risk prediction. Int J Recent Trends Engin Technol. 2009;1:19–22.
126. Jones D, Mitchell I, Hillman K, et al. Defining clinical deterioration. Resuscitation. 2013;84:1029–1034.
127. Turan A, Chang C, Cohen B, et al. Incidence, severity, and detection of blood pressure perturbations after abdominal surgery: a prospective blinded observational study. Anesthesiology. 2019;130:550–559.
128. Sun Z, Sessler DI, Dalton JE, et al. Postoperative hypoxemia is common and persistent: a prospective blinded observational study. Anesth Analg. 2015;121:709–715.
129. Sessler DI, Saugel B. Beyond “failure to rescue”: the time has come for continuous ward monitoring. Br J Anaesth. 2019;122:306–310.
Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.