BASIC SCIENCE

Artificial Intelligence-enabled, Real-time Intraoperative Ultrasound Imaging of Neural Structures Within the Psoas

Validation in a Porcine Spine Model

Carson, Tyler DO; Ghoshal, Goutam PhD; Cornwall, George Bryan PhD, MBA, PEng; Tobias, Richard; Schwartz, David G. MD, MBA; Foley, Kevin T. MD

doi: 10.1097/BRS.0000000000003704

Lateral lumbar interbody fusion (LLIF) is an established and effective surgical technique.1–5 However, the trans-psoas approach puts elements of the lumbar plexus at risk.6–8 To mitigate this, electromyography (EMG) is used to detect and avoid motor nerves while traversing the psoas and accessing the disc space. Proximity to motor nerves is inferred by stimulating a probe within the psoas and measuring the EMG response.6–8 Unfortunately, this process provides neither true spatial nerve location nor identification of sensory branches. Furthermore, patient comorbidities (e.g., diabetes), anesthetic paralytics, electrical interference, and technical issues can reduce the accuracy of EMG in this setting.9–11 In addition, LLIF necessitates the use of C-arm fluoroscopy or other ionizing radiation imaging modalities. There is increasing concern about the risk of radiation exposure to surgeons and patients in lateral spine surgery, as well as in other spinal procedures.12–17

Ultrasound is an important diagnostic tool because it facilitates the visualization of patient anatomy without the risk of radiation exposure associated with other imaging modalities, such as fluoroscopy and computerized tomography (CT).18–20 The use of ultrasound is ubiquitous in some disciplines of medicine.18,21,22 Advances have occurred in many aspects of ultrasound technology including sensors, computer hardware, and software. It is used in spinal applications,23,24 and has even been used in the cervical spine in outer space.25,26

A recent review summarized the use of ultrasound in spinal diagnostic and therapeutic applications,27 and another reviewed the use of ultrasound in spine surgery,24 noting that intraoperative ultrasound had been used mainly for tumors or calcified thoracic discs. A more recent study reported the clinical evaluation of ultrasound for lateral lumbar spine surgery.28 The potential appeal of ultrasound for lateral spine surgery is its ability to visualize soft tissues and vasculature using Doppler imaging, with color overlays indicating the direction and velocity of blood flow.18,28,29 In addition, ultrasound has the potential to reduce radiation exposure to surgeons and patients.

Given the limitations of current methodologies for intraoperatively localizing and visualizing neural structures during trans-psoas surgery, a need exists for improving the safety of LLIF. Ultrasound technology, enhanced with artificial intelligence (AI)-derived neural detection algorithms, could prove useful for doing so. In the present study, we evaluated the use of an AI-enabled, real-time intraoperative ultrasound system for localization of nerves within the psoas in an in vivo porcine model.

MATERIAL AND METHODS

Animal Model and Handling

In vivo tests were performed in porcine tissue30,31 using an FDA-cleared ultrasound imaging system (SonoVision™, Tissue Differentiation Intelligence, USA) to evaluate the performance of the system in imaging and identifying tissue structures. The experimental protocol was approved by the Institutional Animal Care and Use Committee at the Medical Education and Research Institute (MERI), Memphis, TN. The pigs were anesthetized and placed in a lateral decubitus position to simulate the LLIF approach. A retroperitoneal surgical approach exposed the relevant anatomy, including the psoas. The experiments were conducted in two phases with a total of 50 pigs. The first phase involved using ultrasound imaging and subsequent open dissection to train the AI software to distinguish neural anatomy in the psoas environment. In the second phase, five of the 50 pigs were used to test the algorithm performance.

Ultrasound Imaging

The AI-enhanced ultrasound imaging system shown in Figure 1 was used to image the target region and detect neurovascular structures during LLIF spine surgery. The ultrasound imaging was performed in conjunction with the porcine LLIF approach in the vicinity of the L4 to L6 vertebral bodies. The software was used to identify neurovascular features in the targeted region of interest. In this study, the imaging was performed using an ultrasound probe (Beluga1, Tissue Differentiation Intelligence, USA) with a 128-element linear array operating at a central frequency of 10 MHz.

Figure 1: SonoVision™, an artificial intelligence-enhanced ultrasound imaging system.

After the lateral skin incision, the peritoneum was pushed anteriorly to access the psoas muscle. The probe was inserted through the incision to reach the surface of the psoas muscle. The surgeon used C-arm fluoroscopy to locate the approximate region of interest for scanning. The psoas was then scanned utilizing the tissue identification features of the ultrasound system, helping the surgeon to visualize neurovascular structures at each scan location. C-arm fluoroscopy was used to verify the location of the probe above the target region of the spine (Figure 2), specifically with respect to the vertebral body.

Figure 2: C-arm fluoroscopic image of the ultrasound probe above the vertebral body of a pig spine.

Once the location was finalized, the surgeon used a surgical arm attachment to hold the probe stationary at the target location. After the probe position was fixed, the surgeon utilized the ultrasound system to confirm the desired path and then inserted a metal pin (1.4 mm diameter, 10–15 cm long) into the vertebral body or intervertebral disc space with the intent of targeting either a nerve or a clear path. Here, "clear path" is defined as a region without neurovascular structures, such that the metal pin could be inserted without damaging surrounding neurovascular structures. The metal pins were inserted so that, after imaging and euthanasia, the tissue could be dissected to validate that the ultrasonically detected features matched the dissected tissue in proximity to the pins. All dissections were performed by a neurosurgeon, and the dissected tissue served as the ground truth for the presence or absence of anatomical features in the region of interest.

Image Processing and Classification

Segmentation

During the initial phase of the study, B-mode ultrasound images were acquired and compared with the ground truth information established through dissection. Specifically, the nerve regions in the acquired B-mode images were annotated based on the dissection notes. The nerve regions were verified through dissection, and segmentation algorithms based on image processing techniques were developed in parallel. The segmentation algorithm separated the B-mode image into one or more regions that corresponded with the presence or absence of neural tissue. These segmented regions were sent into a classification training process in which each region was classified as either a "nerve" or "other" region.

Tapered windowing functions were used to suppress the edges of the given B-mode image, followed by application of time gain compensation (TGC) to account for attenuation as a function of depth. The brightness of the image was also normalized. Dilation and erosion of the image were then performed, which helped form different closed or open regions in the given B-mode image. Each of these segmented regions was selected and further analyzed with respect to shape, area, aspect ratio, solidity, and threshold values. The segments that satisfied these threshold criteria were defined as either "nerve" or "other" (i.e., not-nerve) regions and passed into the machine learning model for classification. During the training phase, each of the contours was refined based on the ground truth data obtained through dissection.
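
The exact preprocessing parameters were not reported. The following Python sketch illustrates one way the described pipeline (edge-suppressing window, TGC, brightness normalization, dilation/erosion, and region filtering by shape, area, aspect ratio, and solidity) could be assembled with NumPy, SciPy, and scikit-image. The `segment_candidate_regions` helper and all parameter values are illustrative assumptions, not the study's implementation.

```python
# Hypothetical sketch of the segmentation preprocessing described above.
# Window shape, TGC slope, morphology radii, and region thresholds are
# illustrative assumptions only.
import numpy as np
from scipy.signal.windows import tukey
from skimage import morphology, measure

def segment_candidate_regions(bmode, depth_mm, tgc_db_per_cm=1.0,
                              min_area_px=200, min_solidity=0.6,
                              max_aspect_ratio=5.0):
    """Return candidate region masks from a B-mode image (rows = depth)."""
    img = bmode.astype(np.float32)

    # 1) Tapered window to suppress the lateral edges of the image.
    lateral_window = tukey(img.shape[1], alpha=0.25)
    img *= lateral_window[np.newaxis, :]

    # 2) Time gain compensation: amplify deeper rows to offset attenuation.
    depth_cm = np.linspace(0.0, depth_mm / 10.0, img.shape[0])
    tgc = 10.0 ** (tgc_db_per_cm * depth_cm / 20.0)
    img *= tgc[:, np.newaxis]

    # 3) Normalize brightness to [0, 1] and threshold.
    img = (img - img.min()) / (np.ptp(img) + 1e-9)
    mask = img > img.mean()

    # 4) Dilation followed by erosion to form closed regions.
    mask = morphology.binary_dilation(mask, morphology.disk(3))
    mask = morphology.binary_erosion(mask, morphology.disk(3))

    # 5) Keep regions that satisfy shape, area, aspect-ratio, and solidity criteria.
    labeled = measure.label(mask)
    candidates = []
    for region in measure.regionprops(labeled):
        aspect = region.major_axis_length / max(region.minor_axis_length, 1e-9)
        if (region.area >= min_area_px and region.solidity >= min_solidity
                and aspect <= max_aspect_ratio):
            candidates.append(labeled == region.label)
    return candidates  # each mask is passed to the CNN classifier
```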

Real-time Image Annotation and Display

To increase computational efficiency and reduce computation time as well as to display detected regions in real-time, a GeForce RTX 2080 Graphics Card (NVIDIA, USA) was used in the ultrasound system to achieve a frame rate >15 frames per second (fps).

The U-Net convolutional neural network (CNN) algorithm32–34 was used to classify and detect bone and muscle regions in a given B-mode image. In the U-Net architecture, the B-mode image was the input to the network and the output was a probability map of the image for both bone and muscle. This probability map was then color-coded in RGB and alpha-blended with appropriate transparency onto the ultrasound image and displayed on the graphical user interface (GUI), such that the background and the overlay were both visible.
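
As a rough illustration of this overlay step, the sketch below alpha-blends bone and muscle probability maps onto the grayscale B-mode background so that both remain visible. The color assignments, alpha value, and the `overlay_probability_maps` helper are assumptions for illustration only; the article does not specify them.

```python
# Minimal sketch, assuming per-pixel probability maps in [0, 1] from a
# U-Net style network. Colors and alpha are illustrative assumptions.
import numpy as np

def overlay_probability_maps(bmode, prob_bone, prob_muscle, alpha=0.4):
    """Blend bone/muscle probability maps onto a grayscale B-mode image."""
    gray = (bmode - bmode.min()) / (np.ptp(bmode) + 1e-9)
    rgb = np.stack([gray, gray, gray], axis=-1)           # grayscale background

    bone_color = np.array([0.0, 0.6, 1.0])                # assumed color for bone
    muscle_color = np.array([1.0, 0.2, 0.2])              # assumed color for muscle

    for prob, color in ((prob_bone, bone_color), (prob_muscle, muscle_color)):
        weight = alpha * prob[..., np.newaxis]             # per-pixel transparency
        rgb = (1.0 - weight) * rgb + weight * color        # alpha blend; background stays visible
    return (rgb * 255).astype(np.uint8)
```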

The segmentation algorithm was used to partition the B-mode image into regions and to collect the corresponding regional contours. These contours were passed into the CNN model for classification of each segment as a nerve or other region (i.e., not a nerve). The CNN model was implemented using TensorFlow (Google, USA)35 to classify the segmented regions. The classified nerve regions were then overlaid on the B-mode image and indicated with a solid yellow color.
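
The article states only that a TensorFlow CNN classified each segmented region as nerve or other; the network architecture, patch size, and helper names below are assumptions. This hedged sketch shows how cropped contour regions might be resized and scored by a small Keras classifier.

```python
# Hypothetical sketch of the segment classification step. The architecture
# and 64x64 patch size are assumptions, not the study's model.
import numpy as np
import tensorflow as tf

def build_segment_classifier(patch_size=64):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(patch_size, patch_size, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),    # P(nerve)
    ])

def classify_segments(model, bmode, region_masks, patch_size=64):
    """Crop each candidate region's bounding box and classify it as nerve/other."""
    labels = []
    for mask in region_masks:
        rows, cols = np.nonzero(mask)
        patch = bmode[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
        patch = tf.image.resize(patch[..., np.newaxis], (patch_size, patch_size))
        prob = float(model(patch[tf.newaxis, ...], training=False)[0, 0])
        labels.append("nerve" if prob > 0.5 else "other")
    return labels
```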

Training and Validation of SonoVision Algorithm

In the first phase, training of the classification algorithm was performed using 36,000 and 10,000 B-mode images for nerve and bone/muscle regions, respectively. To test various aspects of the system, both nerve regions and clear path regions were selected. The protocol was to target a nerve for some of the metal pin insertions and, in other cases, to find a clear path to insert the metal pin without damaging any surrounding neurovascular structures. The aim was to validate the software both when it indicated a nerve region and when it indicated a clear path. In the second phase of the study, the trained detection algorithm was tested using approximately 4800 B-mode images.
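
A minimal, hypothetical sketch of this two-phase workflow is shown below: the classifier is fit on phase-one annotated patches and then evaluated on the held-out phase-two images. The `load_patches` loader is a placeholder, and the optimizer, loss, and epoch count are assumptions rather than the study's training configuration.

```python
# Hedged sketch of two-phase training and validation.
import tensorflow as tf

model = build_segment_classifier()                      # defined in the sketch above
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

train_x, train_y = load_patches("phase1_annotations")   # hypothetical loader: patches + 0/1 labels
test_x, test_y = load_patches("phase2_annotations")

model.fit(train_x, train_y, validation_split=0.1, epochs=20, batch_size=32)
model.evaluate(test_x, test_y)                          # phase-two hold-out performance
```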

Figure 3A and B demonstrates the ultrasound system GUI identifying neural tissue within the psoas and the corresponding anatomical dissection. This validation process was followed throughout the experimental study for both training the CNN model in phase one and for validating and testing the trained CNN model in phase two.

Figure 3: (A) SonoVision screen identifying a nerve region (yellow) at approximately 20 mm depth in the muscle; (B) dissected validation of the nerve identified in (A).

Figure 4A and B demonstrates an example of clear psoas anatomy with no neural tissue present, along with the corresponding anatomical dissection. The Doppler color flow imaging mode was also enabled, allowing the surgeon to check for any vessels along the insertion path. An example of Doppler mode indicating blood flow during real-time imaging is shown in Figure 4C.

Figure 4: (A) SonoVision screen indicating a clear path through the psoas, without nerve regions, and the vertebral body cortical surface beneath the psoas; (B) dissected tissue indicating the muscle region is clear of neural tissue; (C) SonoVision Doppler mode showing blood flow in a vessel.

Quantitative Metrics

To determine the similarity of the anatomical region detected by the algorithm and the labeled annotated ground truth, the Dice coefficient was calculated as follows:36,37

$$\text{Dice} = \frac{2\,|X \cap Y|}{|X| + |Y|} \times 100 \qquad (1)$$

where |X| and |Y| are the cardinalities of the two sets (i.e., the number of elements in each set). Here the two sets refer to the labeled annotated ground truth region and the algorithm-detected region on a given B-mode image, respectively, for each of the tissue types. Based on the Dice score, the true positive (TP), true negative (TN), false positive (FP), and false negative (FN) values were defined. The sensitivity was defined as a statistical measure of the proportion of the actual positives that were correctly identified and was calculated as follows:

$$\text{Sensitivity} = \frac{TP}{TP + FN} \times 100 \qquad (2)$$

Specificity was defined as a statistical measure of the proportion of the actual negatives that were correctly identified and was calculated as follows:

$$\text{Specificity} = \frac{TN}{TN + FP} \times 100 \qquad (3)$$

The accuracy was calculated as follows:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \times 100 \qquad (4)$$
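
For reference, the short Python functions below compute Equations (1) through (4) from binary masks and from counted TP, TN, FP, and FN values. This is a straightforward restatement of the formulas above, not the study's code.

```python
# Sketch of the quantitative metrics in Equations (1)-(4).
import numpy as np

def dice_score(truth, pred):
    """Dice = 2|X ∩ Y| / (|X| + |Y|) × 100, for two binary masks (Equation 1)."""
    truth, pred = truth.astype(bool), pred.astype(bool)
    intersection = np.logical_and(truth, pred).sum()
    denom = truth.sum() + pred.sum()
    return 100.0 * 2.0 * intersection / denom if denom else 100.0

def sensitivity_specificity_accuracy(tp, tn, fp, fn):
    """Equations (2)-(4), expressed as percentages."""
    sensitivity = 100.0 * tp / (tp + fn)
    specificity = 100.0 * tn / (tn + fp)
    accuracy = 100.0 * (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy
```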

During the second phase of the experimental study (the validation testing phase of the system), metal pins were inserted in the pigs using a blinded study design. The imaging notes and the dissection notes for each of these pins were collected separately and subsequently analyzed by an independent observer. Since these pins were inserted through the psoas muscle and anchored into the bone or the disc, the detection of bone surface and muscle region was also validated using the same pin locations. If a nerve was found within a radius of 1 to 2 mm from the pin, it was considered to be within the vicinity of the pin. Validation was performed via post-euthanasia dissection, visual inspection, and tactile assessment.

RESULTS

The AI-trained ultrasound system detected nerve, psoas muscle, vertebral body surface, and disc space with high sensitivity and specificity. For all the pins, the ultrasound imaging recorded during the surgical approach matched the open dissection and identification of anatomy. Thorough dissection and gross anatomy analyses were performed for each type of tissue: nerve, muscle, bone surface, and disc space.

The ultrasound system identification of anatomical features was deemed successful when the imaging notes matched the dissection notes. For example, if the imaging notes indicated that the user targeted a clear path and then, post-dissection, it was observed that a nerve was present in the vicinity of the pin, then the imaging notes and the dissection notes did not match. This was considered a FN indication from the system with respect to nerve identification. Similarly, if the imaging notes indicated the presence of nerve and the dissection notes also indicated the presence of nerve, then this would be considered a TP.

In the second phase of the study, which tested the trained detection algorithm, approximately 4800 B-mode images were used to estimate the quantitative parameters defining the sensitivity, specificity, and accuracy of the detection algorithm. These 4800 B-mode images were acquired from five pigs, at different locations within the left and right psoas of each pig. The Dice scores from all of the cases for nerve, bone, and muscle are shown in Table 1. The mean Dice score for each tissue type was >80%, indicating that the detected region and the ground truth were >80% similar to each other. The sensitivity for each of the three tissue types was >95%. The mean specificity of nerve detection was 92%; for bone and muscle, it was >95%. The accuracy of nerve detection was >95%.

TABLE 1 - Results (values expressed as percentages)
Tissue Type | Dice Score   | Sensitivity | Specificity  | Accuracy
Nerve       | 83.81 ± 0.20 | 100         | 93.13 ± 0.15 | 96.30 ± 0.10
Bone        | 90.60 ± 0.09 | 100         | 96.42 ± 0.38 | 97.12 ± 0.20
Muscle      | 88.60 ± 0.36 | 100         | 98.61 ± 0.21 | 98.29 ± 0.18

DISCUSSION

The results of this animal study suggest that a combination of image processing and machine learning algorithms can correctly detect different tissue types, such as nerve, muscle, disc space, and vertebral bone surface, in a given B-mode ultrasound image. The tissues can be detected, segmented, classified, and displayed in real-time during the ultrasound scanning process. It was observed that only 40% to 50% of the GPU card memory was used during the real-time detection of the anatomical features. In particular, a key feature of the algorithm is that segmentation through image processing reduces the amount of data fed into the CNN classifier, thus reducing the computational burden. The high sensitivity and specificity indicate the stability of the algorithm.

Machine learning has been previously used for detecting various tissue architectures.32,38,39 To our knowledge, it has not previously been used to enhance image guidance for spine procedures. In a previously conducted clinical study evaluating traditional ultrasound for intraoperative guidance in lateral lumbar spine surgery,28 the investigators used a transvaginal probe to ensure the operative corridor was free of other soft tissues such as kidneys and bowel. These investigators reported that the major vessels anterior to the vertebral body were identified using Doppler mode in all 100 patients in their cohort. They did not identify neural structures within the psoas muscle.

The present study focused on the lateral surgical approach to the lumbar spine. Ultrasound using Doppler mode can provide real-time information concerning the location of surgical instruments relative to critical vascular structures. This could be beneficial in cases where patients have aberrant anatomy or in more complex spine cases, such as procedures in which there is greater concern for vascular structures (e.g., OLIF and more complicated deformity corrections),3,40–47 especially since vascular complications have been reported in the 0.3% to 8.6% range.44,45,48

The study focused on AI algorithms for neural anatomy detection. These algorithms are deep learning methods whose performance improves with training. In other words, the system is self-educating, such that its output (in this case, anatomic feature detection within the psoas and elsewhere) improves with additional data input. One of the study's limitations was that Doppler mode was used only to detect the presence of flow and was not analyzed in a quantitative fashion. Also, this study did not focus on quantifying the potential of the AI-enhanced ultrasound system to reduce radiation exposure. This potential benefit for health care constituents (surgeons, patients, and OR personnel) will be evaluated in subsequent clinical trials.

The present study incorporated a lateral surgical approach that simulated LLIF procedures in a porcine model. AI-enhanced ultrasound imaging provided a real-time spatial map of the critical anatomy present in the surgical field. Most importantly, the real-time intraoperative imaging demonstrated the presence and location of neural structures within the psoas muscle. These nerves are not evident with other intraoperative imaging modalities and are only indirectly and incompletely localized with intraoperative electrical testing. Applied to humans, the real-time guidance provided by this technology should enable a surgeon to quickly identify neural structures and define a safe path to the disc space while minimizing radiation exposure.

Key Points

  • The study evaluated the use of an AI-enabled, real-time intraoperative ultrasound imaging system for localization of nerves and other anatomic structures within and adjacent to the psoas muscle in an in vivo porcine model of LLIF.
  • AI-enhanced ultrasound imaging provided a real-time spatial map of the critical neural anatomy present in the surgical field during lateral spine surgery.
  • This technology is intended to enable a spine surgeon to choose a safe pathway to the lateral lumbar spine.

References

1. Pimenta L, Tohmeh A, Jones D, et al. Rational decision making in a wide scenario of different minimally invasive lumbar interbody fusion approaches and devices. J Spine Surg 2018; 4:142–155.
2. Taba HA, Williams SK. Lateral lumbar interbody fusion. Neurosurg Clin N Am 2020; 31:33–42.
3. Hah R, Kang HP. Lateral and oblique lumbar interbody fusion—current concepts and a review of recent literature. Curr Rev Musculoskelet Med 2019; 12:305–310.
4. Rodgers WB, Gerber EJ, Rodgers JA. Lumbar fusion in octogenarians: the promise of minimally invasive surgery. Spine (Phila Pa 1976) 2010; 35: (26 suppl): S355–S360.
5. Smith WD, Wohns RNW, Christian G, et al. Outpatient minimally invasive lumbar interbody fusion: predictive factors and clinical results. Spine (Phila Pa 1976) 2016; 41:S106–S122.
6. Kolb B, Peterson C, Fadel H, et al. The 25 most cited articles on lateral lumbar interbody fusion: short review. Neurosurg Rev 2020; doi: 10.1007/s10143-020-01243-0.
7. Tohmeh AG, Rodgers WB, Peterson MD, et al. Discrete-threshold electromyography in the extreme lateral interbody fusion approach: clinical article. J Neurosurg Spine 2011; 14:31–37.
8. Rodgers WB, Gerber EJ, Patterson J. Intraoperative and early postoperative complications in extreme lateral interbody fusion: an analysis of 600 cases. Spine (Phila Pa 1976) 2011; 36:26–32.
9. Walker CT, Harrison Farber S, Cole TS, et al. Complications for minimally invasive lateral interbody arthrodesis: a systematic review and meta-analysis comparing prepsoas and transpsoas approaches. J Neurosurg Spine 2019; 30:446–460.
10. Cho SC, Ferrante MA, Levin KH, et al. Utility of electrodiagnostic testing in evaluating patients with lumbosacral radiculopathy: an evidence-based review. Muscle Nerve 2010; 42:276–282.
11. Salzmann SN, Shue J, Hughes AP. Lateral lumbar interbody fusion—outcomes and complications. Curr Rev Musculoskelet Med 2017; 10:539–546.
12. Vaishnav AS, Merrill RK, Sandhu H, et al. A review of techniques, time demand, radiation exposure, and outcomes of skin-anchored intraoperative 3D navigation in minimally invasive lumbar spinal surgery. Spine (Phila Pa 1976) 2020; 45:E465–E476.
13. Godzik J, Mastorakos GM, Nayar G, et al. Surgeon and staff radiation exposure in minimally invasive spinal surgery: prospective series using a personal dosimeter. J Neurosurg Spine 2020. 1–7.
14. Larson AN, Schueler BA, Dubousset J. Radiation in spine deformity: state-of-the-art reviews. Spine Deform 2019; 7:386–394.
15. Sembrano JN, Yson SC, Theismann JJ. Computer navigation in minimally invasive spine surgery. Curr Rev Musculoskelet Med 2019; 12:415–424.
16. Wong Y-S, Lai KK-L, Zheng Y-P, et al. Is radiation-free ultrasound accurate for quantitative assessment of spinal deformity in idiopathic scoliosis (IS): a detailed analysis with EOS radiography on 952 patients. Ultrasound Med Biol 2019; 45:2866–2877.
17. Pennington Z, Cottrill E, Westbroek EM, et al. Evaluation of surgeon and patient radiation exposure by imaging technology in patients undergoing thoracolumbar fusion: systematic review of the literature. Spine J 2019; 19:1397–1411.
18. Singh R, Culjat M. Medical ultrasound devices (Chapter 14). In: Culjat M, Singh R, Lee H, eds. Medical Devices: Surgical and Image-Guided Technologies. Hoboken, NJ: John Wiley & Sons, Inc; 2013:303–339.
19. Powles AE, Martin DJ, Wells IT, et al. Physics of ultrasound. Anaesth Intensive Care Med 2018; 19:202–205.
20. Shriki J. Ultrasound physics. Crit Care Clin 2014; 30:1–24.
21. Newman PG, Rozycki GS. The history of ultrasound. Surg Clin North Am 1998; 78:179–195.
22. Auguste C. Medical Ultrasound Devices: Technologies and Global Markets. 2013; (January). Available at: http://www.bccresearch.com.offcampus.lib.washington.edu/market-research/instrumentation-and-sensors/medical-ultrasound-devices-technology-markets-ias040a.html.
23. Zhai X, Cui J, Shao J, et al. Global research trends in spinal ultrasound: a systematic bibliometric analysis. BMJ Open 2017; 7:e015317.
24. Vasudeva VS, Abd-El-Barr M, Pompeu YA, et al. Use of intraoperative ultrasound during spinal surgery. Glob Spine J 2017; 7:648–656.
25. Marshburn TH, Hadfield CA, Sargsyan AE, et al. New heights in ultrasound: first report of spinal ultrasound from the International Space Station. J Emerg Med 2014; 46:61–70. Available at: https://www.sciencedirect.com/science/article/pii/S0736467913008871?via%3Dihub. Accessed October 12, 2018.
26. Kufta JM, Dulchavsky SA. Medical care in outer space: a useful paradigm for underserved regions on the planet. Surgery 2013; 154:943–945.
27. Ahmed AS, Ramakrishnan R, Ramachandran V, et al. Ultrasound diagnosis and therapeutic intervention in the spine. J Spine Surg (Hong Kong) 2018; 4:423–432.
28. Nojiri H, Miyagawa K, Yamaguchi H, et al. Intraoperative ultrasound visualization of paravertebral anatomy in the retroperitoneal space during lateral lumbar spine surgery. J Neurosurg Spine 2019; 31:334–337.
29. Teh J. Applications of Doppler imaging in the musculoskeletal system. Curr Probl Diagn Radiol 2006; 35:22–34.
30. Kusmuk KN, Schook LB. Pigs as a model for biomedical sciences. In: The Genetics of the Pig. 2nd ed. 2011:426–444.
31. Gueziri H-E, Drouin S, Yan CXB, et al. Toward real-time rigid registration of intra-operative ultrasound with preoperative CT images for lumbar spinal fusion surgery. Int J Comput Assist Radiol Surg 2019; 14:1933–1943.
32. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Vol 9451. Springer Verlag; 2015:234–241.
33. Salehi M, Prevost R, Moctezuma JL, et al. Precise ultrasound bone registration with learning-based segmentation and speed of sound calibration. In: Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Vol 10434 LNCS. Springer Verlag; 2017:682–690. doi: 10.1007/978-3-319-66185-8_77.
34. Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. IEEE Trans Pattern Anal Mach Intell 2016; 39:640–651.
35. Abadi M, Agarwal A, Barham P, et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. Available at: https://research.google/pubs/pub45166/. Published 2015. Accessed April 29, 2020.
36. Fan G, Liu H, Wu Z, et al. Deep learning–based automatic segmentation of lumbosacral nerves on CT for spinal intervention: A translational study. Am J Neuroradiol 2019; 40:1074–1081.
37. Mazurowski MA, Buda M, Saha A, et al. Deep learning in radiology: an overview of the concepts and a survey of the state of the art with focus on MRI. J Magn Reson Imaging 2019; 49:939–954.
38. Shelhamer E, Long J, Darrell T. Fully convolutional networks for semantic segmentation. IEEE Trans Pattern Anal Mach Intell 2017; 39:640–651.
39. Nogata F, Yokota Y, Kawamura Y, et al. Towards the application of AI technology to assess risk of aneurysm rupture based on medical imaging. Int J Comput Inf Technol 2019; 08:121–130.
40. Daniels AH, Reid DBC, Tran SN, et al. Evolution in surgical approach, complications, and outcomes in an adult spinal deformity surgery multicenter study group patient population. Spine Deform 2019; 7:481–488.
41. Mundis GM, Turner JD, Kabirian N, et al. Anterior column realignment has similar results to pedicle subtraction osteotomy in treating adults with sagittal plane deformity. World Neurosurg 2017; 105:249–256.
42. Hosseini P, Mundis GM, Eastlack RK, et al. Preliminary results of anterior lumbar interbody fusion, anterior column realignment for the treatment of sagittal malalignment. Neurosurg Focus 2017; 43:E6.
43. Saigal R, Mundis GM, Eastlack R, et al. Anterior column realignment (ACR) in adult sagittal deformity correction. Spine (Phila Pa 1976) 2016; 41:s66–s73.
44. Quillo-Olvera J, Lin G-X, Jo H-J, et al. Complications on minimally invasive oblique lumbar interbody fusion at L2-L5 levels: a review of the literature and surgical strategies. Ann Transl Med 2018; 6:101.
45. Hijji FY, Narain AS, Bohl DD, et al. Lateral lumbar interbody fusion: a systematic review of complication rates. Spine J 2017; 17:1412–1419.
46. Godzik J, Walker CT, Whiting AC, et al. Release of anterior longitudinal ligament in setting of unfavorable vascular anatomy for anterior column realignment-technical note: 2-dimensional operative video. Oper Neurosurg 2020; 19:E189.
47. Joseph JR, Smith BW, Marca L, et al. Comparison of complication rates of minimally invasive transforaminal lumbar interbody fusion and lateral lumbar interbody fusion: a systematic review of the literature. Neurosurg Focus 2015; 39:E4. doi: 10.3171/2015.7.FOCUS15278.
48. Xu DS, Walker CT, Godzik J, et al. Minimally invasive anterior, lateral, and oblique lumbar interbody fusion: a literature review. Ann Transl Med 2018; 6:104.
Keywords:

artificial intelligence; image guidance; lateral spine surgery; neural anatomy; porcine model; psoas muscle; ultrasound

Copyright © 2020 The Author(s). Published by Wolters Kluwer Health, Inc.