Advanced 3-Dimensional Planning in Neurosurgery
Ferroli, Paolo MD*,‡; Tringali, Giovanni MD*,‡; Acerbi, Francesco MD, PhD‡; Schiariti, Marco MD‡; Broggi, Morgan MD‡; Aquino, Domenico§; Broggi, Giovanni MD‡
‡Department of Neurosurgery
§Neuroradiology Unit, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milano, Italy
Correspondence: Paolo Ferroli, MD, Department of Neurosurgery, Fondazione IRCCS Istituto Neurologico Carlo Besta, Via Celoria 11, 20133, Milano, Italy. E-mail: Paolo.Ferroli@istituto-besta.it
* These authors have contributed equally to this article.
Received June 15, 2012
Accepted August 29, 2012
During the past decades, medical applications of virtual reality technology have been developing rapidly, ranging from a research curiosity to a commercially and clinically important area of medical informatics and technology. With the aid of new technologies, the user is able to process large data sets to create accurate and almost realistic reconstructions of anatomic structures and related pathologies. As a result, a 3-dimensional (3-D) representation is obtained, and surgeons can explore the brain for planning or training. Further improvements such as a feedback system increase the interaction between users and models by creating a virtual environment. Its use for advanced 3-D planning in neurosurgery is described. Different systems of medical image volume rendering have been used and analyzed for advanced 3-D planning: 1 is a commercial “ready-to-go” system (Dextroscope, Bracco, Volume Interaction, Singapore), whereas the others are open-source-based software (3-D Slicer, FSL, and FreeSurfer). Different neurosurgeons at our institution found that advanced 3-D planning before surgery facilitated and increased their understanding of the complex anatomic and pathological relationships of the lesion. They all agreed that the preoperative experience of virtually planning the approach was helpful during the operative procedure. Virtual reality for advanced 3-D planning in neurosurgery has achieved considerable realism as a result of the available processing power of modern computers. Although it has been found useful for facilitating the understanding of complex anatomic relationships, further effort is needed to increase the quality of the interaction between the user and the model.
ABBREVIATIONS: DICOM, digital imaging and communications in medicine
DTI, diffusion tensor imaging
PACS, picture archiving and communication system
VR, virtual reality
Virtual reality (VR) is a computer-generated 3-dimensional (3-D) environment that provides real-time interactivity for the user. Since its beginning, the term VR has been used in a variety of ways that are often confusing and misleading. On a computer, VR is experienced primarily through 2 of the 5 senses: sight and hearing. The simplest form of VR is a 3-D image that can be interactively explored on a personal computer, usually by manipulating keys or the mouse so that the content of the image moves in some direction or zooms in or out. More sophisticated systems involve the use of a headset as a display, together with haptic devices. As a result of the decrease in the price of high-performance computers, this technology is becoming widespread and is developing rapidly. The form of VR in which the user interacts with objects by means of headsets, gloves, and suits is called immersive VR.
During the past decades, medical applications of VR technology have been developing rapidly, ranging from a research curiosity to a commercially and clinically important area of medical informatics and technology. In 1992, computer science professor Henry Fuchs and 2 graduate students superimposed ultrasonic images of a fetus onto a video image of a pregnant woman’s abdomen to provide an accurate and unique perspective for guiding physicians as they inserted and manipulated probes in the body.1 The following year (1993), gastroenterologist J.S. Bladen and his team of surgeons2 developed an imaging technique that allowed a physician to use computer-generated images of the patient’s tissues to guide the performance of a colonoscopy. These first reports represent the appearance of VR applications in medicine. Since then, advances in computer processing capabilities and hardware manufacturing have led to widespread use of VR and its implementations such as augmented-reality systems. The latter allows one to superimpose 3-D reconstruction of anatomic structures on a surface (eg, patient’s skin or skull), improving the interactions between the initial surgical planning and intraoperative neuronavigation.3 The use of different software and hardware available for advanced 3-D planning in neurosurgery is described, with the aim of shedding some light on its advantages and limits.
DATA ACQUISITION AND ELABORATION
Advanced 3-D surgical planning involves the collection of different data sets of images with anatomic and physiological information. Computer graphics techniques (rendering and modeling) are then used to display those data as (part of) a virtual body so that they can be examined and manipulated. As surgery becomes more customized and patient specific, it is becoming heavily dependent on patient data. Surgical planning is the result of the surgeon’s interaction with models of patient anatomy. In training, the surgeon operates on a model built from patient data that must be as accurate as possible. The current technology allows the examination and elaboration of 3-D models built from large sets of 2-dimensional (2-D) “slices” from different sources (magnetic resonance imaging [MRI], computed tomography [CT], digital subtraction angiography, etc). One of the many skills that must be developed by brain surgeons is the spatial reconstruction and integration of 2-D pictures within the anatomic reality. Furthermore, the surgeon should be able to reconstruct in his/her mind how different information originating from different imaging modalities is spatially interconnected. One of the main advantages of computer elaboration, and particularly of a VR environment, is that it allows anatomic, metabolic, and functional data from different sources (CT, MRI, magnetic resonance angiography, x-ray, positron emission tomography, functional MRI, diffusion tensor imaging [DTI]) to be combined (or registered together) in the same 3-D space. This 3-D VR representation can be examined in detail, shared and discussed with others, and related precisely to physical reality. Nowadays, commercial software/workstations that automatically complete the above-mentioned procedures are available as “low-cost” technologies.
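As a minimal illustration of the registration idea described above, the sketch below applies a hypothetical 4 x 4 rigid-body transform (rotation plus translation, expressed in homogeneous coordinates) to map a point from one modality's coordinate frame into a common 3-D space. The matrix values and coordinates are illustrative only, not taken from any real scanner or registration algorithm.

```python
import math

def rigid_transform(theta_deg, tx, ty, tz):
    """Build a 4x4 homogeneous matrix: rotation about z plus translation."""
    t = math.radians(theta_deg)
    c, s = math.cos(t), math.sin(t)
    return [
        [c, -s, 0.0, tx],
        [s,  c, 0.0, ty],
        [0.0, 0.0, 1.0, tz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def apply(matrix, point):
    """Map a 3-D point (x, y, z) through the transform into the common space."""
    x, y, z = point
    p = (x, y, z, 1.0)  # homogeneous coordinates
    return tuple(sum(matrix[r][c] * p[c] for c in range(4)) for r in range(3))

# Illustrative only: carry a CT-space landmark into the MRI reference frame.
ct_to_mri = rigid_transform(theta_deg=90.0, tx=10.0, ty=-5.0, tz=2.0)
print(apply(ct_to_mri, (1.0, 0.0, 0.0)))
```

Real coregistration software estimates such a matrix automatically (eg, by mutual-information optimization); the point here is only that, once estimated, one matrix per modality places every data set in the same 3-D space.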
We have had experience with some of them, specifically the Dextroscope, the applications of which have been discussed in previous reports, and desktop-oriented VR environments (3-D Slicer, FSL, and FreeSurfer).3-9 Regarding VR systems, several engineering hurdles must be overcome to fully simulate reality in an effective way. The resolution of the video display must be high enough, with fast enough refresh and update rates, to allow scenes to look and change as they do in real life. The field of view must be wide enough and the lighting and shadows realistic enough to maintain the illusion of a real scene. For simulations, reproducing the sensations of touch and motion is particularly critical. Although VR technology is expensive and requires frequent updates in software and hardware, there are several low-cost options based on open-source software. All these applications have evolved considerably from their origin in the 1990s, becoming more user friendly. The choice of software is dictated by factors such as the platforms available to run it (most run on Unix or Unix-like machines) and, most important, the local expertise. In fact, although commercial workstations usually have a more familiar graphical interface and more automated processing, open-source software has a more rudimentary graphical user interface and often requires some knowledge of the Unix command line and scripting. One of the most reliable is 3-D Slicer, an open-source application for displaying medical data, originally developed by the Surgical Planning Laboratory at Brigham and Women’s Hospital and the Massachusetts Institute of Technology Artificial Intelligence Laboratory in 1998.10-12 The visualization of patient data involves the combination of different data sets collected in several geometric locations into a single scene and the interactive exploration of that scene.
The scene is created from a range of volume data sets, surface models derived from them, and transformations derived from 3-D registrations of both volumes and models. Volume rendering works directly from the available scan images, so the quality of data during acquisition will affect the result of the rendering.
All the available software, either commercial or open source, needs a volumetric acquisition to process the data sets. In our institution, to minimize coregistration errors, all volumes are acquired with the same field of view. A picture archiving and communication system (PACS)/radiology information system allows fast transmission of these data through a dedicated network and allows query-retrieve functions. The Dextroscope workstation automatically aligns each data set; the user needs only to select the appropriate windowing level to obtain the desired surface. This process is very fast and requires little training. All the operations, except the windowing, are done under stereoscopic vision in a virtual environment in which the user moves the data set, floating in space, with the aid of a stylus and a holder. The advantages of a virtual environment derive mainly from the segmentation capability of the workstation. Its algorithm allows manual segmentation, and with the use of the stylus, it is possible to draw the contour of the relevant structure that one wants to render. It is up to the user to choose the number of structures to contour, and at the end of the process, each one is already aligned to the entire volume. Then, each object can be visualized partially or completely transparent and oriented according to the viewer’s choice.
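The windowing step described above (selecting an intensity range so that only the desired surface is rendered) amounts to intensity thresholding of the volume. A minimal sketch follows; the voxel values and window limits are synthetic and purely illustrative.

```python
def window_mask(volume, low, high):
    """Binary mask of voxels whose intensity falls inside the window [low, high]."""
    return [[[1 if low <= v <= high else 0 for v in row]
             for row in slab] for slab in volume]

# Synthetic 2x2x3 volume of Hounsfield-like intensities (illustrative values).
volume = [
    [[40, 500, 1200], [30, 480, 900]],
    [[25, 510, 1100], [35, 495, 950]],
]
# A "bone" window: keep only high-intensity voxels for surface rendering.
bone = window_mask(volume, low=700, high=2000)
print(bone)
```

In a real workstation the mask feeds the surface-extraction and rendering stages; the quality of the rendered surface therefore depends directly on the chosen window, as noted above.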
The digital imaging and communications in medicine (DICOM) standard for data acquired with a clinical imaging device is very broad and complex. All major manufacturers of medical imaging equipment have so-called DICOM conformance statements that explicitly state how their hardware implements the standard. Because most analysis packages cannot work directly with the original DICOM data, to work with complex data sets (such as functional MRI or DTI), it is necessary to use different medical imaging formats. The simplest, most compact, and most versatile is the Neuroimaging Informatics Technology Initiative (NIfTI) image format. This is the reason why the analysis of neuroimaging data often starts with converting from the vendor-specific format (Figure 1). The strength of open-source software is its capability to work effectively with this broad spectrum of formats, producing a result that can be exported to the final repository (neuronavigation device or PACS) with a degree of accuracy greater than that of automatic systems. The drawback is that, compared with the available commercial workstations, the learning curve is longer and requires more computer skills. The best results are obtained with the aid of a dedicated computer scientist team, shifting modern neurosurgery from a single-person job toward a job for a multidisciplinary team.
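The conversion step is vendor- and tool-dependent, but its core operation is always the same: the scanner's per-slice files are sorted by their spatial position and stacked into a single volume. The toy sketch below uses hypothetical slice records; real code would read DICOM headers such as ImagePositionPatient with a dedicated library rather than plain dictionaries.

```python
def stack_slices(slices):
    """Sort 2-D slices by their z position and stack them into one 3-D volume."""
    ordered = sorted(slices, key=lambda s: s["z"])
    return [s["pixels"] for s in ordered]

# Hypothetical slices arriving from the scanner out of acquisition order.
slices = [
    {"z": 2.0, "pixels": [[3, 3], [3, 3]]},
    {"z": 0.0, "pixels": [[1, 1], [1, 1]]},
    {"z": 1.0, "pixels": [[2, 2], [2, 2]]},
]
volume = stack_slices(slices)
print([v[0][0] for v in volume])  # slice order after stacking: [1, 2, 3]
```

Formats such as NIfTI store exactly this kind of stacked array, together with an affine matrix that maps voxel indices back to scanner coordinates.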
Traditional surgical planning uses volumetric information stored in a stack of intensity-based images, usually from CT and/or MRI scanners. Surgeons can view these images using specific 2-D image viewers. Using a number of these image slices, surgeons build their own mental 3-D model of the anatomy and pathology (ie, tumor and its neurovascular relationships). This task can be difficult, even for experienced surgeons, because pathological processes can modify the anatomy and increase interindividual anatomic variability. As a consequence, surgeons can miss important information or draw incorrect conclusions that can lead to suboptimal treatment strategy decisions. The use of 3-D visualization based on segmentation of strategic structures can improve surgeons’ understanding of the complex pathological relationships between vital structures and surgical targets (Figure 2).9,13-17
In the medical context, however, presenting 3-D visualization on a conventional workstation is insufficient. Surgical planning is inherently a 3-D–oriented task, and 2-D input devices such as a keyboard and a mouse are suboptimal. Using such interfaces for simple tasks such as object selection or distance measurement can be acceptable. For more complex interactions such as specifying a deformable plane for simulating a resection, the limits of 2-D input devices are obvious. With the advent of minimally invasive neurosurgery, the exposure should be planned to minimize the invasiveness of surgical procedures while increasing accuracy and safety.4,8,18,19 Obviously, this can be achieved only with a high degree of spatial orientation. Computer graphics techniques enable the surgeon to preoperatively visualize and memorize the spatial relationships between the lesion and the surrounding structures as seen from different points of view, thus facilitating intraoperative recognition.20-24
In our institution, we have had the opportunity to use VR systems since 2006.4 A number of cases were studied preoperatively and postoperatively in different fields of neurosurgery (vascular, oncology, functional). Six (P.F., G.T., F.A., M.S., M.B., G.B.) of 23 neurosurgeons in our department experienced different software/workstations (Dextroscope, 3-D Slicer, FSL, and FreeSurfer). They all agreed that the preoperative experience of virtually planning the approach was helpful during the operative procedure. For the Dextroscope, a dedicated workstation was installed in a specific room adjacent to the operating block; the other systems were available on laptop computers. From our previous experience with the use of the Dextroscope,4 we can estimate that approximately 10% to 15% of the cases in our department are now evaluated preoperatively or postoperatively with the aid of these VR systems. The majority are aneurysms and tumors.
Vascular Surgery: Aneurysms
The clinical application of 3-D imaging technology represents a powerful advancement in the diagnosis and treatment of patients with vascular disease.17,19,22 Currently, 3-D imaging is used during invasive angiographic procedures for the diagnosis and endovascular treatment of vascular pathologies. This helps to overcome many of the limits of traditional 2-D angiography, which relies heavily on the experience of the operator and on the capability to elaborate 2-D images coming from unconventional points of view to clarify the spatial relationships of a specific lesion. This “flattening” of a 3-D volume results in the generation of a final mental model that can be limited and inaccurate. Thus, 3-D angiography provides an objective, operator-independent method of acquiring more angiographic information to guide clinical decisions. Cerebral saccular aneurysms show a substantial amount of variation in both shape and size. Aneurysm size is the index most commonly used to predict rupture, but shape is increasingly recognized as a useful prognostic factor. An appropriate morphological 3-D characterization may thus become vital for the evaluation of the risk of rupture. Apart from the diagnostic value, the possibility of merging data obtained from vascular (CT angiography, 3-D rotational angiography) and anatomic (MRI) studies allows both an evaluation of the boundaries of complex aneurysms and an investigation of their spatial relationships with the surrounding structures. Advanced 3-D planning enables the surgeon to accurately choose the best trajectory of view for aneurysm dissection and clip positioning (Figure 2). In addition, the postoperative evaluation of the reconstructed 3-D clip, if fused with preoperative images, offers the chance to study in detail its relationships with the aneurysm and can give an idea of the quality of clipping.
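One widely used shape index of the kind mentioned above is the aspect ratio, the perpendicular dome height divided by the neck width, both of which can be measured directly on a reconstructed 3-D model. The sketch below uses illustrative measurements, not data from any actual case.

```python
def aspect_ratio(dome_height_mm, neck_width_mm):
    """Aneurysm aspect ratio: perpendicular dome height over neck width."""
    if neck_width_mm <= 0:
        raise ValueError("neck width must be positive")
    return dome_height_mm / neck_width_mm

# Illustrative measurements taken on a reconstructed saccular aneurysm.
ar = aspect_ratio(dome_height_mm=7.2, neck_width_mm=3.0)
print(round(ar, 2))  # 2.4
```

Higher aspect ratios are generally associated in the literature with greater rupture risk, which is why an accurate 3-D reconstruction, rather than a single 2-D projection, matters for measuring both quantities.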
Vascular Surgery: Microvascular Decompression for Trigeminal Neuralgia and Other Cranial Nerve Irritative Syndromes
Although microvascular decompression may seem not to have a special need for planning, the possibility of using such a virtual reconstruction of the nerve and of the compressing vessel or vessels enabled surgeons to evaluate preoperatively the spatial relationships of all objects in the surgical field (petrosal veins, tentorium, petrous bone with suprameatal tubercle, trigeminal nerve, anterior inferior cerebellar artery, superior cerebellar artery, VII-VIII complex).21,24 In addition, in the diagnostic phase, it was possible to work in an immersive way, with 6 degrees of freedom, increasing the likelihood of identifying the conflict compared with a simple 2-D analysis (Figure 3). The preoperative planning involves head positioning and the analysis and recognition of bony landmarks for perfect retrosigmoid minicraniectomy placement. Measurements can be obtained, and each segment of the cranial nerves and vessels can be explored from different points of view. It is possible to simulate both the magnified view of the microscope and the view of the endoscope. Thus, especially in cases of complex conflicts, it was possible to accurately preplan the safest strategy for mobilizing and fixing the conflicting vessels.
One of the goals of brain surgery is to avoid damage to eloquent cortex, subcortical white matter, vessels, and nerves. Advanced 3-D preoperative planning enables the surgeon to identify anatomic landmarks and critical structures (eg, large vessels crossing the path of the operating microscope or critical cranial nerves) and to determine the optimal head positioning.7,9,14,15,20 It is during this planning session that the surgical approach is optimized by adapting the general surgical plan and standard approach to the individual patient’s anatomy. MRI allows general brain anatomy and tumor visualization; CT scans are superior for picturing bony structures; functional MRI depicts eloquent areas (such as speech and motor areas); positron emission tomography shows metabolic activity; digital subtraction angiography provides high-quality imaging of vessel shape and course; and DTI remains the only noninvasive method capable of segmenting the subcortical course of white matter tracts. Nevertheless, the mental reconstruction and combination of all these data sets into a correct 3-D understanding by simple slice-by-slice analysis can be very difficult, even for the most experienced surgeon. Tumors represent a particular field of concern in 3-D visualization (Figure 4). Accurate and reproducible segmentation, in fact, is still a challenging and difficult task because of the variety of signal intensities/densities, shapes, locations, and relationships of the various types of tumors. The easiest case occurs when the tumor is a solid, homogeneous, unequivocally defined mass. Unfortunately, tumor definition is more often complicated by associated edema or necrosis that changes the image intensity at the boundaries. Simple thresholding or morphological techniques may affect the result of the rendering by overestimating or underestimating the real form of the lesion or by excluding important information such as cranial nerves or vessel encasement.
This is one of the main reasons why, although automatic segmentation algorithms have been elaborated, accurate planning still relies on manual segmentation by an “expert” operator, which is, of course, inevitably subjective.25-28 Although the automated systems already available, especially for radiosurgery or functional planning, offer an acceptable result, they miss some important information necessary to the surgeon. When a bundle of fibers is studied by DTI analysis, it is important to know whether it is displaced and where. Accurate analysis and validation of displacement depend on the method used to process the DTI data. Two approaches are now available: deterministic tractography methods, which are faster but do not incorporate the uncertainty associated with image distortions, image noise, and fiber crossing, and probabilistic approaches, which are time- and hardware-consuming but compute the distribution of fiber pathways emanating from each seed point and assign a confidence level to a specific trajectory.29 The method used by many commercial neuronavigation systems is deterministic, so they are faster and less operator dependent but less accurate than ad hoc software that can process probabilistic data. One compromise is the possibility of importing raw data directly from the PACS and processing them with external software. The fibers are then plotted on an anatomic volumetric data set (ie, T1-weighted MR images with IV contrast administration) and exported into the neuronavigation system. The disadvantages of this procedure are that it is time-consuming and that the fibers are superimposed as a drawing on the images visualized on the neuronavigation screen.
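The deterministic approach described above can be sketched in a few lines: from a seed point, the tracker repeatedly steps along the locally dominant fiber direction until the streamline leaves the volume or the field gives no reliable orientation. The 2-D direction field below is synthetic (a uniform left-to-right "tract"); real implementations derive the per-voxel direction from the principal eigenvector of the diffusion tensor and use subvoxel interpolation rather than nearest-voxel lookup.

```python
def track(seed, directions, step=0.5, max_steps=20):
    """Euler integration of a streamline through a per-voxel direction field."""
    x, y = seed
    path = [(x, y)]
    for _ in range(max_steps):
        i, j = int(round(x)), int(round(y))  # nearest-voxel lookup
        if not (0 <= i < len(directions) and 0 <= j < len(directions[0])):
            break  # streamline left the volume
        d = directions[i][j]
        if d is None:
            break  # no reliable orientation (eg, low anisotropy)
        x, y = x + step * d[0], y + step * d[1]
        path.append((x, y))
    return path

# Synthetic 4x4 field: every voxel points along +x (a uniform horizontal tract).
field = [[(1.0, 0.0) for _ in range(4)] for _ in range(4)]
streamline = track(seed=(0.0, 1.0), directions=field, step=0.5)
print(len(streamline), streamline[-1])
```

A probabilistic tracker would instead draw many directions per voxel from an orientation distribution and launch many such streamlines per seed, which is exactly why it is slower but able to attach a confidence level to each pathway.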
Localization and targeting of depth or cortical electrodes in specific regions of the human brain are critical for accurate clinical diagnosis (eg, epilepsy), treatment (eg, movement disorders), and neurophysiological research. To identify the exact electrode location after implantation, current computational alignment methods allow registration of postimplantation CT scans and preimplantation MRIs with millimetric accuracy.30 Thus, the limits of a postoperative CT scan alone, with its image artifacts caused by the presence of the electrodes, are overcome. MRI also is affected by distortion. In addition, limitations exist in the head coil and magnetic field used to avoid electrode tip heating, which is theoretically possible if the body coil is used to transmit radiofrequency power.31,32 Advanced 3-D preoperative planning and postoperative analysis not only contribute to the safety and precision of electrode placement but also enable the recognition of the electrode course and position in the 3-D model and, as a result, provide an accurate 3-D visualization of the anatomic position of each contact (Figures 5 and 6). A precise functional and structural map of the patient’s brain is thus created and used to choose which contact of the electrode should be activated to achieve the most beneficial response.
Traditional methods of medical education have not radically changed over the last decades. The use of older, more experienced surgeons to train younger apprentices dates back to ancient Egyptian medical practice. Galen promulgated the concept of learning from cadaver dissection in the 2nd century AD; revived in the late Middle Ages, this system is still used today. Abraham Flexner and William Stewart Halsted, in separate reports in the early 20th century, defined the structure of medical education by introducing the structure of medical schools and the concept of residency, respectively. The Halstedian model of medical education established stepwise, time-based postgraduate training, with patients serving as part of the teaching materials during prolonged periods in the hospital. In this way, residents gradually developed greater independence and responsibility under faculty supervision. More recently, Fitts and Posner33 proposed a model of the 3 stages of motor skill acquisition. According to this model, motor skills are learned sequentially. The first stage is a cognitive stage during which the learner forms a concept of executing the motor task. The second stage is an associative stage during which the learner begins to connect the individual movements into a smoother whole, and performance becomes less error prone and less halting. Practice facilitates this stage. The third stage is an autonomous stage that, when attained, allows performance of the motor task with little conscious thought or attention.
Currently, the high level of assistance required in the operating room and ethical-legal implications reduce the opportunities for this type of teaching. New technologies should help to fill the gap between limited access to the patient and the need for training. However, a technical simulator is most useful when there is a clear understanding of the purpose of the simulation experience. It allows one to improve spatial coordination and analysis; furthermore, it helps to validate knowledge acquired in surgical planning by allowing one to try new approaches, to evaluate the field of view provided, to make mistakes, and to be prepared to compare what has been simulated with what will occur during surgery. Studies conducted on airline pilots showed that they have the same biologic responses when they practice on aircraft simulators as when they fly. Indeed, pilots have to be certified in flight simulation before they can fly commercially.34-37 Aviation-derived teaching approaches such as the briefing-intraoperative teaching-debriefing model can be used to facilitate intraoperative learning processes.
Of course, one of the main limits of any VR system available is the impossibility of mimicking the complex dynamics that occur during a surgical procedure. When the skull is opened, the positions of anatomic structures change because of brain swelling and cerebrospinal fluid egression. In brain tumor surgery, further changes occur as tissue is removed.
Despite the above-mentioned issues, VR and computer imaging may allow the creation of a procedure-based technical skills curriculum. Training in a safe environment enables the integration of knowledge and judgment into the technical skills already learned during the formal residency: at lower levels, by teaching the basic skills necessary in general settings and normal microsurgical anatomy, and then by presenting more complex and challenging tasks under the supervision of a tutor. Training is complete when the predefined benchmark levels of skill have been achieved. Progression to the procedural stage of the training program entails repeated sessions until benchmark levels have been attained.
“Virtual reality is not as good a teacher as the real thing, except that no one actually suffers.”38
Advanced 3-D planning provides useful information that can facilitate the learning curve of spatial orientation in neurosurgery. Its use covers all the main fields of vascular, functional, and oncologic brain surgery and contributes to the development of a kind of surgery that is more patient specific and minimally invasive, not because of the size of the craniotomy but because of the critical tissues spared. Although simulations have achieved considerable realism because of the available processing power of modern computers, more effort is needed to increase the quality of interaction and the haptic feedback.39-41 Touch poses an especially formidable challenge. For some uses, sensors can record the movements of the user’s hand and provide tactile feedback, but the result is still too different from real surgery to be used for the simulation of any single surgical procedure. Although virtual experiences in neurosurgery are still far from the level of realism provided by simulators in fields such as military and civil aviation, further research, including the creation of realistic 3-D models for surgical simulation, is warranted.
The authors have no personal financial or institutional interest in any of the drugs, materials, or devices described in this article.
1. Bajura M, Fuchs H, Ohbuchi R. Merging virtual objects with the real world: seeing ultrasound imagery within the patient. In: Proceedings of SIGGRAPH ’92 (Chicago, IL, July 26-31, 1992). Computer Graphics. 1992;26(2):203–210.
2. Bladen JS, Anderson AP, Bell GD, Rameh B, Evans B, Heatley DJ. Non-radiological technique for three-dimensional imaging of endoscopes. Lancet. 1993;341(8847):719–722.
3. Kockro RA, Stadie A, Schwandt E, et al. A collaborative virtual reality environment for neurosurgical planning and training. Neurosurgery. 2007;61(5 suppl 2):S379–S391.
4. Ferroli P, Tringali G, Acerbi F, Aquino D, Franzini A, Broggi G. Brain surgery in a stereoscopic virtual reality environment: a single institution’s experience with 100 cases. Neurosurgery. 2010;67(3 suppl operative):ons79–ons84.
5. Kockro RA, Serra L, Tseng-Tsai Y, et al. Planning and simulation of neurosurgery in a virtual reality environment. Neurosurgery. 2000;46(1):118–135.
6. Kockro RA, Tsai YT, Ng I, et al. Dex-ray: augmented reality neurosurgical navigation with a handheld video probe. Neurosurgery. 2009;65(4):795–807.
7. Gu SX, Yang DL, Cui DM, et al. Anatomical studies on the temporal bridging veins with Dextroscope and its application in tumor surgery across the middle and posterior fossa. Clin Neurol Neurosurg. 2011;113(10):889–894.
8. Reisch R, Stadie A, Kockro RA, Hopf N. The keyhole concept in neurosurgery [published online ahead of print February 10, 2012]. World Neurosurg. doi:10.1016/j.wneu.2012.02.024.
9. Kockro RA, Hwang PY. Virtual temporal bone: an interactive 3-dimensional learning aid for cranial base surgery. Neurosurgery. 2009;64(5 suppl 2):216–229.
10. Pieper S, Lorensen B, Schroeder W, Kikinis R. The NA-MIC Kit: ITK, VTK, pipelines, grids and 3-D slicer as an open platform for the medical image computing community. Proc IEEE Biomed Imaging. 2006;1:698–701.
11. Pieper S, Halle M, Kikinis R. 3D Slicer. Proc IEEE Biomed Imaging. 2004;1:632–635.
12. Gering DT, Nabavi A, Kikinis R, et al. An integrated visualization system for surgical planning and guidance using image fusion and interventional imaging. Int Conf Med Image Comput Comput Assist Interv. 1999;2:809–819.
13. Wang SS, Xue L, Jing JJ, Wang RM. Virtual reality surgical anatomy of the sphenoid sinus and adjacent structures by the transnasal approach. J Craniomaxillofac Surg. 2012;40(6):494–499.
14. Oishi M, Fukuda M, Ishida G, Saito A, Hiraishi T, Fujii Y. Presurgical simulation with advanced 3-dimensional multifusion volumetric imaging in patients with skull base tumors. Neurosurgery. 2011;68(1 suppl operative):188–199.
15. Qiu TM, Zhang Y, Wu JS, et al. Virtual reality presurgical planning for cerebral gliomas adjacent to motor pathways in an integrated 3-D stereoscopic visualization of structural MRI and DTI tractography. Acta Neurochir (Wien). 2010;152(11):1847–1857.
16. Malone HR, Syed ON, Downes MS, D’Ambrosio AL, Quest DO, Kaiser MG. Simulation in neurosurgery: a review of computer-based simulation environments and their surgical applications. Neurosurgery. 2010;67(4):1105–1116.
17. Kimura T, Morita A, Nishimura K, et al. Simulation of and training for cerebral aneurysm clipping with 3-dimensional models. Neurosurgery. 2009;65(4):719–725.
18. Stadie AT, Kockro RA, Serra L, et al. Neurosurgical craniotomy localization using a virtual reality planning system versus intraoperative image-guided navigation. Int J Comput Assist Radiol Surg. 2011;6(5):565–572.
19. Fischer G, Stadie A, Schwandt E, et al. Minimally invasive superficial temporal artery to middle cerebral artery bypass through a minicraniotomy: benefit of three-dimensional virtual reality planning using magnetic resonance angiography. Neurosurg Focus. 2009;26(5):E20.
20. Zele T, Matos B, Knific J, Bajrović FF, Prestor B. Use of 3D visualisation of medical images for planning and intraoperative localisation of superficial brain tumours: our experience. Br J Neurosurg. 2010;24(5):555–560.
21. Du ZY, Gao X, Zhang XL, Wang ZQ, Tang WJ. Preoperative evaluation of neurovascular relationships for microvascular decompression in the cerebellopontine angle in a virtual reality environment. J Neurosurg. 2010;113(3):479–485.
22. Nakagawa I, Kurokawa S, Tanisaka M, Kimura R, Nakase H. Virtual surgical planning for superficial temporal artery to middle cerebral artery bypass using three-dimensional digital subtraction angiography. Acta Neurochir (Wien). 2010;152(9):1535–1540.
23. Ito E, Fujii M, Hayashi Y, et al. Magnetically guided 3-dimensional virtual neuronavigation for neuroendoscopic surgery. Neurosurgery. 2010;66(6 suppl operative):342–353.
24. González Sánchez JJ, Enseñat Nora J, Candela Canto S, et al. New stereoscopic virtual reality system application to cranial nerve microvascular decompression. Acta Neurochir (Wien). 2010;152(2):355–360.
25. Baillard C, Hellier P, Barillot C. Segmentation of brain 3D MR images using level sets and dense registration. Med Image Anal. 2001;5(3):185–194.
26. Barra V, Boire JY. Automatic segmentation of subcortical brain structures in MR images using information fusion. IEEE Trans Med Imaging. 2001;20(7):549–558.
27. Capelle AS, Colot O, Fernandez-Maloigne C. Evidential segmentation scheme of multi-echo MR images for the detection of brain tumors using neighborhood information. Info Fusion. 2004;5(3):203–216.
28. Kaus MR, Warfield SK, Nabavi A, Black PM, Jolesz FA, Kikinis R. Automated segmentation of MR images of brain tumors. Radiology. 2001;218(2):586–591.
29. Descoteaux M, Deriche R, Knösche TR, Anwander A. Deterministic and probabilistic tractography based on complex fibre orientation distributions. IEEE Trans Med Imaging. 2009;28(2):269–286.
30. Broggi G, Ferroli P, Franzini A, et al. CT-guided neurosurgery: preliminary experience. Acta Neurochir Suppl. 2003;85:101–104.
31. Pictet J, Wicky S, Meuli R, van der Klink JJ. Heating effects around resonant lengths of wire during RF excitation. Proc Intl Soc Mag Reson Med. 2001;9:1757.
32. Rezai AR, Phillips M, Baker KB, et al. Neurostimulation system used for deep brain stimulation (DBS): MR safety issues and implications of failing to follow safety recommendations. Invest Radiol. 2004;39(5):300–303.
33. Fitts PM, Posner MI. Human Performance. Belmont, CA: Brooks/Cole Publishing Co; 1967.
34. Helmreich RL, Davies JM. Anaesthetic simulation and lessons to be learned from aviation. Can J Anaesth. 1997;44(9):907–912.
35. Burt DE. Virtual reality in anaesthesia. Br J Anaesth. 1995;75(4):472–480.
36. Helmreich RL. Exploring flight crew behaviour. Soc Behav. 1987;2:63–72.
37. Helmreich RL, Davies JM. Human factors in the operating room: interpersonal determinants of safety, efficiency and morale. In: Aitkenhead AA, ed. Balliere’s Clinical Anaesthesiology: Safety and Risk Management in Anaesthesia. Vol 10. London, UK: Balliere Tindall; 1996:277–296.
38. Klein LW. Computerized patient simulation to train the next generation of interventional cardiologists: can virtual reality take the place of real life? Catheter Cardiovasc Interv. 2000;51(4):528.
39. West A, Hubbold R. Research challenges for systems supporting collaborative virtual environments. In: Proceedings of the Collaborative Virtual Environments CVE’98; 1998; Manchester, UK:11.
40. Aylett R, Cavazza M. Intelligent virtual environments: a state-of-the-art report. In: Proceedings of Eurographics 2001 STARs. 2001;3:87-109.
41. Ström P, Hedman L, Särnå L, Kjellin A, Wredmark T, Felländer-Tsai L. Early exposure to haptic feedback enhances performance in surgical simulator training: a prospective randomized crossover study in surgical residents. Surg Endosc. 2006;20(9):1383–1388.
KEY WORDS: Neurosurgery; Surgical plan; 3-D reality; Tumors; Vascular disease
Copyright © by the Congress of Neurological Surgeons