Kockro, Ralf A. MD*‡; Reisch, Robert MD*‡; Serra, Luis PhD§; Goh, Lin Chia BSc¶; Lee, Eugene BSc¶; Stadie, Axel T. MD‖
In the past 2 decades, intraoperative navigation technology has changed preoperative and intraoperative strategies and methodology tremendously.1,2 Today, intraoperative navigation systems represent the gold standard to determine the size and location of a craniotomy,3,4 to locate deep-seated lesions,5 and to estimate the extent of resection.6 However, most currently available systems still rely on the display of mainly 2-dimensional (2-D) navigational information, although the task of navigating toward specific structures is inherently 3-dimensional (3-D). Some 3-D display features are available, but they are limited to extracting and visualizing simple structures (represented by graphic objects), which hardly match the 3-dimensionality of a neurosurgical operation. Although neurosurgeons consider this disadvantage important,1,7 research to bring graphic 3-dimensionality into the navigation process is slow.
We understand the term virtual reality as the technology that allows a user to interact naturally with a 3-D computer-generated environment, involving both 3-D man-machine interface technology and 3-D displays. In the particular case of preoperative planning and simulation of neurosurgical procedures, the computer-simulated virtual reality environment is the 3-D patient data, which is made up of multimodal imaging series and structures extracted from these data. This has been under evaluation for some time,8-10 and benefits for understanding the pathoanatomy of the lesion and for planning the surgical strategy have been reported by several authors.11-16 Transferring the 3-D planning data of the patient (including multimodality information of relevant structures, measurements, and simulated craniotomies) to the operating room would allow navigating directly according to the preoperative 3-D surgical plan. We anticipate that, when combined with a 3-D user interface conveniently operated by the surgeon, this method of navigating entirely with 3-D graphic data would lead to better intraoperative spatial orientation and more straightforward path finding. Elsewhere, some of the authors of this article have described an improved navigation experience (DEX-Ray) by using 3-D graphics overlaid on the video stream of a handheld lipstick-sized video camera.17 This system enabled a see-through effect that allowed direct understanding of the subsurface surgical anatomy. However, the same improvement added by the DEX-Ray system introduced a new way of working with the navigation system, which was a significant barrier to its adoption. Learning from that, we wanted a system that would maintain as much as possible the familiar interface of today’s navigation systems while adding improved 3-D understanding by means of detailed, stereoscopic 3-D navigational information.
The solution was to link the new stereoscopic system presented here to a commercially available navigation system.
Thus, we developed a system called DexVue, which displays in the operating room a 3-D planning scenario generated preoperatively on a virtual reality workstation called the Dextroscope. DexVue is used in parallel with a standard BrainLAB navigation system, and we report here our initial clinical experiences using DexVue in 21 patients.
Planning With the Dextroscope
Intraoperative navigation with DexVue is based on 3-D surgical planning using a virtual reality workstation called the Dextroscope (Volume Interactions, Singapore). This system has been described before.8-10,16
As a first step, magnetic resonance imaging (MRI) or computed tomographic (CT) data in DICOM (digital imaging and communications in medicine) format are transferred to the planning station, coregistered, and then displayed as stereoscopic objects. Wearing LCD (liquid crystal display) shutter glasses synchronized with a time-multiplexed display, the user reaches with both hands into a stereoscopic environment containing the “floating” patient-specific 3-D imaging data and various image processing and surgical planning tools. Electromagnetic sensors held in both hands convey the user’s 3-D interactions to the system, allowing intuitive real-time 3-D data manipulation. Software tools for segmentation, measuring, volume exploration, virtual tissue removal, and documentation are available within the virtual workspace (Figure 1). At the end of the planning session, the user specifies the structures to be transferred to DexVue for navigation. These structures may include, for example, the volumetric segmentation of a tumor, vascular structures such as a sinus underlying the planned craniotomy, and, based on CT data, a simulated craniotomy or simulated cranial base bone work. The selected data are stored in a USB memory stick for subsequent transfer to the DexVue system. DexVue uses the same rendering methods as the Dextroscope: a combination of volume rendering and isosurface rendering. Both renderings can be displayed simultaneously and overlapping. For example, skin and ventricles may be displayed as isosurfaces, whereas cortex, blood vessels, or tumors are displayed in volume rendering.
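The simultaneous, overlapping display of volume-rendered and isosurface-rendered structures described above ultimately rests on compositing color and opacity samples along each viewing ray. The following Python sketch illustrates the basic back-to-front compositing step; it is a generic illustration of the technique, not the Dextroscope's actual renderer, and scalar gray values stand in for RGB colors:

```python
def composite_back_to_front(samples):
    """Alpha-composite (color, alpha) samples along one viewing ray.

    `samples` are ordered far to near. An opaque isosurface hit can be
    modeled as the farthest sample with alpha 1.0; nearer semitransparent
    volume samples are then blended over it.
    """
    color = 0.0
    for c, a in samples:  # iterate far -> near
        color = a * c + (1.0 - a) * color
    return color
```

In this simplification, a tumor rendered as a semitransparent volume in front of an opaque skin isosurface corresponds to blending the tumor samples over the isosurface sample, which is why both rendering types can coexist in one view.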
DexVue consists of a personal computer running Windows XP housed in a medical cart. The display is a 24-in stereoscopic monitor (Miracube, Pavonine, South Korea) with a resolution of 1024 × 768 (Figure 2).
During surgery, a cordless mouse (MX Air mouse, Logitech, Switzerland) is used by the surgeon in the sterile field (covered with a sterile bag) to control DexVue. The MX Air incorporates inertial sensors that allow it to work either as a normal mouse on a surface or, when lifted from that surface, freely in space (although still controlling only the 2-D cursor, the coordinates of which are mapped to the interface). This allows the surgeon both to operate the DexVue 2-D interface and to interact with the 3-D virtual model, eg, rotating, zooming, or switching graphical components on or off (Figure 3).
For each patient, a T1-weighted 3-D MRI data set served as the common reference between the VectorVision and DexVue systems. The DICOM MRI was loaded in VectorVision, and the corresponding virtual model created from the same DICOM MRI data set was loaded in the DexVue system.
After DexVue was loaded with the 3-D data generated in the Dextroscope, DexVue was linked to VectorVision by means of an Ethernet cable using the VectorVision Link (VVLink) network-based interface software. The BrainLAB proprietary VVLink offers a seamless interface to connect external computer systems to the VectorVision software to take advantage of its data processing and tracking capabilities. It allows transferring instrument tracking coordinates from the VectorVision navigation system to an external system and importing JPEG files (showing, for example, screen captures of DexVue) to be viewed alongside the regular display. The data exchange with the VectorVision system is performed by the VectorVision Link C++ library, a standalone library based on the Visualization Toolkit. This enables the VectorVision real-time navigational tracking data (patient position, position of the probe, and/or focal point of the microscope) to be transferred to the DexVue system. With this navigational information, DexVue can display integrated views of the patient’s virtual model with views of the probe in its position relative to the patient data or show views of the patient’s virtual model from the viewpoint of the surgical microscope or the probe. Before using the system intraoperatively, we tested the reliability of the data transfer and hence the coregistration accuracy of the system with plastic models that had been scanned with CT and MRI.
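Conceptually, the tracking data exchanged over such a link reduces to 4 × 4 homogeneous transforms: a patient-registration matrix and a per-frame probe pose. The following Python sketch shows how a tracked probe tip would be mapped into the coordinate frame of the 3-D model; all function names and matrix conventions here are illustrative assumptions, not the actual VVLink API:

```python
import numpy as np

def make_transform(rotation_deg_z, translation):
    """Build a 4x4 homogeneous transform (rotation about z, then translation).

    Stands in for the matrices a tracking link would deliver; in the real
    system these come from patient registration and the optical tracker.
    """
    t = np.radians(rotation_deg_z)
    m = np.eye(4)
    m[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    m[:3, 3] = translation
    return m

def probe_tip_in_image_space(registration, probe_pose, tip_offset):
    """Map the tracked probe tip into the coordinate frame of the 3-D model.

    registration: tracker-space -> image-space transform (patient registration)
    probe_pose:   probe-space -> tracker-space transform (streamed each frame)
    tip_offset:   tip position in the probe's own frame (tool calibration)
    """
    tip_h = np.append(tip_offset, 1.0)  # homogeneous coordinates
    return (registration @ probe_pose @ tip_h)[:3]
```

With the tip expressed in image space, the navigation display only has to draw the probe glyph at that position inside the preoperatively built virtual model.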
The DexVue software was programmed by Volume Interactions Pte Ltd, Singapore, as a research project. It provides the following features (also see DexVue interface in Figure 2).
DexVue can work in either monoscopic or stereoscopic mode. Stereo display is achieved by a polarizing monitor (Miracube, South Korea), which requires the surgeons to wear passive polarizing glasses to perceive the 3-D image.
Two 3-D navigation modes can be selected: the probe view, which shows the view as it is seen when looking along the handheld probe or the viewing axis of the microscope, or the 3-D view, which allows viewing of the 3-D model from any viewpoint, under the control of the user (simulating, for example, a prone or supine position), and displaying of the relative position of the navigation probe (Figure 4).
The system can be used in 2 modes: the navigation mode, in which the 3-D model and the position of the probe are updated in the 3-D and 2-D windows according to the tracking information, or the inspection mode, in which the last position of the probe relative to the 3-D model can be frozen at will (eg, by pressing a foot switch provided by the system), allowing it to be freely inspected by use of the MX Air mouse: switching on or off 3-D objects, altering transparency, rotation, translation, or zooming.
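The probe view described above is, in effect, a virtual camera placed at the probe tip and oriented along the probe (or microscope) axis. A minimal sketch of such a view-matrix computation follows; the function name and the look-along convention (camera looking down its negative z axis) are our own assumptions, not DexVue's published implementation:

```python
import numpy as np

def probe_view_matrix(tip, direction, up_hint=(0.0, 0.0, 1.0)):
    """World -> camera matrix for a camera at `tip` looking along `direction`.

    `up_hint` resolves the roll about the viewing axis; it must not be
    parallel to `direction`. Camera convention: +x right, +y up, looking
    down -z (common in computer graphics).
    """
    fwd = np.asarray(direction, float)
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(fwd, np.asarray(up_hint, float))
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, fwd)
    view = np.eye(4)
    view[0, :3] = right
    view[1, :3] = true_up
    view[2, :3] = -fwd                      # camera looks down -z
    view[:3, 3] = -view[:3, :3] @ np.asarray(tip, float)
    return view
```

In navigation mode this matrix would be recomputed from each new tracking sample; freezing it while the user rotates a separate model transform with the mouse corresponds to the inspection mode.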
The familiar orthogonal 2-D image planes are displayed alongside the large window containing the 3-D view (Figure 2). In the probe view mode, the probe tip (or the focal point of the microscope) defines the point of cross section on the 3 orthogonal image planes and simultaneously the viewing point toward the 3-D data. A 3-D display showing the 3-D model as created in the Dextroscope is displayed in 1 large window (Figure 3).
The system can work connected to the navigation system using the VVLink or can work as a standalone system, without connection to any navigation system, acting like an independent visualization system.
Twenty-one patients with intracranial lesions (10 female and 11 male) were operated on with DexVue (Table). Their ages ranged from 17 to 79 years; their mean age was 46 years.
TABLE. List of Patients
Preoperatively, DICOM imaging series (MRI, magnetic resonance angiography [MRA], and CT) were imported into the Dextroscope, and a 3-D model consisting of coregistered volumes and segmentations of relevant structures like vessels, tumors, part of the skull base, tentorium, or cranial nerves was created. This virtual model was used by the surgical team to discuss surgical options, to simulate specific approaches, and to establish a surgical plan. More detailed descriptions of the technology of the Dextroscope and its applications for neurosurgical planning have been published previously.8-10
Once the surgical plan was determined, the 3-D model was transferred to the DexVue system in the operating room. The patient was coregistered with the VectorVision system in the usual way, using the BrainLAB “Z-touch” surface detecting probe in most cases. DexVue was linked to the VectorVision navigation system, and intraoperatively, both systems were used for navigation.
Postoperatively, the value of navigating with DexVue was evaluated by the lead neurosurgeon and the assistant by filling in a questionnaire. The questionnaire addressed spatial orientation compared with the BrainLAB system (answered by selecting “strong improvement,” “improvement,” “no improvement,” “worse,” or “significantly worse”); the preferred display for navigation (DexVue 3-D view vs BrainLAB cross sections); and the workflow with respect to the need to put on and take off the glasses during navigation.
In a free format, the surgeons stated benefits or shortcomings of the system in each case (see the Table).
Intraoperatively, both VectorVision and DexVue were available for navigation. For navigational information, the display on both systems was inspected. It was clearly defined that in case of a discrepancy between the systems, the information displayed on VectorVision was to be used for decision making. When DexVue was used for navigation, a circulating nurse assisted the surgeons with putting on and removing the polarizing glasses.
Data Preparation and Setup of the DexVue System
We tested the system in 21 procedures. The DexVue system was used in all cases without technical problems. DexVue was intended to be evaluated in each case by the 2 participating surgeons; however, in 3 patients, the evaluation by the assistant neurosurgeon was not performed because of clinical time constraints. Therefore, a total of 39 evaluation reports were available.
The various intracranial pathologies are demonstrated in the Table.
The preoperative preparation of the data set using the Dextroscope required 25 minutes on average. The setup of the DexVue system consisted of loading the patient’s virtual model, plugging in the Ethernet cable, and activating the VVLink module on the VectorVision system. This process took 5 to 10 minutes.
3-D Data-to-Patient Coregistration
The intraoperative accuracy of the DexVue data was identical to that of the data displayed on VectorVision in all cases. The average registration accuracy as calculated by the system (root mean square) was 2.5 mm. The virtual model displayed on DexVue was spatially correct and in concordance with the data displayed on VectorVision in all cases. This was verified intraoperatively by the surgeons by pointing with the probe to anatomic structures and checking their positions on the respective VectorVision and DexVue navigation screens.
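The registration accuracy figure reported by a navigation system is, in essence, a root-mean-square distance over coregistered landmark pairs. A simplified stand-in for that computation is sketched below; the real system derives it internally from its fiducial or surface registration, so this is only an illustration of the metric:

```python
import math

def rms_registration_error(measured, reference):
    """Root-mean-square distance (in mm) between paired 3-D landmarks.

    `measured` are landmark positions reported by the tracking system after
    registration; `reference` are the same landmarks in image space.
    """
    squared = [sum((m - r) ** 2 for m, r in zip(p, q))
               for p, q in zip(measured, reference)]
    return math.sqrt(sum(squared) / len(squared))
```

A lower value indicates a tighter fit of the patient to the image data; a mean value of 2.5 mm, as in this series, is the average of this statistic across registrations.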
Navigation With the DexVue Stereoscopic Display
* When asked about the value of spatial orientation with DexVue compared with VectorVision, the neurosurgeons stated strong improvement in 24 of the 39 evaluations (62%) and improvement in 15 (38%). There was no evaluation in which a neurosurgeon reported no improvement, worse, or significantly worse spatial orientation.
* In all cases, the participating neurosurgeon preferred navigating with the data displayed on the DexVue stereo screen compared with the VectorVision system.
* Using the DexVue stereoscopic display requires wearing polarizing glasses, which need to be put on with the help of a nurse. Although the glasses allow a clear view, they are not intended to be worn throughout the operation. In 37 of 39 evaluations, the surgeons felt mildly disturbed by the fact that the glasses need to be worn during the navigation procedure.
* In general remarks (Table) written on the questionnaire, the 6 surgeons who participated in this study stated that DexVue provided easy understanding of the intraoperative anatomic situation by allowing freezing of the 3-D data and uncoupling and inspection of the data from different angles and scales of magnification; simplified the craniotomy by clearly showing its size and shape and underlying structures in relation to it; and helped by providing comprehensive spatial orientation during dissection of the lesion, especially with respect to vasculature.
ILLUSTRATIVE CASE REPORT
This 42-year-old female patient underwent surgery for a meningioma of the medial sphenoid ridge 2 years earlier. Progressive visual disturbances in her right eye led to an MRI, which revealed a recurrent meningioma on the planum sphenoidale with expansion into the right optic canal and consecutive compression of the right optic nerve. Preoperatively, a contrast-enhanced sagittal MRI volume data set and CT scan were obtained (Figure 5). These data were loaded in the Dextroscope, and the surgical pathoanatomy such as the optic system, tumor, neighboring arteries, and bony parts of the skull was segmented. The surgical planning revealed that a 2-cm right subfrontal approach using a supraorbital skin incision was suitable to remove this tumor.
After the DexVue system was set up in the operating room, navigation started by identifying the craniotomy according to the preoperatively created plan. This was an essential step because we planned a right subfrontal minicraniotomy via an eyebrow incision that had to be positioned precisely to reach the target zone. The craniotomy was easy to carry out by following the outlined edges of the planned craniotomy with the probe tip and subsequently with the craniotome as planned and visualized in the navigational 3-D model. During the course of surgery, navigation was used in several phases. In a first phase, it served to help identify the tumor and adjacent structures such as the optic nerve and carotid artery (Figure 6). During this phase, the surgeon benefitted from the DexVue model because the navigation took place in the already known preoperatively constructed virtual patient model. This complemented the visible scene as seen through the microscope because certain hidden structures like the clinoid process and the carotid artery could be clearly visualized. After large tumor parts had been removed, neuronavigation additionally helped in identifying the intracanalicular tumor aspects. Compared with VectorVision, DexVue displayed the anatomic relation of the tumor remnants, the optic system, and the adjacent vessels in exact detail. In this phase, the DexVue system was also used in a decoupled mode to inspect the virtual patient model in 360° to fully understand the location of the tumor remnant in and around the optic canal. Figure 6A shows the view generated by the DexVue system compared with the surgeon’s intraoperative view.
The tumor was removed totally. Postoperatively, the patient’s vision improved remarkably, and postoperative MRI scans after 4 months did not show any tumor remnant.
Neurosurgical interventions are challenging procedures in a delicate and complex 3-D space, and a core component of successful surgery is a precise understanding of the anatomy of the surgical target and the corridor that leads to it. This 3-D understanding starts with the planning of the surgery, which lays the foundation of all surgical steps to follow. Over the past 2 decades, interactive 3-D image processing platforms, often associated with the term virtual reality technology, have been reported in neurosurgery as promising planning tools.7,18 Encouraging experiences have been reported in surgical planning and simulation10,11,13,16 and in surgical training.15,19 However, until now, a major limitation of these systems has been the fact that although elaborate 3-D planning could be achieved, the 3-D graphic models could not be transferred to the operating room to be used for intraoperative navigation.10,17
Since 2003, the Department of Neurosurgery of the University Hospital of Mainz (Mainz, Germany) has been working extensively with the 3-D neurosurgical planning station called Dextroscope, planning approximately 800 neurosurgical procedures and still using it on a regular basis.14,16 In 2006, Volume Interactions (Bracco AMT) developed a technology to navigate intraoperatively with the 3-D data of the Dextroscope. The system, called DEX-Ray, was a standalone navigation system that allowed an intraoperative overlay of the preoperative planning data of the Dextroscope over a video stream generated by a lipstick-sized camera integrated into a handheld pointing device. First clinical results were published by Kockro et al17 in 2009, and the results of multicenter clinical trials conducted at the National Neuroscience Institute Singapore and Hospital Clinic i Provincial de Barcelona (Barcelona, Spain) were very encouraging. However, the use of the system was limited by a time-consuming setup procedure, which, given that it was an experimental system, had to take place in addition to the setup of a standard navigation system. Furthermore, because DEX-Ray was an augmented-reality system obtaining its real-world images from a handheld probe, the presentation of the 3-D graphics was monoscopic (only 1 video camera) and constrained to the viewing angle of the video camera; the graphics therefore could not be uncoupled from the camera and used independently for inspection of the patient data. Therefore, the decision was made to develop a system that would use the tracking capabilities of an already existing navigation system and at the same time display the 3-D graphics created during the planning process, in stereo and with an intuitive user interface to control the 3-D data from within the sterile intraoperative setting. This system was called DexVue.
It works by connecting to a commercially available navigation system (BrainLAB) via an Ethernet connection. The result is a combined navigation platform that allows working with customized 3-D data generated during the planning procedure while retaining the features provided by the standard navigation system. In 21 cases, we added this system to our surgical routine and evaluated its benefit for navigation.
All 21 procedures of this series were planned in the Dextroscope. The data transfer, 3-D reconstruction, and fusion of CT and MR data, as well as the actual planning procedure, worked without technical problems. The time spent for data fusion and segmentation in our series was on average 25 minutes, which was significantly faster than reported in our initial series of 106 patients planned with the Dextroscope.16 We do not consider the time spent for planning merely a necessity to build the virtual model for navigation; we consider it a useful learning process specific to each patient, during which the surgeon deals extensively with the patient’s individual pathoanatomic situation.
Connecting VectorVision and DexVue
Setting up DexVue worked without technical issues. The coupling of VectorVision and DexVue took place via the BrainLAB proprietary connection software VVLink, which is available as part of the BrainLAB product line.20 The advantage of this technology is that it facilitates the communication between a commercial navigation system and a research platform, allowing the testing of novel features and algorithms in a clinical setting while preserving a stable functioning of the navigation system during the routine clinical procedure.21 Reports on the clinical use of this technology are scarce. Elhawary and coworkers21 used VVLink to enable intraoperative real-time querying of white matter tracts during frameless stereotactic navigation in 5 cases. The system was reliable in their series; however, they had to use bridging software to convert data from VVLink to be imported in their 3-D tractography software. They stated that using this bridging led to numerous software failures; hence, they found it desirable to have a direct connection from the VVCranial software to their 3-D tractography software.
Sergeeva and colleagues22 used the VVLink successfully to integrate 3-D ultrasound into a neuronavigation system; however, their system was tested only in a laboratory setting.
We perceived the 3-D graphics displayed by DexVue as an added value even before skin incision because the 3-D model generated preoperatively could directly be viewed in coregistration with the actual patient, hence providing instant spatial understanding of the subsurface anatomy and the structures in the depth of the surgical target area. Any structure of the virtual model, including the virtual skin silhouette, can be turned transparent, revealing the underlying structures. This advantage became most obvious in the case of craniotomies next to the large venous sinuses, the supraorbital region, the middle fossa base, or the retrosigmoid region, or for craniotomies above a specific cortical area or a sulcus that had been identified for dissection toward a subcortical target. Because the most suitable shape and size of the craniotomy had been planned preoperatively and was part of the 3-D data set available in the operating room, the planned craniotomy could be seen in direct relation to the actual skin surface. This facilitated finding the optimal line of skin incision and, once at bone level, allowed the craniotomy to be carried out as displayed.
During navigation into the depth of the surgical corridor (or into the target area along the tracked viewing axis of the microscope), the magnification and focal point served as spatial references for DexVue to generate a virtual view that resembled the surgeon’s microscopic view. Most surgeons opted to navigate with this mode rather than with axial, coronal, and sagittal planes and regarded it as an immediate advantage to navigate with the virtual model that they had built and familiarized themselves with during the planning process. When navigating within 1 or several subvolumes, a crop plane positioned at the tip of the navigation pointer and perpendicular to its axis (or to the line of sight of the microscope) prevented confusion caused by structures positioned between the navigation point and the surface of the surgical corridor. In the area of the cranial base or during vascular procedures, the 3-D computer graphic display resembling the direct view toward structures such as the neck of an aneurysm and its connected vessels, the position of the third nerve behind a tumor, or the distance to the optic canal was regarded as a novel navigational experience, especially because the data were displayed on a stereo monitor and the depth could therefore be perceived even when the image was static. Spatial orientation was enhanced by the option to uncouple the 3-D data from the coregistered viewpoint and to freely rotate and zoom with the MX Air mouse to inspect it closely. In fact, it was thought that, especially in areas in which the overall position of the focal point of the microscope had generally been understood, the feature of inspecting the surrounding structures was of greater value than the mere knowledge derived from the position of the probe. Naturally, to continue the navigation, the software provided a feature that would snap the uncoupled 3-D objects back to the correct coregistered navigational position at the desired time.
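The crop plane described above amounts to a signed-distance test against a plane through the probe tip and perpendicular to the probe axis. A minimal sketch of that test follows; the function name and the axis convention (axis pointing in the direction of insertion, away from the surgeon) are hypothetical, since the actual DexVue implementation is not published:

```python
def behind_crop_plane(point, probe_tip, probe_axis):
    """Return True if `point` should remain visible.

    The crop plane passes through `probe_tip` perpendicular to `probe_axis`
    (a unit vector pointing in the direction of insertion). Points with a
    nonnegative signed distance lie at or beyond the tip and are kept;
    points between the surgeon and the tip are cropped away.
    """
    d = sum((p - t) * a for p, t, a in zip(point, probe_tip, probe_axis))
    return d >= 0.0
```

Applying this predicate per structure (or per volume sample) hides everything in the corridor above the current navigation point while leaving the anatomy at and beyond the tip visible.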
Navigation with 3-D graphics in cranial and spinal neurosurgery has experienced increasing interest in the past decades (especially because 3-D graphical rendering power and stereoscopic viewing technology have improved tremendously). However, the commercially available navigation systems have barely changed. Commercially available navigation is still based mainly on 2-D cross sections of MRI or CT data, and it remains a challenge to understand exactly the position of the pointer tip or microscope focal point and, most important, to comprehend structures of interest beyond and next to the instantaneous position of the probe in 3-D space.9 In addition, the currently available planning features are rudimentary, with limited options of segmentation, multimodality display, realistic simulation of intraoperative viewpoints, or simulation of bone work. Multimodality display is technically possible on many commercial navigation systems; however, it is limited mainly to 2-D image planes, and even this feature is hardly used routinely.4,32 Most navigation systems also offer 3-D surface reconstructions of the outline of the skin of the head (as polygonal meshes) or other presegmented anatomic structures. However, these are simple graphic objects originating from 1 imaging series, and they can hardly be viewed in direct context and scale to the surgical site.23,24 For that reason, we did not find much use for the 3-D features of the BrainLAB system installed in our institution, and when evaluating DexVue, we compared it with navigating with axial, coronal, and sagittal image planes.
Several research groups reported profound benefits of 3-D navigation techniques. Rohde et al25 displayed segmented 3-D CT angiographic data on a regular navigation system and adjusted the correct intraoperative viewpoint by manual rotation. Despite this rather cumbersome procedure and the lack of a stereoscopic display, they reported that “this technique has the potential to improve operative results by reduction of the surgical trauma and avoidance of intraoperative complications.” From their experience of operating on 110 patients with a navigation system (CBYON, Palo Alto, California)24 that allowed the generation of volumetric, scalable, and viewpoint-adaptable graphics with translucent surface modulation, Rosahl et al26 reported improved understanding of hidden structures and an intraoperative “déjà-vu” experience based on the 3-D planning process. Unsgaard et al27 displayed combined preoperative MRA and intraoperative ultrasound angiography on a red-green stereoscopic display linked to a navigation system. They reported significant enhancement of the surgeon’s perception of the vascular architecture, allowing direct identification of feeding vessels with a pointer. They also concluded that this technology improved the identification and clipping of arteriovenous malformation feeders in the initial phase of the operation. In a similar study of 9 patients, Mathiesen et al28 used a navigated red-green stereoscopic display of MRA and 3-D ultrasound for resection of arteriovenous malformations. They regarded the stereoscopic visualization of preoperative MRA as a powerful means to construct a mental 3-D picture of the arteriovenous malformation architecture and feeder anatomy even before skin incision, and they concluded that “this technology improved the quality and flow of surgery.”
In contrast to Unsgaard et al27 and Mathiesen et al,28 who used an anaglyphic display to generate a stereoscopic image, we used a polarizing monitor and glasses. The benefit here resides in the preservation of colors, allowing the preoperatively generated 3-D models to be inspected intraoperatively with all their color-coded details and shades, making the identification of relevant structures like cranial nerves or vessel branches easier.
Some navigation systems offer image injection of navigational data into 1 optical channel of the microscope; however, the benefits of this technology are controversial. Several groups who have worked with image injection into the microscope faced the challenge of overcoming the phenomenon that, despite great emphasis on adding virtual depth cues like transparency or shading, it is hard to achieve a realistic perception of the virtual structures being situated below the visible surface.17,29-31 Nevertheless, augmented-reality techniques would ultimately provide the most direct and clearest navigational information because they provide a simultaneous view of both the real surgical scene and the graphic navigational environment beyond the visible surfaces.17,32 However, because the technical challenges of this technology, especially when it is applied in stereo, are not trivial, the mere use of a clear stereoscopic display to present the navigation data is a major leap forward.33 We perceived working with the 3-D data on the stereo monitor as a new quality of navigation going far beyond today’s method of “point and identify” navigation. Inspecting and discussing the virtual models while manipulating them with the MX Air mouse in 3-D space revealed a refreshing new source of information, adding balance and confidence to the course of surgery.
Limitations and Outlook
The time involved for the creation of the 3-D models during the preoperative planning process has been reduced over the years mainly by improved coregistration and segmentation algorithms and refined user interfaces. However, the segmentation process most likely cannot be fully automated or left to junior residents, students, or technicians. If intraoperative decisions are based on postprocessed data rather than the original image planes, the lead surgeon must be involved in the segmentation process or at least carefully verify its results.
In 37 of the 39 evaluations of our series, the surgeons stated that putting on the polarizing glasses to look at the navigation monitor and taking them off to continue operating was cumbersome and interrupted workflow and that a display allowing stereo perception without the need for glasses would be preferred. Even though basic principles of stereo vision and optical techniques to view pictures in stereo were understood as early as 1838,34 developing an autostereoscopic display to view data naturally in 3-D without the use of glasses is technically challenging. Most currently available displays lack satisfactory resolution and image clarity. The “sweet spot” defining the viewing angle toward a clear stereo image usually is too small for reliable clinical use, and it prevents multiple users from perceiving good image quality. An autostereoscopic monitor for viewing volumetric data developed by a Swedish company (SETRED, Stockholm) provides 30° of continuous parallax with a high-resolution 3-D image. Parallax means that one sees different perspectives of the volume by moving the head sideways, the same way that one sees different perspectives when looking around a real object. We have tested this system by loading our own imaging data, and we found the level of clarity suitable for medical image analysis. Displaying a series of 314 volumetric maximum-intensity projections of MRA data on the SETRED monitor, Abildgaard et al35 reported significantly improved structural visualization and perception.
Neurosurgical procedures are manipulations in complex 3-D space. The anatomy is spatial, dissection is spatial, and so is our understanding of the surgical strategy. Hence, we believe that complex neurosurgical procedures should be planned with 3-D data and navigated in 3-D space. This improves spatial orientation and confidence and reduces guesswork and exploratory dissection. Specifically, the use of a stereoscopic monitor improves the 3-D clarity of structural display and adds realism to the navigation process. Surgical corridors surrounded by synthetic 3-D navigation data, ideally even from intraoperative sources, will ultimately simplify many neurosurgical procedures, providing the basis for safer surgery and better outcomes.
The DexVue system was developed by Volume Interactions Pte Ltd. DexVue is an experimental system and is not available as a commercial product. Dr Kockro, Dr Serra, L.C. Goh, and E. Lee were cofounders of Volume Interactions Pte Ltd; however, because the company has been discontinued, they have no financial interest related to the technology described here. The other authors have no personal financial or institutional interest in any of the drugs, materials, or devices described in this article.
We would like to thank all our colleagues and operating room nursing staff of the Department of Neurosurgery, University Hospital of Mainz, Mainz, Germany, for supporting this work. We gratefully acknowledge the inspiring support of our former teacher Axel Perneczky, who passed away in 2009.
1. Barnett GH, Nathoo N. The modern brain tumor operating room: from standard essentials to current state-of-the-art. J Neurooncol. 2004;69(1-3):25–33.
2. Slavin KV. Neuronavigation in neurosurgery: current state of affairs. Expert Rev Med Devices. 2008;5(1):1–3.
3. Tirakotai W, Hellwig D, Bertalanffy H, Riegel T. Localization of precentral gyrus in image-guided surgery for motor cortex stimulation. Acta Neurochir Suppl. 2007;97(pt 2):75–79.
4. Wagner W, Gaab MR, Schroeder HW, Tschiltschke W. Cranial neuronavigation in neurosurgery: assessment of usefulness in relation to type and site of pathology in 284 patients. Minim Invasive Neurosurg. 2000;43(3):124–131.
5. Spivak CJ, Pirouzmand F. Comparison of the reliability of brain lesion localization when using traditional and stereotactic image-guided techniques: a prospective study. J Neurosurg. 2005;103(3):424–427.
6. Grunert P, Darabi K, Espinosa J, Filippi R. Computer-aided navigation in neurosurgery. Neurosurg Rev. 2003;26(2):73–99.
7. Elder JB, Hoh DJ, Oh BC, Heller AC, Liu CY, Apuzzo ML. The future of cerebral surgery: a kaleidoscope of opportunities. Neurosurgery. 2008;62(6 suppl 3):1555–1579.
8. Kockro RA, Serra L, Tsai YT, et al. Planning of skull base surgery in the virtual workbench: clinical experiences. Stud Health Technol Inform. 1999;62:187–188.
9. Serra L, Hern N, Guan CG, et al. An interface for precise and comfortable 3D work with volumetric medical datasets. Stud Health Technol Inform. 1999;62:328–334.
10. Kockro RA, Serra L, Tseng-Tsai Y, et al. Planning and simulation of neurosurgery in a virtual reality environment. Neurosurgery. 2000;46(1):118–135.
11. Anil SM, Kato Y, Hayakawa M, Yoshida K, Nagahisha S, Kanno T. Virtual 3-dimensional preoperative planning with the Dextroscope for excision of a 4th ventricular ependymoma. Minim Invasive Neurosurg. 2007;50(2):65–70.
12. Gorman PJ, Meier AH, Krummel TM. Simulation and virtual reality in surgical education: real or unreal? Arch Surg. 1999;134(11):1203–1208.
13. Wong GK, Zhu CX, Ahuja AT, Poon WS. Craniotomy and clipping of intracranial aneurysm in a stereoscopic virtual reality environment. Neurosurgery. 2007;61(0):564–568.
14. Kockro RA, Stadie A, Schwandt E, et al. A collaborative virtual reality environment for neurosurgical planning and training. Neurosurgery. 2007;61(5 suppl 2):379–391.
15. Lee CK, Tay LL, Ng WH, Ng I, Ang BT. Optimization of ventricular catheter placement via posterior approaches: a virtual reality simulation study. Surg Neurol. 2008;70(3):274–277.
16. Stadie AT, Kockro RA, Reisch R, et al. Virtual reality system for planning minimally invasive neurosurgery. Technical note. J Neurosurg. 2008;108(2):382–394.
17. Kockro RA, Tsai YT, Ng I, et al. Dex-ray: augmented reality neurosurgical navigation with a handheld video probe. Neurosurgery. 2009;65(4):795–807.
18. Spicer MA, Apuzzo ML. Virtual reality surgery: neurosurgery and the contemporary landscape. Neurosurgery. 2003;52(3):489–497.
19. Caversaccio M, Langlotz F, Nolte LP, Häusler R. Impact of a self-developed planning and self-constructed navigation system on skull base surgery: 10 years experience. Acta Otolaryngol. 2007;127(4):403–407.
20. Papademetris X, DeLorenzo C, Flossmann S, et al. From medical image computing to computer-aided intervention: development of a research interface for image-guided navigation. Int J Med Robot. 2009;5(2):147–157.
21. Elhawary H, Liu H, Patel P, et al. Intraoperative real-time querying of white matter tracts during frameless stereotactic neuronavigation. Neurosurgery. 2011;68(2):506–516.
22. Sergeeva O, Uhlemann F, Schackert G, Hergeth C, Morgenstern U, Steinmeier R. Integration of intraoperative 3D-ultrasound in a commercial navigation system. Zentralbl Neurochir. 2006;67(4):197–203.
23. Gildenberg PL, Labuz J. Use of a volumetric target for image-guided surgery. Neurosurgery. 2006;59(3):651–659.
24. Shahidi R, Bax MR, Maurer CR Jr, et al. Implementation, calibration and accuracy testing of an image-enhanced endoscopy system. IEEE Trans Med Imaging. 2002;21(12):1524–1535.
25. Rohde V, Hans FJ, Mayfrank L, Dammert S, Gilsbach JM, Coenen VA. How useful is the 3-dimensional, surgeon's perspective-adjusted visualisation of the vessel anatomy during aneurysm surgery? A prospective clinical trial. Neurosurg Rev. 2007;30(3):209–216.
26. Rosahl SK, Gharabaghi A, Hubbe U, Shahidi R, Samii M. Virtual reality augmentation in skull base surgery. Skull Base. 2006;16(2):59–66.
27. Unsgaard G, Ommedal S, Rygh OM, Lindseth F. Operation of arteriovenous malformations assisted by stereoscopic navigation-controlled display of preoperative magnetic resonance angiography and intraoperative ultrasound angiography. Neurosurgery. 2005;56(1 suppl):281–290.
28. Mathiesen T, Peredo I, Edner G, et al. Neuronavigation for arteriovenous malformation surgery by intraoperative three-dimensional ultrasound angiography. Neurosurgery. 2007;60(4 suppl 2):345–350.
29. Edwards PJ, King AP, Maurer CR Jr, et al. Design and evaluation of a system for microscope-assisted guided interventions (MAGI). IEEE Trans Med Imaging. 2000;19(11):1082–1093.
30. King AP, Edwards PJ, Maurer CR, et al. A system for microscope-assisted guided interventions. Stereotact Funct Neurosurg. 1999;72(2-4):107–111.
31. Nijmeh AD, Goodger NM, Hawkes D, Edwards PJ, McGurk M. Image-guided navigation in oral and maxillofacial surgery. Br J Oral Maxillofac Surg. 2005;43(4):294–302.
32. Liao H, Ishihara H, Tran HH, Masamune K, Sakuma I, Dohi T. Precision-guided surgical navigation system using laser guidance and 3D autostereoscopic image overlay. Comput Med Imaging Graph. 2010;34(1):46–54.
33. van Beurden MHPH, IJsselsteijn WA, Juola JF. Effectiveness of stereoscopic displays in medicine: a review. 3-D Res. 2012;3(1):1–13.
34. Wheatstone C. Contributions to the physiology of vision. Part the first. On some remarkable, and hitherto unobserved, phenomena of binocular vision. Philos Trans R Soc Lond. 1838;128:371–394.
35. Abildgaard A, Witwit AK, Karlsen JS, et al. An autostereoscopic 3D display can improve visualization of 3D models from intracranial MR angiography. Int J Comput Assist Radiol Surg. 2010;5(5):549–554.