Christopher, Lauren A. PhD*; William, Albert MS‡; Cohen-Gadol, Aaron A. MD, MSc§
Neurosurgery is a 4-dimensional process that requires the operator to move through time in an anatomic 3-dimensional (3-D) space, so the requirements for 3-D imaging and displays must meet or exceed the naked-eye and microscopic surgery reference in both resolution and 3-D perception. Combining real-time stereo data with preoperative data for image-guided surgery (IGS) is where this 3-D technology can make a significant impact beyond the current practice. Providing students and clinicians a better experience of the complex 3-D anatomy of the cerebrovascular structures can significantly affect neurosurgical education. For education, the ability to edit, merge, and overlay recorded surgery with anatomic data is of key importance for training surgeons. Both the neurosurgical training and the operating room applications of 3-D display technology are discussed in this article.
The current clinical practice in neurosurgical imaging separates the surgical task into 2 steps. First, the planning is done against 3-D scans. These scans are available for viewing in slices or rendered as a volume but frequently viewed in 2 dimensions (2-D). Many visual depth cues such as shading, lighting, and vanishing points are used to create the impression of a 3-D view on a 2-D screen. Once the planning is done, the second step relies on the surgeon's trained eye and the views through the microscope used in the operating theater, where the planning scans are typically available only for separate viewing.
The field of IGS, which began in the late 1980s, has grown rapidly, especially over the last 10 years. As discussed by Peters,1 IGS for neurosurgery has been successfully deployed in surgery, and in stereotactic radiosurgery, it is paired with robotic manipulation of the patient and the radiation sources. In many cases, the clinician still develops these 3-D therapy plans on conventional 2-D displays. Mixed-reality viewing, in which the clinician has both the preoperative and patient data aligned and registered with video from surgery, is an emerging research area. This is due to continuing improvement in the hardware and algorithms applied to these IGS tasks. A 2005 augmented-virtuality article by Paul et al2 presented a system that takes the video data from the neurosurgical field, creates a depth model from the left and right camera views, and renders this live view to a (2-D) monitor registered to the preoperative surface models generated in surgical planning.
Education is a second application of 3-D display. During their neurosurgical training, residents begin studying 2-D anatomy from textbooks. The resident staff members also learn from 3-D models of anatomy, 3-D cadaver dissections, and finally observation and practice in the surgical theater. The computer visualization tools that create 3-D interactive models from preoperative scans have also been an important advancement. Some examples of these computer visualizations are 3-D movies developed by William3 for student training. Figure 1 shows a computer-generated model for orthopedic surgery, a rendering of knee anatomy. An article by Alaraj et al4 in 2011 extended the 3-D world to virtual reality training in neurosurgery. These authors cited several procedures currently using immersive virtual reality training methods, including ventriculostomy catheter insertion and endoscopic and endovascular training.
Even though both clinical practice and education demand data that are inherently 3-D and IGS systems will be more ubiquitous in the near future, relatively few studies to date have discussed the application of 3-D displays in neurosurgery. Viewing in 3-D was shown to improve surgical education in a study by Abildgaard et al5 in which the authors demonstrated that the clinical task of identifying intracranial arterial markings was significantly improved with the autostereoscopic display compared with the standard 2-D display. In addition, Henn et al6 confirmed that the clinical education task can be improved with stereo vision. A similar finding by Ilgner7 in otorhinolaryngology showed that students' retention and task performance were improved by 3-D display. Stereoscopic vision for medical content has also been studied in general terms by William3,8 and William and Bailey.9
Because IGS is an emerging field, a variety of terms are used to describe the components of the system. To clarify the terminology, Kersten-Oertel et al10 described a taxonomy of common terms that define the IGS workflows and technical framework. In our work, we investigate the display devices. In particular, we use the multiview 3-D autostereoscopic display, the 3-D binocular stereoscopic display, and the 2-D display. Figure 2 shows an operating room setup for this trio of displays.
Any 3-D display device must adapt to the clinical and teaching environment. For teaching, a standard computer graphics program such as Maya software by Autodesk11 is used to develop anatomic models into a 3-D training movie. This program was used to generate the views of knee anatomy shown in Figure 1. In the clinical setting, the real-time stereo images originate from the operating microscope camera (a stereo pair) and can be displayed and recorded. These data can be handled in 3-D visualization programs in which the user can manipulate the data. Many of these tools are based on software from Kitware known as the Visualization ToolKit (VTK).12,13 Our work has developed a workflow path from these sources to the 3-D autostereoscopic display.
Here, we describe our experience with 3-D display devices in 2 potential applications. The feasibility of 3-D display is shown both for intraoperative tasks and for training the future generation of neurosurgeons. Presented are the combinations of the required hardware and software for the 3-D display and snapshots of the resulting recorded surgeries. The strengths and weaknesses of the 3-D techniques are discussed.
Anaglyph is a well-known, standard practice for showing stereo images in the print media. Our methods for producing anaglyph images start with a stereo pair and use standard software tools to create the red-blue image. All images captured from the cameras are recorded in the format provided by the vendors, with left and right views available either side by side or top to bottom, and the digitized images are stored on a computer. These images are available for display on the stereo or autostereo monitors. Figure 3 is an example of an anaglyph stereogram of a neurosurgical procedure. The benefit of this type of imaging is that glasses with these filters are common and cheaply available. The drawback is the loss of color quality through the stereogram, which can be very important in surgical images, especially for vascular structures. The viewer may perceive that color has been lost so that the video appears black and white. Please see the link below for the anaglyph video, which is suitable to view with color filter glasses, of a right frontotemporal craniotomy for clip ligation of a middle cerebral artery aneurysm (http://bcove.me/w9y98tus). Please note that the same video is shown in different formats below.
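The red-blue composition described above can be illustrated with a minimal sketch (not the authors' production pipeline, which uses vendor tools): given a side-by-side stereo frame, the red channel is taken from the left view and the green and blue channels from the right view, which is exactly why color fidelity suffers.

```python
import numpy as np

def make_anaglyph(side_by_side: np.ndarray) -> np.ndarray:
    """Compose a red-cyan anaglyph from a side-by-side stereo RGB frame.

    side_by_side: H x 2W x 3 uint8 array, left view in columns [0, W),
    right view in columns [W, 2W), as recorded by the camera systems.
    """
    h, w2, _ = side_by_side.shape
    w = w2 // 2
    left = side_by_side[:, :w]
    right = side_by_side[:, w:]
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]     # red channel from the left eye
    anaglyph[..., 1:] = right[..., 1:]  # green and blue from the right eye
    return anaglyph
```

Viewed through red-blue filter glasses, each eye then receives its intended view; the original color information in the discarded channels is lost, which accounts for the near-monochrome appearance noted above.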
Stereoscopic Imaging With Polarized Displays
The operating theater has been set up with 2 available operating microscope systems capable of stereoscopic recording connected to 3-D display systems. The 2 systems include 1 from TrueVision Systems that requires a stereoscopic camera to be mounted on the commercially available operating microscopes and 1 from Zeiss Meditech that records stereoscopic images using 2 built-in (left and right eye) cameras. Included in both systems is the 3-D display screen paired with glasses and personal computer-based 3-D video storage with editing software. Intradural portions of procedures such as tumor resection and aneurysm clip ligation were recorded and edited for training and presentation. Additional neurosurgical training recordings were made of cadaver dissections.
The mounting camera for the TrueVision system is fitted with 2 high-definition television (HDTV)-resolution sensors that provide 1080p HDTV-quality image capture. The signals are combined within the stereoscopic camera and fed to a recording and processing system in a left-to-right side-by-side manner. Capable of interfacing with most commercially available 3-D displays, the processing system in this instance formats the signal to a row-interleaved, line-by-line output using a standard high-definition multimedia interface protocol suitable for 3-D. In the 46-in liquid crystal display (LCD) HDTV 3-D display, the odd lines are circularly polarized in 1 direction, and the even lines use the opposite polarization. This matches the passive circularly polarized stereo glasses, so that the left eye receives 1 polarized left image and the right eye receives the oppositely polarized image.
The Zeiss system currently uses 2 standard definition cameras built into the microscope and handles the images in a top-to-bottom manner for display on an LCD 3-D display. The Zeiss display also uses polarization and polarized glasses for viewing 3-D. For both the Zeiss and TrueVision systems, these 2 polarized images can be as high as the resolution of the microscope cameras but may be lower because of multiplexing needed for the data transmission bandwidth to the display and because of the interleaving necessary for polarization. The viewer’s brain then processes the 2 images, inferring depth from their spatial differences (also called disparity).
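The row-interleaved output described above can be sketched as a simple merge of the 2 views, one per scan line. This is an illustrative sketch, not vendor code; it also makes visible why each eye sees at most half the panel's vertical resolution, a compromise noted later in the discussion of classroom displays.

```python
import numpy as np

def row_interleave(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Interleave two equal-sized views row by row for a passive-polarized
    panel: even rows carry the left view, odd rows the right view.
    Each eye therefore receives only half the vertical resolution.
    """
    assert left.shape == right.shape
    out = left.copy()
    out[1::2] = right[1::2]  # odd rows replaced by the right-eye view
    return out
```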
These left-to-right (or top-to-bottom) images and movies are recorded and can be edited and replayed in 3-D on various displays. The editing software allows several standard video output formats, including audio video interleave (.avi). For example, in the TrueVision system, the .avi file contains the side-by-side left-to-right view and the audio track. Figure 4 shows a left-to-right stereo example of a cadaver dissection of a left-sided frontotemporal craniotomy and sectioning the dural fold along the lateral aspect of the superior orbital fissure.
Compared with the current standard of direct viewing through the surgical microscope, the camera source can limit image reproduction quality in the system. To create a high-definition-resolution image, the optics must provide a good match to the imager chip of the camera. The best systems have accurate color reproduction, good resolution, and good sensitivity. Our experience with several systems indicates that this is technically feasible. Please see the online side-by-side video in the link below of a right frontotemporal craniotomy for clip ligation of a middle cerebral artery aneurysm (http://bcove.me/pkzlvczs). This is the same video mentioned above in anaglyph format.
Autostereoscopic Imaging With Multiview Display
For the autostereo display, the stereo pair data are converted to 2-D–plus–depth format and viewed on the commercially available 3DFusion autostereoscopic display screen. This display system uses the depth information to render the scene in 3-D with multiple views. The current technology renders 9 views, and these views are displayed through a lenticular lens arrangement in front of a standard LCD. The lenses gather the light from each view, directing it to the viewer's left and right eyes as separate images. The brain again infers depth from the different spatial appearance in the paired views. In addition, because the display allows some "look around," the cue of the viewer's motion disparity can be used by the brain to infer depth. Figure 5 presents an intraoperative picture from clip ligation of a posterior inferior cerebellar aneurysm with the depth map (on right) automatically generated from the left-to-right views captured at the level of the camera of the microscope. The lighter shade of gray corresponds to a closer depth, and the darker gray is farther away from the viewer.
The depth information is generated automatically, as described by Liu and Vasanth,14 using both the disparity (left-to-right differences) and scene flow motion information. The larger the disparity between objects in the left and right views and the higher the degree of motion of the object, the closer the object is to the viewer. With the use of geometry and pattern matching, an accurate depth map is constructed. Our 2-D–plus–depth video results were generated with the commercially available software from 3DFusion with this algorithm embedded. In addition, an early version of software running the real-time version of this algorithm has been tested. This software takes the left-to-right output from the TrueVision system, converts it to 2-D plus depth, and then immediately provides the output signal to the autostereoscopic display. This has been run concurrently with the 3 displays in the operating theater. Please see the link below for the 2-D–plus–depth video, suitable for viewing on an autostereoscopic display, of a right frontotemporal craniotomy for clip ligation of a middle cerebral artery aneurysm (http://bcove.me/ge58nnw0). This is the same video mentioned above in side-by-side format.
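The disparity-to-depth principle above can be illustrated with a deliberately simplified sketch. This is not the Liu and Vasanth algorithm (which also exploits scene flow); it is a brute-force block-matching search, plus the gray encoding used in the depth maps of Figure 5 (lighter = closer).

```python
import numpy as np

def disparity_map(left, right, max_disp=8, block=5):
    """Brute-force block matching on grayscale views: for each pixel, find
    the horizontal shift of the right view that best matches a small window
    of the left view (minimum sum of absolute differences).
    A larger disparity means the object is closer to the viewer."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def depth_to_gray(disp, max_disp=8):
    """Encode disparity as 8-bit gray: lighter shades are closer."""
    return (255.0 * disp / max_disp).astype(np.uint8)
```

Production systems replace the exhaustive search with geometry constraints, pattern matching, and temporal (scene flow) information, as the cited work describes, but the underlying cue is the same horizontal offset.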
3-D Displays for IGS
In IGS, the surgeon compares a combination of intraoperative and preoperative planning images. These 2 modes of imaging are quite different. The intraoperative data are stored as a movie over time of exactly what is viewed through the microscope (in the case of 3-D, this is left-to-right views). The preoperative images are 3-D volume scans (eg, computed tomography, magnetic resonance, ultrasound) taken of the patient at a particular instant in time. The 3-D volume is first segmented by tissue type, eg, with thresholding or by statistical techniques,15 and then the segmented volume is rendered to make a model of the anatomy in 2-D. This rendered view uses lighting, shading, and vanishing point to present the clinician with the optical cues for understanding the image in 3-D.
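The thresholding step mentioned above can be shown in a minimal sketch (statistical segmentation techniques15 are considerably more involved): voxels whose intensities fall in a window associated with a tissue type are labeled as that tissue.

```python
import numpy as np

def threshold_segment(volume: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Label voxels whose intensity falls in [lo, hi] as the tissue of
    interest (1) and everything else as background (0)."""
    return ((volume >= lo) & (volume <= hi)).astype(np.uint8)
```

The resulting binary mask is what a surface-extraction step (eg, marching cubes in VTK) turns into the renderable anatomic model.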
These scans are typically viewed through a computer graphics tool. One of the common open-source software viewing tools is the Kitware VTK.12 This software is a common starting point for many other medical visualization tools, which means that the data structures across these tools are substantially similar. For our research, we also use this common tool as a base because the results are transferrable to other tools.
An important component of visualization software is that the depth information about a rendered scene from the 3-D scans is directly available and can be extracted from the graphics memory. For example, Figure 6 is a surface model of a skull from VTK. The rendering tool uses Z-depth (Figure 6, right) information in the process of rendering the 2-D image (Figure 6, left). These planning data can then be used for overlay of real-time surgery in an IGS application.
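Once the Z-buffer has been read back from graphics memory (VTK exposes this, eg, through vtkRenderWindow's GetZbufferData), converting it to the 2-D–plus–depth convention is straightforward. This sketch assumes a normalized Z-buffer in [0, 1] with 0 at the near plane, and inverts it to the lighter-is-closer 8-bit gray encoding used in Figure 6:

```python
import numpy as np

def zbuffer_to_depth_image(zbuf: np.ndarray) -> np.ndarray:
    """Convert a normalized z-buffer (0.0 = near plane, 1.0 = far plane or
    background) to an 8-bit depth image where lighter gray means closer."""
    z = np.clip(zbuf, 0.0, 1.0)
    return np.rint((1.0 - z) * 255).astype(np.uint8)
```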
Several authors have explored the use of VTK as a source for 3-D visualization. Portoni et al16 described use of the Visible Human data sets with stereo viewing, and Ruijters17 described angiography data with VTK driving the real-time autostereo visualization. A 3-D annotation system using a consumer game controller merged with VTK visualization on an autostereoscopic display was described by Vitanovski et al.18
The IGS application using 3-D displays has been researched and reported by a few authors. The authors of the article about data visualization processing also envision the use of 3-D displays in IGS.10 Cardiac IGS at the University of Tokyo was reported by Liao19,20 using an integral videography approach to autostereo display technology. Stereoscopic IGS was noted by TrueVision for ophthalmology applications in which cataract surgery was improved.21 Minimally invasive liver IGS was reported by Chopra et al.22 For endoscopic skull base surgery, Wasserzug et al23 reported that surgeons had improved identification of several brain structures with 3-D over 2-D.
For IGS, what remains is how to combine real-time and preoperative sources. The preoperative and intraoperative combination is a demanding computational task and the subject of ongoing research. The key technology for this combination is 3-D volume registration. Registration takes the features in one 3-D volume and tries to match those features to similar ones in the second 3-D volume. Therefore, both the intraoperative data and the preoperative 3-D scans must be in volumetric form. Although rigid 3-D manual registration was used in our experiments to show feasibility for the 3-D displays, nonrigid automatic registration is the preferred method, which tracks tissue shifts during surgery. Because of the lack of fast and accurate nonrigid methods, we are exploring this subject in future research. Finally, to obtain 3-D information from the intraoperative video, the typical process is to infer depth from the disparity (differences) in the stereo (left-to-right) source data. This is the same process used to create images for the autostereoscopic display.
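As a minimal illustration of the rigid case used in our feasibility experiments, suppose matched landmark features have already been identified in the two volumes; the rigid transform then reduces to a least-squares rotation and translation between the point sets (the Kabsch algorithm). This sketch is illustrative only; nonrigid registration, which tracks tissue shift, is substantially harder.

```python
import numpy as np

def rigid_register(src: np.ndarray, dst: np.ndarray):
    """Estimate the rotation R and translation t that best map matched 3-D
    landmarks src onto dst in the least-squares sense (Kabsch algorithm).
    src, dst: N x 3 arrays of corresponding points."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)   # cross-covariance SVD
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst_mean - r @ src_mean
    return r, t
```

In a full IGS pipeline, the corresponding features would themselves come from automatic matching between the intraoperative depth data and the preoperative volume, which is where the remaining research effort lies.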
TrueVision has reported that it is developing an intraoperative IGS overlay software package that incorporates 3 essential elements (personal communication, Burton Tripathi, vice president of product development, TrueVision Systems). The device will include high-definition 3-D visualization of the microsurgical site, a volumetric digital imaging and communications in medicine (DICOM) rendering engine similar to the VTK tool described previously, and a real-time data interface to existing IGS systems that incorporates live tracking and registration data into the rendering engine to overlay the DICOM rendering accurately with the live microsurgical view.
This research demonstrated the feasibility of this future work by creating an overlay manually in non–real time. The images depicted in Figures 7 and 8 were composed with a prototype of the software described above. In the prototype software, it is possible to substitute the live microsurgical visualization with prerecorded stereoscopic videos. Because of the postprocess nature of the overlay with video footage, the registration process is accomplished manually rather than through the IGS interface. Further work is underway to automate the process and to make the system available for real-time live surgery.
From our initial research in the 3-D displays for neurosurgery, we see the positive application of 3-D technology both in the education of neurosurgeons and in the operating theater for IGS. Figure 3 demonstrates a still anaglyph of foramen magnum meningioma surgery suitable for educational purposes. The feasibility of stereoscopy and autostereoscopy in the operating room is established.
We show initial feasibility of merging preoperative data from a visualization tool with intraoperative data of neurosurgery. We show a still image of this overlay in Figure 7 in anaglyph format, which can be viewed with red-blue glasses. Figure 8 shows the left and right views, which can be used on a stereoscopic display with polarization. Finally, for the first time, we present the left view plus a depth map (Figure 9) suitable for autostereoscopic display.
The present study is the first attempt to test the stereo camera system with the autostereo display in real time for neurosurgical application in the operating theater, as seen in Figure 2. We have given examples of the data visualization from VTK, Maya, and real-time surgery. Our next research steps will tackle the overlay and registration of 3-D preoperative volume data with a 3-D stereoscopic and autostereoscopic display of the real-time surgery for improved IGS.
We can first draw some subjective conclusions about the microscope view compared with various electronic viewing methods. For resolution, it is our experience that an HD-resolution camera is required. In addition, the camera imager optics should be integrated so that the display receives a full-width image rather than a circular field. Furthermore, at the camera level, sensitivity (in low light) should be appropriately managed to reduce noise.
We found some variation in our tested 3-D displays in terms of resolution, size, contrast ratio, and color reproduction. For a proper comparison of microscope to display resolution, the visual fields must be filled equally by the images. This can be done by matching the perceived picture heights (in some cases, coming closer to or farther from the display). The optimal distance is 2 to 3 picture heights away from the display. At this distance, an HD display of an HD-resolution camera is substantially similar to the live data. However, we have seen some resolution compromises in the systems that we tested for 3-D, discussed below. In addition, the high contrast ratios (1000:1) of modern displays bring the electronic view much closer to the microscope view but may enhance imager noise in low-light situations. We have also found that, with good imager and display technology, the color gamut and reproduction are comparable to the microscope live view.
We next discuss the applications of the various display technologies.
Neurosurgery Text and Print Training Applications
The display technique that requires printed 3-D images forces the technology toward the anaglyph stereograms for the foreseeable future. Print holography could be a longer-term technology improvement for print publication of 3-D. The benefits of anaglyph are that the technology is very inexpensive and the resolution is quite flexible, matching the best camera or graphics data in print quality. The drawback of anaglyph is the poor color reproduction, which can be important for surgical anatomy training. In addition, red-cyan glasses have some short-term color fatigue effects on the viewer, in addition to the discomfort caused by the glasses.
Neurosurgery Large-Theater Video Training Applications
The technical feasibility of various 3-D displays in neurosurgery training and the operating theater has been demonstrated, so an assessment of potential 3-D display choices is necessary. For large audiences, the display is driven by what is available at the conference site. This display technology is increasingly HDTV resolution and uses either active glasses (which shutter the left and right views alternately) or polarized glasses with a polarized projection display technology. This technology will be driven primarily by what is available for cinema. The left-to-right views from the microscope optics and left-to-right rendered data from VTK or Maya are best matched with this display technology. The benefits of this system are its synergies with consumer cinema and its HDTV or better resolution, which a large viewing room and image size require for a good viewer experience. A very large viewing and picture size may require higher than HDTV resolution, potentially up to digital cinema formats. This will require even higher-resolution cameras. The drawback of these systems is the requirement for glasses, which can cause discomfort for some viewers.
Neurosurgery Classroom or Surgical Theater Video Training Applications
Training in the operating theater or in small groups can best be served by a 42- or 52-in display. The current stereoscopic displays are LCD and can be attached to a wall or mounted on a cart. The HDTV resolution should be sufficient in this application; however, if the display uses polarization, the input is currently left-to-right (or top-to-bottom); this display technique reduces the maximum horizontal (or vertical) resolution by a factor of 2 from HDTV. A full HDTV resolution is available in the active shutter-glasses technique, but it requires more expensive glasses for viewers.
We have shown here that the autostereoscopic multiview display is a viable alternative in the operating theater as the display device. Autostereoscopic display does not require glasses and can be used with an audience of up to 20 to 30 people. In addition, the 2-D–plus–depth input has some synergy with tools such as VTK and Maya, in which the rendering task complexity is halved because the Z depth is automatically available and only 1 view is required as input to the display. The display itself generates 9 views from this input. The drawback to this autostereoscopic technology is that the views require additional native display resolution (but not additional camera resolution). In the case of the display we tested, the native HDTV LCD resolution is reduced by a factor of 3 in the horizontal and vertical dimensions. To maintain high resolution for the viewer, the LCD panel will need a denser matrix. These higher-resolution LCDs are available to be paired with the lenticular technology, so this is not a fundamental limitation but will translate to a slightly higher cost. We also witnessed that the real-time depth from disparity algorithm needs some improvement to reproduce a more accurate depth perception. Because the software (non–real-time) does not harbor this limitation, we believe that this can be remedied by future computing power improvements.
Some current IGS navigational systems use an overlay of preoperative data in the surgical microscope with a (switchable) half-mirror in the optical path. Many systems still are using 2-D material to perform this overlay. The availability of 3-D live data will improve the registration performance. Deformable, nonrigid 3-D registration will require real-time acquisition of the left-to-right microscope views. Once this advanced registration is feasible, the overlay can be done both on the display devices and in the microscope optics. In addition, any current 2-D method for IGS registration can be built on and improved with the 3-D depth data available from a camera.
Given the available modular output capability of the present 3-D systems, we also anticipate the availability of registered DICOM overlays on autostereoscopic displays. In our future project, we plan to test implementation of this capability. With the choice to use glasses-based or glasses-free displays, surgeons and educators will be empowered to use the image-processing and image-overlay capabilities in a manner that best suits their needs.
For the IGS application, more emphasis needs to be placed on display requirements for data from preoperative scans or anatomic atlases. The lengthy procedures performed in the neurosurgical operating room may make eyestrain with glasses more acute. Our research to date has shown the feasibility of pairing emerging autostereoscopic display technology with image-guided neurosurgery. Our team will be applying this feasibility to test future IGS, including merging preoperative data registered with the real-time data.
Two major applications for 3-D displays in neurosurgery have been discussed. Viewing neurosurgical techniques in 3-D has been shown to accelerate the trainees' learning curves. An autostereoscopic display of sufficient resolution has 2 advantages. First, the merging of preoperative data with real-time data can be of lower complexity because the 2 image sources can be combined in 2-D–plus–depth format and then rendered. Second, the ability to view 3-D without glasses in an autostereoscopic display eliminates eyestrain and associated discomfort.
This study was funded in part by a Multidisciplinary Undergraduate Research Initiative internal grant from Indiana University-Purdue University Indianapolis. Stereo Visions Systems, a subsidiary of 3DFusion, is currently funding Dr Christopher to conduct medical imaging research other than that cited in this article. The authors have no personal financial or institutional interest in any of the drugs, materials, or devices described in this article.
We thank 3DFusion for providing software for Dr Christopher that was used in this study. We also thank Burton Tripathi, PhD, Vice President of Product Development, TrueVision Systems, Inc, for reviewing this manuscript and advising us about technical issues. Dr Tripathi also provided several figures for the manuscript. The VTK 2D and Z-depth material was developed under the Indiana University Multidisciplinary Undergraduate Research Initiative by students Edward Murray, BS, Patrick Cavanagh, BS, Niloofar Moshanrafian, BS, and Alan Adeboye, BS, who were mentored by 2 of the authors. The discussion of commercial products in this article is unavoidable because their practical application to neurosurgery is new, and the comparison of these innovative 3-D imaging products is helpful to readers.
1. Peters TM. Image-guided surgery and therapy: current status and future. Keynote paper presented at: SPIE Visualization, Display, and Image-Guided Procedures Conference; San Diego, CA; May 28, 2001.
2. Paul P, Fleig O, Jannin P. Augmented virtuality based on stereoscopic reconstruction in multimodal image-guided neurosurgery: methods and performance evaluation. IEEE Trans Med Imaging. 2005;24(11):1500–1511.
3. William A. Stereoscopic visualization of scientific and medical content for education: seeing in 3-D. Paper presented at: 4th IEEE International Conference on eScience Proceedings; Indianapolis, IN, December 8-12, 2008.
4. Alaraj A, Lemole MG, Finkle JH, et al.. Virtual reality training in neurosurgery: review of current status and future applications. Surg Neurol Int. 2011;2:52.
5. Abildgaard A, Witwit AK, Karlsen JS, et al.. An autostereoscopic 3D display can improve visualization of 3D models from intracranial MR angiography. Int J Comput Assist Radiol Surg. 2010;5(5):549–554.
6. Henn JS, Lemole GM Jr, Ferreira MA, et al.. Interactive stereoscopic virtual reality: a new tool for neurosurgical education: technical note. J Neurosurg. 2002;96(1):144–149.
7. Ilgner J. Evaluation of stereoscopic medical video content on an autostereoscopic display for undergraduate medical education. Paper presented at: Proceedings of SPIE-IS&T Electronic Imaging; San Jose, CA, January 16-19, 2006.
8. William A. Stereoscopic visualization of scientific content for education. Paper presented at: 2007 Summer Conference Proceedings; Indianapolis, IN, June 2007.
9. William A, Bailey D. Stereoscopic visualization of scientific and medical content. Paper presented at: ACM SIGGRAPH 2006 Educators Program; Boston, MA, July 31-August 3, 2006.
10. Kersten-Oertel M, Jannin P, Collins DL. DVV: a taxonomy for mixed reality visualization in image guided surgery. IEEE Trans Vis Comput Graph. 2012;18:332–352.
12. Schroeder W, Martin K, Lorensen B. The Visualization Toolkit [computer program]. 2006.
13. Ibanez L, Schroeder W, Martin K, Lorensen B, Ng L, Cates J. The ITK Software Guide updated for version 2.4. 2005.
14. Liu FF, Vasanth F. Disparity estimation in stereo sequences using scene flow. Paper presented at: British Machine Vision Conference; London, UK, September 7-10, 2009.
16. Portoni L, Patak A, Noirard P, Grossetie J, vanBerkel C. Real-time auto-stereoscopic visualization of 3-D medical images. Paper presented at: Proceedings of the SPIE, Medical Imaging: Image Display and Visualization; San Diego, CA, February 12, 2000.
17. Ruijters D. Integrating autostereoscopic multi-view lenticular displays in minimally invasive angiography. Paper presented at: Proceedings MICCAI AMI-ARCS Workshop; New York, NY, September 6-10, 2008.
18. Vitanovski D, Schaller C, Hahn D, Daum V, Hornegger J. 3-D annotation and manipulation of medical anatomical structures. Paper presented at: SPIE Proceedings Medical Imaging 2009: Visualization, Image-Guided Procedures and Modeling; Lake Buena Vista, FL, February 27, 2009.
19. Liao H, Hata N, Dohi T. Image-guidance for cardiac surgery using dynamic autostereoscopic display system. Paper presented at: International Symposium on Biomedical Imaging: Nano to Macro; Arlington, VA, April 15-18, 2004.
20. Liao H, Hata N, Iwahara M, Sakuma I, Dohi T. An autostereoscopic display system for image-guided surgery using high-quality integral videography with high performance. Paper presented at: Medical Image Computing and Computer Assisted Intervention Conference; Montreal, Canada, November 15-18, 2003.
22. Chopra SS, Eisele RM, Denecke T, et al.. Advances in image guided conventional and minimal invasive liver surgery. Minerva Chir. 2010;65(4):463–478.
23. Wasserzug O, Margalit N, Weizman N, Fliss DM, Gil Z. Utility of a three-dimensional endoscopic system in skull base surgery. Skull Base. 2010;20(4):223–228.