Neurosurgery: January 2013, Volume 72
doi: 10.1227/NEU.0b013e3182750d26

Haptics

Virtual Reality Simulation in Neurosurgery: Technologies and Evolution

Chan, Sonny MS*; Conti, François PhD*; Salisbury, Kenneth PhD*,‡; Blevins, Nikolas H. MD§


Author Information

*Department of Computer Science, Stanford University, Stanford, California

‡Department of Surgery, Stanford University, Stanford, California

§Department of Otolaryngology, Stanford University School of Medicine, Stanford, California

Correspondence: François Conti, PhD, Artificial Intelligence Laboratory, Department of Computer Science, 353 Serra Mall, Stanford University, Stanford, CA 94305. E-mail: conti@stanford.edu

Received June 27, 2012

Accepted September 05, 2012


Abstract

Neurosurgeons are faced with the challenge of learning, planning, and performing increasingly complex surgical procedures in which there is little room for error. With improvements in computational power and advances in visual and haptic display technologies, virtual surgical environments can now offer potential benefits for surgical training, planning, and rehearsal in a safe, simulated setting. This article introduces the various classes of surgical simulators and their respective purposes through a brief survey of representative simulation systems in the context of neurosurgery. Many technical challenges currently limit the application of virtual surgical environments. Although we cannot yet expect a digital patient to be indistinguishable from reality, new developments in computational methods and related technology bring us closer every day. We recognize that the design and implementation of an immersive virtual reality surgical simulator require expert knowledge from many disciplines. This article highlights a selection of recent developments in research areas related to virtual reality simulation, including anatomic modeling, computer graphics and visualization, haptics, and physics simulation, and discusses their implications for the simulation of neurosurgery.

We are all familiar with the concept of improving our skills with practice. The idea is introduced to us early and pervades nearly every aspect of our personal and professional lives. We intuitively expect to become competent through repetition before performing complex tasks, and we expect to have access to training environments that are challenging while allowing us to make errors in relative safety. The aviation industry has set a standard, requiring commercial pilots to train extensively in flight simulators before carrying passengers. We also expect even experts to continue to practice. No one would expect a pianist to perform a piece for the first time in front of a full concert hall. However, we still expect surgical trainees to learn by operating on live patients and accept that innovative procedures are often developed in much the same way.

Neurosurgeons in particular are faced with the challenge of learning, planning, and performing increasingly complex surgical procedures in which there is little room for error. Resources for education are being eroded by increasing time and economic constraints. Animal and cadaver laboratories have been a standard for learning but have many practical limitations. The use of physical models can be similarly constrained. There is a growing awareness of the need to incorporate new techniques for optimizing surgical outcomes. With ongoing improvements in computational power, virtual reality environments now offer the potential for flexible training experiences for a broad range of users. Novices can potentially benefit from access to libraries of variable anatomy to explore, both normal and pathological, with automated instruction geared specifically to particular strengths and weaknesses. Practicing surgeons can potentially review unusual procedures and challenging anatomy in a safe, simulated environment. Even experts can potentially plan and rehearse complex interventions and assess the merits of various approaches on a virtual representation of a patient's specific anatomy.

Clearly, many technical challenges currently limit the application of virtual environments, and we cannot expect a digital patient to be indistinguishable from reality. Still, with realistic expectations and validated simulations that incorporate the essential elements needed to bridge the gap between the virtual and the physical, whether those elements are determined through experience or informed design, surgical simulators can play a vital role in the way we approach surgical training and how we prepare for challenging cases in the future.

The goal of this article is twofold. First, we wish to introduce the various types of surgical simulators relevant to neurosurgery and the different purposes they serve through a brief survey of exemplar systems. Second, we review current research in the various technological elements from a span of many disciplines that are essential to the design and implementation of an immersive virtual surgical environment.


PURPOSE AND EVOLUTION OF SIMULATION

Surgical simulation is a very active research area. Immersive virtual surgical environments, which provide rich 3-dimensional (3-D) visualization and touch feedback as they allow the surgeon to manipulate a virtual patient's anatomy, continue to evolve in their functionality and fidelity. On one hand, simulators traditionally designed for surgical training and education are beginning to provide support for incorporating patient-specific anatomy. On the other, software systems originally developed for surgical planning purposes are likewise adopting capabilities that improve the surgeon's ability to interact with and manipulate the patient models, much like virtual reality simulators. As we will see in later examples, this convergence is becoming prevalent within the field of neurosurgery.

In this section, we present a brief survey of several types of surgical simulators that one is likely to encounter in the field. It is not our intention to provide a complete catalog of existing simulators relevant to neurosurgery; toward that end, Malone and colleagues1 have already compiled an extensive review. We select and describe a few illustrative examples that characterize the purpose and attributes of the simulators in their respective classes.

Part-Task Trainers

Many neurosurgical procedures can be prohibitively complex to simulate in their entirety. However, most can be subdivided into a series of key tasks, each of which requires specific technical skills to master.2 Part-task trainers are simulators that focus on replicating one such aspect of a procedure. The purpose of these simulators is to assist the surgeon with acquiring technical, procedural, or psychomotor skills in isolation from the completion of a larger procedure.3 Because the focus is limited to an isolated technical task, customized software and hardware can be used, and validation can be simplified because objective criteria for success can be more readily defined. These advantages may allow part-task trainers to be successfully deployed relatively quickly.

A notable example of a neurosurgical part-task trainer involves training for ventriculostomy catheter placement. A number of simulators have been developed specifically for this task.2,4-6 It is perhaps a combination of the frequency of the task and its relative haptic simplicity that has led designers to identify this procedure as a high-yield target for simulation.1 Successful simulators use a stereoscopic display that is colocated with the user's hands for an added sense of immersion into the virtual world. They can provide a sense of haptic feedback, which can inform the user when he or she has successfully entered the ventricle.
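
To make the haptic cue concrete, the sketch below (not drawn from any of the cited systems) models insertion resistance as a simple 1-degree-of-freedom force profile that collapses when the catheter breaches the ventricular wall; all depths, stiffness values, and function names are hypothetical illustrations.

```python
# A hedged sketch: resistance rises as the catheter deforms the ventricular
# wall, then drops sharply on entry, producing the haptic "pop" that tells
# the user the ventricle has been reached. All parameters are hypothetical.

def catheter_resistance(depth_mm: float,
                        wall_depth_mm: float = 55.0,
                        stiffness_n_per_mm: float = 0.08,
                        puncture_window_mm: float = 1.5,
                        ventricle_drag_n: float = 0.02) -> float:
    """Return resisting force (N) along the insertion axis at a given depth."""
    if depth_mm < wall_depth_mm:
        # Gradually increasing resistance through parenchyma.
        return stiffness_n_per_mm * depth_mm
    if depth_mm < wall_depth_mm + puncture_window_mm:
        # Membrane yields: force collapses over a short distance.
        t = (depth_mm - wall_depth_mm) / puncture_window_mm
        peak = stiffness_n_per_mm * wall_depth_mm
        return peak * (1.0 - t)
    # Inside the ventricle: only slight viscous drag remains.
    return ventricle_drag_n

# The force drop past 55 mm is what the user feels as successful entry.
for d in (30.0, 54.0, 55.5, 60.0):
    print(f"{d:5.1f} mm -> {catheter_resistance(d):.3f} N")
```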

Procedure Simulators

The goal of a full-procedure simulator is to replicate the multiple sequential steps that are encountered within the operating room. Rather than focusing on a single psychomotor task, procedural simulators stress the cognitive reasoning that goes into successful completion of a surgical intervention, often incorporating physiological responses and anatomic findings that can influence a surgeon's intraoperative decisions. Users are often free to proceed in a nondeterministic manner, allowing for highly variable situations and unpredictable outcomes.

The NeuroTouch system7 developed by the National Research Council of Canada is an example of a neurosurgery procedure simulator capable of simulating a variety of craniotomy procedures, including soft-tissue manipulation such as tumor debulking and electrocautery (Figure 1). It uses stereographic rendering through an interface that mimics a binocular microscope and provides haptic feedback for a variety of virtual neurosurgical instruments. Bleeding and even brain pulsation are simulated to provide a very realistic look and feel. Such comprehensive open-ended simulations are what many people think of when they consider virtual reality simulation. The potential benefit of guiding a trainee through such a multistep procedure is considerable because it challenges both technical skills and surgical judgment. However, to be successful, such simulation environments often need to be more sophisticated and realistic than their part-task counterparts. Integrating patient-specific data is also harder because such a simulated procedure cannot be set up without considerable preprocessing of imaging data. Despite these challenges, if the goal of the simulation is primarily to teach the cognitive process, it may be reasonable to sacrifice some graphic or haptic realism. Although the sense of an immersive experience may be lost, more abstracted anatomy or physical interactions may provide a sound framework to teach procedural reasoning, especially to more novice learners.8

[Figure 1]
Surgical Rehearsal Platforms

One aspect of neurosurgical procedures that makes them more amenable to simulation than those of other specialties is the availability of relevant preoperative volumetric imaging studies. Computed tomography, magnetic resonance (MR) imaging, computed tomography and MR angiography, functional MR imaging, and positron emission tomography can be used to acquire valuable information for presurgical planning. Complementary data from several imaging studies can be merged to yield accurate insights into preoperative anatomic relationships and expected pathology.9,10 In addition, in neurosurgery, the bony framework of the skull and spine often helps to ensure that preoperative findings will represent intraoperative relationships.

It can be a considerable challenge to study 2-dimensional cross sections from varying imaging modalities and to map the data onto a mental 3-D construct. It is a further challenge to visualize, on that mental construct, a surgical approach that predicts what will be encountered in the operating room. For this reason, considerable research effort has been expended in developing methods and systems capable of fusing multimodality image data and presenting the surgically relevant information in interactive 3-D visualizations.11-13
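
One building block of such multimodality fusion is the similarity measure maximized when rigidly coregistering, for example, CT and MR volumes. The sketch below computes mutual information from a joint intensity histogram; it is a hedged illustration only, the pose-optimization loop is omitted, and the arrays and bin count are assumptions.

```python
# A minimal sketch of mutual information, a standard multimodal
# registration criterion: MI peaks when the two volumes are aligned.

import numpy as np

def mutual_information(vol_a: np.ndarray, vol_b: np.ndarray, bins: int = 32) -> float:
    """Mutual information (nats) between two aligned intensity volumes."""
    joint, _, _ = np.histogram2d(vol_a.ravel(), vol_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of volume A
    py = pxy.sum(axis=0, keepdims=True)       # marginal of volume B
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

# A registration routine perturbs the pose of one volume, resamples it, and
# keeps the transform that maximizes this score.
```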

The Dextroscope (Bracco AMT, Princeton, New Jersey) is an example of a commercial, integrated workstation designed to support surgical evaluation and decision making (Figure 1). The system can automatically coregister preoperative images of different modalities, assist with segmentation of critical anatomic structures, and present the information-fused 3-D model on a stereographic display. It allows the operator to inspect and manipulate the virtual patient model using an ergonomic handle in one hand and a stylus-shaped instrument in the other, both of which are colocated in space with the 3-D rendering. The utility and applicability of the Dextroscope in the clinical setting are well studied,10 and extensions to the system have been developed to enable planning and simulation for additional types of interventions, including intracranial aneurysm clipping14 and temporal bone surgery.15

Another promising example of a surgical rehearsal platform is a virtual endoscopy system developed for training and preoperative planning of transsphenoidal pituitary surgery.16 A steep learning curve, risk of complications, and interpatient anatomic variability are all factors that suggest that presurgical rehearsal would yield measurable benefit for this procedure. The system generates a virtual model of the patient from computed tomography, MR, and MR angiography image data and then allows the surgeon to define a path or move freely through the nasal cavity as it renders simulated endoscopic views. The mouse is used to control tissue removal, simulating to some degree the effect of a virtual rongeur. In a study with 35 patients, the ability of the system to present otherwise unpredictable anatomic variations from a surgical perspective was judged by the surgeons to be valuable for both training and preoperative planning.17

Although these systems do not attempt to reproduce the look and feel of using surgical tools to manipulate the virtual patient's anatomy, they can be properly classified as surgical simulators in the sense that they allow the surgeon to experiment with different approaches in a virtual 3-D environment and to assess potential hazards without risking harm to the patient. In many ways, they serve as potential platforms on which to develop fully immersive surgical simulators. Because the systems already provide answers to the anatomic modeling and visualization problems, what remains is to incorporate a suitable haptic interface and capabilities for simulation of instrument-tissue manipulation.

The Robotic Neurosurgery Paradigm

On occasion, new research or technology introduces a new intervention or dramatically alters how an existing surgical procedure is performed. When this happens, there is a need for surgeons, both new and experienced, to educate themselves and to train to proficiency as quickly as possible. This surge in educational demand can create an opportunity for research and development of virtual reality surgical simulators and accounts for the majority of commercial successes within this field.

We saw a prominent example of this phenomenon with the introduction of laparoscopic surgery. Laparoscopic instrument technique is difficult to master, and the need for training has spurred the development of myriad simulators. To date, virtual reality simulation for laparoscopic surgery has been rigorously validated and is seeing active use in the educational curriculum within numerous hospitals.18,19

A similar pattern is emerging with the adoption of robotic surgery. The da Vinci Surgical System (Intuitive Surgical, Sunnyvale, California) lends the surgeon enhanced dexterity and precision but requires an investment of many hours’ worth of training before the surgeon can safely perform a procedure. In response, the immersive virtual reality simulator now known as the dv-Trainer (Mimic Technologies, Seattle, Washington) was developed (Figure 2) and has been validated independently20 and proven beneficial for curriculum-based training.21

[Figure 2]

Recent developments in robotic neurosurgery22,23 may generate similar demand for simulation as these systems mature. With the neuroArm, an MR imaging-compatible, image-guided, teleoperated neurosurgical robot successfully completing its first clinical cases,24 there may soon be a need to train neurosurgeons on the use of such systems. Immersive surgical simulators developed for this application have 2 distinct advantages that favor their use over other means of training. First, the virtual environment can be designed to use the same master console or interface that the surgeon would otherwise use to operate the surgical robot. This eliminates the need to design a haptic interface that mimics the feel of a microsurgical instrument, which is a difficult task.25 The use of the actual console also reduces the concern of transferring incorrect motor skills that can exist if a simulated interface is not accurate. Borrowing from aviation terminology, it is akin to learning to fly in an exact replica of the cockpit of the aircraft. Second, a virtual reality simulation system can be much more cost-effective than having an additional installation of the robotic system or reserving dedicated training time on the clinical system.21 A virtual reality environment can provide a much richer training experience than the manipulation of inanimate objects with the surgical robot. As robotic neurosurgery becomes a reality, we will likely see a surge in computer-driven simulator development and use in the field.


CORE TECHNOLOGIES

Virtual reality surgical simulators are very complex systems consisting of core components that span many disciplines. The building blocks that go into the making of a surgical simulator can come from fields as diverse as computer science, physics, mathematics, imaging, mechanical engineering, medical illustration, and, of course, surgery. We can coarsely partition simulation technologies into 4 categories: anatomic modeling, graphics and visualization, haptics, and physics simulation (Figure 3). In this section we attempt to describe some of the most recent developments in each subject and to discuss their implications for the simulation of neurosurgery.

[Figure 3]
Anatomic Modeling

Behind every simulation is a computational model of the virtual patient’s anatomy. This model must encapsulate the geometry, visual appearance, and biomechanical behavior of the anatomic structures and tissue within the simulation.26 Computational anatomic modeling deals with the determination and application of these 3 characteristics for the virtual patient’s anatomy.27

Morphometric, optical, and physical properties of the anatomy can be measured using various imaging, photographic, or mechanical approaches, or they can be crafted by a technician-artist who serves a role similar to that of a medical illustrator for an anatomy text. Handmade and hand-tuned models have traditionally had the most desirable characteristics but can be difficult and extremely time-consuming to create.28 This is an obvious hindrance for patient-specific surgical planning or rehearsal, in which the goal is to transform preoperative image data into interactive virtual models for simulation as quickly as possible. The increasing need for scenario variation, coupled with the tedium and intensive labor required for creating synthetic scenes, has pushed simulation designers toward automatic methods for surgical scene generation,26 although such methods remain particularly challenging research problems. At present, the best results can usually be obtained by combining content acquired through imaging or sensing with artificially created elements in the virtual surgical scene.

Many anatomic structures and spatial relationships can be gleaned from preoperative image data with image segmentation algorithms.29 Automatic segmentation, however, has proved elusive; despite decades of research effort, the reliability of automatic techniques is still rather limited, and manual segmentation remains the gold standard despite its often tedious nature.28 Threshold- and edge-based segmentation can work well for structures that image with high contrast relative to the surrounding tissue (such as differentiating bone from soft tissue or air), although much of the relevant neuroanatomy does not share this property. Atlas-guided or model-based segmentation methods30,31 can use anatomic knowledge to delineate relevant structures. This process works much the same way that humans read an imaging study: having some predefined idea of what a structure "should" look like and then searching for imaging evidence that differentiates it at its expected margins. Unfortunately, such automated algorithms tend to perform poorly on anatomic abnormalities or pathology. Perhaps the best compromises for morphological modeling are semiautomated or interactive segmentation methods such as the 3-D variants of live-wire and active contour methods,32-34 in which a technician can quickly delineate the anatomy by drawing rough outlines while the computer algorithm actively uses the underlying image content to refine the shape of the model. With these user-steered methods, the operator can obtain high-quality results with minimal effort while verifying the accuracy of the model created.
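
As a concrete instance of the threshold-plus-connectivity segmentation that works for high-contrast structures, the hedged sketch below extracts the largest bright connected component from a CT volume; the Hounsfield threshold and input array are illustrative assumptions, and real pipelines add smoothing and manual correction.

```python
# Minimal threshold-based segmentation of bone from CT, assuming intensities
# in Hounsfield units. Soft-tissue boundaries lack such an intensity gap,
# which is why interactive methods like live-wire remain necessary there.

import numpy as np
from scipy import ndimage

def segment_bone(ct_hu: np.ndarray, threshold_hu: float = 300.0) -> np.ndarray:
    """Return a binary mask of the largest connected bright component."""
    mask = ct_hu > threshold_hu                 # bone is hyperdense on CT
    labels, n = ndimage.label(mask)             # 3-D connected components
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1         # keep the skull, drop specks
    return labels == largest
```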

Biomechanical and optical properties can be determined empirically as well. Miller and colleagues35,36 have measured mechanical properties of swine brain tissue in an in vivo experiment and have shown that these properties can be applied to a nonlinear, hyperviscoelastic finite-element model to realistically simulate deformations of the brain. Likewise, Howard et al37 were successful with similar measurements on in vivo human brain tissue. Although these data can be used to generate a highly accurate computational model of an average, healthy brain, such methods may be of limited use for patient-specific simulation, especially when pathology has a significant influence on the biomechanical behavior of the brain tissue. Magnetic resonance elastography,38 although a young technology with its own limitations with respect to what can be measured, is a noninvasive technique that may be a promising solution to this problem.
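
For a sense of the form such constitutive laws take, the expression below shows an Ogden-type hyperelastic strain energy of the kind commonly fitted to brain tissue; it is a hedged illustration, not a reproduction of the exact hyperviscoelastic law or coefficients of references 35 and 36.

```latex
% Ogden-type strain-energy function (illustrative form):
W = \frac{2\mu}{\alpha^{2}}\left(\lambda_{1}^{\alpha} + \lambda_{2}^{\alpha} + \lambda_{3}^{\alpha} - 3\right)
% \lambda_i are principal stretches, \mu a shear modulus, and \alpha a material
% parameter; viscoelasticity enters by letting \mu relax over time.
```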

Optical properties are classified primarily according to 2 characteristics: the scattering and absorption of light passing through the material (phase function) and surface reflectance. Visible and near-visible light scattering and absorption characteristics of human brain tissue have been determined for various purposes, including photochemical therapy,39 neurosurgical laser design,40 and optical biopsy.41 In theory, these properties can be applied to produce synthetic images with an extremely accurate appearance, although few rendering engines today are capable of incorporating volumetric scattering properties for real-time visualization of biological tissue. Surface reflectance properties, on the other hand, can yield a much more immediate benefit for virtual reality simulators. However, there are few purposes for accurate measurement of surface reflectance aside from realistic visualization; thus, we have seen comparatively few reports on the subject. In one recent effort, in vivo and in vitro observations were used to determine a bidirectional reflection distribution function for the purpose of photorealistic rendering of brain tissue,42 showing promising results.
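
One widely used analytic model for the angular scattering term mentioned above is the Henyey-Greenstein phase function, shown below as a hedged illustration; the anisotropy factor g is a tissue-dependent parameter (values near 0.9 are often quoted for brain tissue in the optics literature, stated here as an assumption rather than a result of this article).

```latex
% Henyey-Greenstein phase function: probability density of scattering
% through angle \theta, parameterized by the anisotropy factor g \in (-1, 1).
p(\theta) = \frac{1}{4\pi}\,\frac{1 - g^{2}}{\left(1 + g^{2} - 2g\cos\theta\right)^{3/2}}
```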

Computer Graphics and Visualization

Visualization and 3-D rendering technology is probably the most developed of all components that make up a neurosurgical simulator. The need to visualize complex spatial relationships between structures appearing in image data from several modalities in 3-D has driven research efforts in the display of preoperative patient data.

Volume rendering was used to depict the gyral anatomy of the brain fused with positron emission tomographic image data for preoperative planning as early as the late 1980s.43 When volumetric image data are used, direct volume rendering, as opposed to surface rendering, can generally reproduce a more faithful representation of the underlying geometric structure, although it comes at a higher computational cost.1 However, with the enhanced capabilities of modern graphics processing units, recent techniques are capable of high-quality volume renderings of multimodality images in real time.13 In a simulation environment in which volumetric data are often used in conjunction with polygonal representations of surgical instruments and other segmented structures, a rendering technique must be engineered to accommodate both representations.44
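
The core of direct volume rendering is emission-absorption compositing along each viewing ray. The hedged sketch below shows that loop for a single ray; a real renderer runs it per pixel on the GPU with trilinear sampling and gradient shading, and the transfer function here is an illustrative assumption.

```python
# Front-to-back emission-absorption compositing along one viewing ray.

import numpy as np

def transfer_function(intensity: float) -> tuple[np.ndarray, float]:
    """Map a scalar sample to (rgb emission, opacity); toy ramp for demo."""
    opacity = float(np.clip((intensity - 0.3) * 2.0, 0.0, 1.0))
    color = np.array([intensity, intensity * 0.8, 0.6])
    return color, opacity

def composite_ray(samples: np.ndarray, step: float = 1.0) -> np.ndarray:
    """Accumulate color and opacity front to back; stop when nearly opaque."""
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        c, a = transfer_function(float(s))
        a = 1.0 - (1.0 - a) ** step          # opacity correction for step size
        color += (1.0 - alpha) * a * c       # accumulate emission
        alpha += (1.0 - alpha) * a           # accumulate absorption
        if alpha > 0.99:                     # early ray termination
            break
    return color

print(composite_ray(np.linspace(0.0, 1.0, 64)))
```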

If the goal of a simulation is primarily to teach visuospatial understanding and manipulation, then an illustrative rendering of the anatomy and pathology may suffice. In fact, for such purposes, a visualization that communicates the most surgically relevant information is often preferable to a physically realistic rendering.45 If, however, the purpose is surgical training or rehearsal in an immersive virtual environment, one goal of visualization may be to produce a rendering as near in appearance as possible to what the surgeon would see in the operative field. Synthesizing such an image correctly amounts to simulating the transport of light as it reflects from surfaces or scatters through tissue,46 a computationally expensive problem. Although this type of physically based rendering has been difficult to achieve for volumetric data at interactive rates (visual update rates of 15 Hz are generally required to preserve psychomotor task performance47), recent research has shown some promising developments in using the highly parallel stream processors on modern graphics processing units to compute complex illumination models.48

In terms of graphic display hardware, a topic worth discussing is stereoscopic rendering, which provides one of the primary depth cues needed to understand complex spatial relationships in 3-D.49 Although stereoscopic computer displays have existed for many decades,50 their high cost and limited quality posed significant obstacles to the growth of virtual reality simulation. Stereoscopic rendering is now seeing a resurgence, however, driven largely by trends in the entertainment industry.51 Although an immersive virtual environment previously required spending many thousands of dollars on display technology, high-resolution 3-D displays are now readily available for commodity desktop and laptop computers, and a colocated visuo-haptic display can even be assembled for less than $1000.52 This technology is a boon for designers because it will enable easy, widespread deployment of immersive simulation software on the personal or portable computers that surgical trainees or practitioners use on a daily basis.
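
A colocated stereoscopic display renders the scene twice, once per eye, using off-axis (asymmetric-frustum) projections. The sketch below computes those frustum bounds under illustrative assumptions (screen size, viewing distance, and interpupillary distance are placeholders, not values from any cited system).

```python
# Off-axis stereo: each eye is offset by half the interpupillary distance,
# and its frustum is skewed so both views converge on the physical screen.

def off_axis_frustum(eye_x: float, screen_half_w: float, screen_half_h: float,
                     screen_dist: float, near: float):
    """Frustum bounds (l, r, b, t) at the near plane for an eye at x=eye_x."""
    scale = near / screen_dist               # project screen edges to near plane
    left = (-screen_half_w - eye_x) * scale
    right = (screen_half_w - eye_x) * scale
    bottom = -screen_half_h * scale
    top = screen_half_h * scale
    return left, right, bottom, top

ipd = 0.063                                   # ~63 mm interpupillary distance
for name, eye in (("left", -ipd / 2), ("right", +ipd / 2)):
    print(name, off_axis_frustum(eye, 0.26, 0.16, 0.60, 0.1))
# Each tuple feeds a glFrustum-style projection; rendering the scene with the
# two frusta (and correspondingly shifted cameras) yields the stereo pair.
```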

Haptic Interfaces and Force Rendering

Force feedback interfaces physically connect the operator to the simulator and display the interaction forces modeled between the tools and the different objects composing the virtual environment. Haptic devices are characterized by their range of forces, the shape of their workspace, and the number of degrees of freedom that compose the possible movements within the physical workspace of the system. In the context of virtual surgery simulators, haptic devices typically require between 3 and 6 degrees of freedom to accurately model the translational and rotational motions of the instruments (Figure 4). Additional degrees of freedom may be added, for example, at the end effector to simulate grasping or cutting capabilities. Each degree of freedom can be passive or actuated, sensed or not sensed.53 A minimum of 3 actuators is required to accurately produce 3-D force interactions encountered at the tip of a virtual instrument. Rotational torques can be rendered by placing additional motors along the kinematic structure of the device. The increase in complexity, however, comes at the cost of higher apparent inertia, higher friction, and reduced transparency perceived by the operator. Small piezoelectric transducers can also be mounted on the haptic handles to render vibrotactile feedback at frequencies beyond the nominal bandwidth of the kinematic device.54 These hybrid approaches are very effective for simulating, for instance, the high-pitched vibrations produced by a microsurgical drill.

[Figure 4]

High-fidelity haptic rendering—the problem of computing contact and reaction forces from the device position and orientation—is, in and of itself, a very challenging computational problem that is often neglected in the design of surgical simulators. After all, a haptic device with perfect fidelity still serves little purpose unless the computer simulation can command the correct forces to display. Even if a rigid- or deformable-body simulation could precisely compute the forces exerted on a virtual instrument, other considerations, including stability55 and energy conservation,53 must be addressed to achieve a truly realistic experience.

Although haptic rendering for point-based interactions in limited degrees of freedom has been a well-understood problem for more than a decade,56 techniques that allow stable and robust haptic interaction between tools and objects with complex geometries57,58 were introduced fairly recently. Methods that permit realistic manipulation of deformable bodies59,60 are fewer still. Whereas deformable-body simulation for purposes such as computer animation is concerned primarily with speed and accuracy, the challenge for haptic rendering lies in correctly coupling the simulation in a closed loop with the haptic interface and human operator in a stable and responsive manner. The limited fidelity, or outright absence, of touch feedback is a chief limitation of current simulation systems,7,17 but this problem may well be solved if these recent advances can be integrated into neurosurgery simulators.
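
The classic point-based technique referenced above is proxy-based ("god-object") rendering: a proxy point is constrained to the object surface while the device point penetrates it, and the commanded force is a virtual spring pulling the device toward the proxy. The hedged sketch below shows the 3-degree-of-freedom case for a single sphere; geometry and stiffness are illustrative assumptions.

```python
# Proxy-based haptic rendering against a sphere, evaluated once per servo
# tick (typically ~1 kHz). Inside the surface, the force points outward
# along the normal and grows with penetration depth.

import numpy as np

SPHERE_CENTER = np.array([0.0, 0.0, 0.0])
SPHERE_RADIUS = 0.05            # meters
STIFFNESS = 800.0               # N/m, kept below the device's stable maximum

def render_force(device_pos: np.ndarray) -> np.ndarray:
    """Return the force (N) to command for the current device position."""
    offset = device_pos - SPHERE_CENTER
    dist = np.linalg.norm(offset)
    if dist >= SPHERE_RADIUS or dist < 1e-9:
        return np.zeros(3)                       # free space (or degenerate)
    proxy = SPHERE_CENTER + offset / dist * SPHERE_RADIUS  # nearest surface point
    return STIFFNESS * (proxy - device_pos)      # spring couples device to proxy

print(render_force(np.array([0.0, 0.0, 0.04])))  # 1 cm penetration -> 8 N outward
```

A deformable-body version moves the surface itself in response to the proxy, which is precisely where the closed-loop stability challenge described above arises.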

Physics Simulation

Goals of physics simulation include replicating the biomechanical response of the anatomy in real time as it is manipulated by the surgeon and simulating the behavior of fluids for processes such as bleeding or irrigation. Accurate and high-resolution simulation of the deformation of tissue is a formidable computational challenge. Although physical simulation has been well studied in the field of computer graphics and animation, the unique requirements of a surgical simulator can make the direct application of many of these techniques difficult. Detailed models and high-fidelity techniques used for feature film animation are often impractical for interactive simulation, whereas the approximations used in video games are generally too coarse for meaningful surgical simulation. Some of the most efficient techniques rely on precomputing possible deformations of the model61,62 but cannot accommodate structural changes to the anatomy that occur, for example, when the surgeon makes an incision or performs a partial resection. Of the various approaches, the 2 most practical approaches for surgery simulation, as history would indicate, are mass-spring models and the finite-element method.

Mass-spring methods, which model the anatomy as a discrete set of point masses connected by virtual springs, have been popular for surgical simulation mainly because of their modest computational cost and relative simplicity of implementation.63 Because springs are inherently 1-dimensional entities, a mass-spring network cannot directly capture 3-D behaviors such as volume conservation and prevention of volume inversion. Measured biomechanical properties cannot be readily applied to mass-spring models, which can make calibrating them to produce physically plausible results a difficult task.64
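
The simplicity the text credits to these methods is visible in code: Hooke's-law forces along each spring plus an explicit time step. The hedged sketch below uses a toy 2-D triangle; topology, masses, and constants are assumptions, and real simulators add damping tuning, volume-preservation corrections, and collision response.

```python
# Minimal mass-spring network with symplectic Euler integration.

import numpy as np

positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])   # 3 point masses
velocities = np.zeros_like(positions)
springs = [(0, 1), (1, 2), (2, 0)]                            # triangle mesh
rest_len = [np.linalg.norm(positions[i] - positions[j]) for i, j in springs]
MASS, K, DAMPING, DT = 0.01, 50.0, 0.02, 1e-3

def step():
    forces = np.zeros_like(positions)
    for (i, j), L0 in zip(springs, rest_len):
        d = positions[j] - positions[i]
        length = np.linalg.norm(d)
        f = K * (length - L0) * d / length    # Hooke's law along the spring
        forces[i] += f
        forces[j] -= f
    forces -= DAMPING * velocities            # simple viscous damping
    velocities[:] += DT * forces / MASS       # symplectic Euler
    positions[:] += DT * velocities

positions[2] += [0.0, 0.1]                    # poke the top vertex
for _ in range(1000):
    step()
print(positions)                              # network relaxes back toward rest
```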

The finite-element method65 is an approach often used in structural engineering to simulate the deformation of solids based on principles of continuum mechanics. The process involves discretizing the solid into small, simple geometric elements and then solving constitutive equations for the stresses and strains within the elements (Figure 5). Depending on the accuracy of the constitutive model and the resolution of the meshes used, finite-element analysis normally produces very accurate results. The primary obstacle precluding its use in immersive surgical simulation is the sheer amount of computational power required to solve the systems of equations.63 However, although previous examples of real-time finite-element simulation used restricted linear models and captured a very limited amount of detail, recent approaches66,67 designed to take full advantage of the vast parallel computing capacity available on modern graphics processors indicate that a promising solution may be on the horizon.
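
The discretize-assemble-solve recipe is easiest to see in one dimension. The hedged sketch below solves a linear elastic rod; material and load values are illustrative assumptions, and 3-D tissue models use tetrahedral elements and nonlinear constitutive laws but follow the same structure.

```python
# 1-D linear FEM: assemble the global stiffness matrix from per-element
# stiffness blocks, apply a boundary condition, and solve K u = f.

import numpy as np

N_ELEMENTS = 10
LENGTH, AREA, YOUNG = 1.0, 1e-4, 5e3          # m, m^2, Pa (toy soft material)
h = LENGTH / N_ELEMENTS
n_nodes = N_ELEMENTS + 1

# Element stiffness for a 2-node linear bar: (EA/h) * [[1,-1],[-1,1]]
ke = (YOUNG * AREA / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])

K = np.zeros((n_nodes, n_nodes))
for e in range(N_ELEMENTS):                   # assemble by scattering ke
    K[e:e + 2, e:e + 2] += ke

f = np.zeros(n_nodes)
f[-1] = 0.01                                  # 10 mN pulling on the free end

u = np.zeros(n_nodes)                         # fix node 0 (Dirichlet condition)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])     # solve the reduced system
print(u)                                      # displacement grows along the rod
```

The computational burden the text describes comes from this solve step: a fine 3-D mesh yields hundreds of thousands of unknowns, which is why GPU-parallel formulations66,67 matter.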

[Figure 5]

Interactive simulation of blood, water, and other fluids can also add a substantial degree of realism to a virtual surgical environment.68 Fluid simulation approaches fall into 2 categories: Eulerian methods, in which physical quantities are advected on a fixed grid in a volume of space,69 and Lagrangian methods, in which these quantities are instead carried along with the fluid flow. Because fluid is seldom constrained to a small volume within a surgical scene, forms of the latter approach are usually more efficient and popular for surgical simulation.68,70 Smoothed-particle hydrodynamics71 is a Lagrangian approach that uses a set of moving particles to carry physical quantities such as density, pressure, and velocity as they are computed in the simulation (Figure 5). Naturally, use of a greater number of particles leads to a more accurate simulation, but as with the simulation of solids, computational cost quickly becomes a limitation. Again harnessing the computational power of graphics processing units, recent efforts to accelerate smoothed-particle hydrodynamics fluid simulation have yielded some impressive results.72,73
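
The central SPH step is estimating each particle's density by summing a smoothing kernel over its neighbors, then deriving pressure from that density. The hedged sketch below uses the common poly6 kernel with toy, water-like constants; a production simulator would use spatial hashing for neighbor search rather than the all-pairs sum shown here.

```python
# SPH density estimation and a simple equation of state.

import numpy as np

H = 0.045                                     # smoothing radius (m)
PARTICLE_MASS = 0.02                          # kg
REST_DENSITY, GAS_K = 1000.0, 200.0           # water-like equation of state
POLY6 = 315.0 / (64.0 * np.pi * H**9)

def densities(pos: np.ndarray) -> np.ndarray:
    """rho_i = sum_j m * W_poly6(|r_i - r_j|, H) over neighbors within H."""
    diff = pos[:, None, :] - pos[None, :, :]
    r2 = np.sum(diff * diff, axis=-1)
    w = np.where(r2 < H * H, POLY6 * (H * H - r2) ** 3, 0.0)
    return PARTICLE_MASS * w.sum(axis=1)

positions = np.random.default_rng(0).uniform(0, 0.1, size=(200, 3))
rho = densities(positions)
pressure = GAS_K * (rho - REST_DENSITY)       # linearized equation of state
print(rho.mean(), pressure.mean())
# Pressure gradients (evaluated with a "spiky" kernel) then accelerate
# particles from high- to low-density regions; GPU versions parallelize
# the neighbor sums, which is where references 72,73 gain their speed.
```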


DISCUSSION

There are numerous opportunities for virtual reality simulation to take hold within the field of neurosurgery. We have seen that depending on the ultimate purpose of the simulator—whether it will be used for surgical training, planning, rehearsal, or some combination thereof—the technology and design requirements may differ greatly. Research in the various technologies that support surgical simulation in a virtual environment is currently very active, being driven largely by needs in other fields. As these technologies are integrated into surgical simulation systems, they will continue to improve the capability and fidelity of virtual reality simulators. There is an ever-growing opportunity for designers and developers to identify procedures or tasks that can be simulated and would benefit from simulation and to create, validate, and deploy new simulators to improve surgical education and patient care.

Although we would like to predict that in 5 or 10 years’ time, neurosurgeons will all be trained and certified on surgical simulators and may even rehearse complex or rare cases on virtual, patient-specific models before heading to the operating room, we realize that this was predicted many times, many years ago.74 Unexpected technological hurdles that limit the realism or possibility of what can be simulated can lead to frustration and dampen the enthusiasm for simulated environments. Similarly, logistical hurdles, economic realities, and entrenched traditions can also hinder the acceptance and use of those simulators that have been developed. It is incumbent on the developers and early adopters of simulation technology to systematically assess its use and validate its benefits. We are beginning to see the results of such rigorous studies in other surgical specialties that indicate a measurable benefit from the application of simulation environments.18,19,75 We are optimistic that with additional technological development, thoughtful application, and rigorous validation, the transition to simulation-based surgical education with virtual environments is finally on the horizon.

Disclosures

S. Chan, Dr Salisbury, and Dr Blevins are supported by National Institutes of Health grant 5 R01 LM01067302. Dr Conti is supported by a research fellowship from the Honda Company and is affiliated with Force Dimension, a company that designs and manufactures high-precision force-feedback devices for research, industrial, and medical applications. The other authors have no personal financial or institutional interest in any of the drugs, materials, or devices described in this article.


REFERENCES

1. Malone HR, Syed ON, Downes MS, D’Ambrosio AL, Quest DO, Kaiser MG. Simulation in neurosurgery: a review of computer-based simulation environments and their surgical applications. Neurosurgery. 2010;67(4):1105–1116.

2. Lemole GM Jr, Banerjee PP, Luciano C, Neckrysh S, Charbel FT. Virtual reality in neurosurgical education: part-task ventriculostomy simulation with dynamic visual and haptic feedback. Neurosurgery. 2007;61(1):142–149.

3. Bradley P. The history of simulation in medical education and possible future directions. Med Educ. 2006;40(3):254–262.

4. Brown N, Natsupakpong S, Johannsen S, et al. Virtual environment-based training simulator for endoscopic third ventriculostomy. Stud Health Technol Inform. 2006;119:73–75.

5. Luciano C, Banerjee P, Lemole GM Jr, Charbel F. Second generation haptic ventriculostomy simulator using the ImmersiveTouch system. Stud Health Technol Inform. 2006;119:343–348.

6. Çakmak H, Maaß H, Trantakis C, Strauß G, Nowatius E, Kühnapfel U. Haptic ventriculostomy simulation in a grid environment. Comput Animat Virtual Worlds. 2009;20(1):25–38.

7. Delorme S, Laroche D, DiRaddo R, Del Maestro RF. NeuroTouch: a physics-based virtual simulator for cranial microneurosurgery training. Neurosurgery. 2012;71(1 suppl operative):ons32–ons42.

8. Dunkin B, Adrales GL, Apelgren K, Mellinger JD. Surgical simulation: a current review. Surg Endosc. 2007;21(3):357–366.

9. DiMaio SP, Archip N, Hata N, et al. Image-guided neurosurgery at Brigham and Women’s Hospital. IEEE Eng Med Biol Mag. 2006;25(5):67–73.

10. Stadie AT, Kockro RA, Reisch R, et al. Virtual reality system for planning minimally invasive neurosurgery: technical note. J Neurosurg. 2008;108(2):382–394.

11. Kikinis R, Gleason PL, Moriarty TM, et al. Computer-assisted interactive three-dimensional planning for neurosurgical procedures. Neurosurgery. 1996;38(4):640–651.

12. Gering DT, Nabavi A, Kikinis R, et al. An integrated visualization system for surgical planning and guidance using image fusion and an open MR. J Magn Reson Imaging. 2001;13(6):967–975.

13. Beyer J, Hadwiger M, Wolfsberger S, Bühler K. High-quality multimodal volume rendering for preoperative planning of neurosurgical interventions. IEEE Trans Vis Comput Graph. 2007;13(6):1696–1703.

14. Wong GK, Zhu CX, Ahuja AT, Poon WS. Craniotomy and clipping of intracranial aneurysm in a stereoscopic virtual reality environment. Neurosurgery. 2007;61(3):564–569.

15. Kockro RA, Hwang PY. Virtual temporal bone: an interactive 3-dimensional learning aid for cranial base surgery. Neurosurgery. 2009;64(5 suppl 2):216–229.

16. Neubauer A, Wolfsberger S, Forster MT, Mroz L, Wegenkittl R, Bühler K. Advanced virtual endoscopic pituitary surgery. IEEE Trans Vis Comput Graph. 2005;11(5):497–507.

17. Wolfsberger S, Neubauer A, Bühler K, et al. Advanced virtual endoscopy for endoscopic transsphenoidal pituitary surgery. Neurosurgery. 2006;59(5):1001–1010.

18. Grantcharov TP, Kristiansen VB, Bendix J, Bardram L, Rosenberg J, Funch-Jensen P. Randomized clinical trial of virtual reality simulation for laparoscopic skills training. Br J Surg. 2004;91(2):146–150.

19. Gurusamy K, Aggarwal R, Palanivelu L, Davidson BR. Systematic review of randomized controlled trials on the effectiveness of virtual reality training for laparoscopic surgery. Br J Surg. 2008;95(9):1088–1097.

20. Lee JY, Mucksavage P, Kerbl DC, Huynh VB, Etafy M, McDougall EM. Validation study of a virtual reality robotic simulator: role as an assessment tool? J Urol. 2012;187(3):998–1002.

21. Korets R, Mues AC, Graversen JA, et al. Validating the use of the Mimic dV-trainer for robotic surgery skill acquisition among urology residents. Urology. 2011;78(6):1326–1330.

22. McBeth PB, Louw DF, Rizun PR, Sutherland GR. Robotics in neurosurgery. Am J Surg. 2004;188(4A suppl):68S–75S.

23. Nathoo N, Çavuşoğlu MC, Vogelbaum MA, Barnett GH. In touch with robotics: neurosurgery for the future. Neurosurgery. 2005;56(3):421–433.

24. Pandya S, Motkoski JW, Serrano-Almeida C, Greer AD, Latour I, Sutherland GR. Advancing neurosurgery with image-guided robotics. J Neurosurg. 2009;111(6):1141–1149.

25. Spicer MA, Apuzzo ML. Virtual reality surgery: neurosurgery and the contemporary landscape. Neurosurgery. 2003;52(3):489–498.

26. Harders M. Surgical Scene Generation for Virtual Reality-based Training in Medicine. London, UK: Springer-Verlag; 2008.

27. Delingette H, Pennec X, Soler L, Marescaux J, Ayache N. Computational models for image-guided robot-assisted and simulated medical interventions. Proc IEEE. 2006;94(9):1678–1688.

28. Riener R, Harders M. Medical model generation. In: Virtual Reality in Medicine. London, UK: Springer-Verlag; 2012:225–264.

29. Niessen W. Model-based image segmentation for image-guided interventions. In: Image-Guided Interventions. New York, NY: Springer; 2008:219–239.

30. Collins DL, Holmes CJ, Peters TM. Automatic 3-D model-based neuroanatomical segmentation. Hum Brain. 1995;3(3):190–208.

31. Heimann T, Meinzer HP. Statistical shape models for 3D medical image segmentation: a review. Med Image Anal. 2009;13(4):543–563.

32. Falcão AX, Udupa JK. A 3D generalization of user-steered live-wire segmentation. Med Image Anal. 2000;4(4):389–402.

33. Poon M, Hamarneh G, Abugharbieh R. Efficient interactive 3D Livewire segmentation of complex objects with arbitrary topology. Comput Med Imaging Graph. 2008;32(8):639–650.

34. Heckel F, Konrad O, Hahn HK, Peitgen HO. Interactive 3D medical image segmentation with energy-minimizing implicit functions. Comput Graph. 2011;35:275–287.

35. Miller K. Constitutive model of brain tissue suitable for finite element analysis of surgical procedures. J Biomech. 1999;32(5):531–537.

36. Miller K, Chinzei K, Orssengo G, Bednarz P. Mechanical properties of brain tissue in-vivo: experiment and computer simulation. J Biomech. 2000;33(11):1369–1376.

37. Howard MA 3rd, Abkes BA, Ollendieck MC, Noh MD, Ritter RC, Gillies GT. Measurement of the force required to move a neurosurgical probe through in vivo human brain tissue. IEEE Trans Biomed Eng. 1999;46(7):891–894.

38. Kruse SA, Rose GH, Glaser KJ, et al. Magnetic resonance elastography of the brain. Neuroimage. 2008;39(1):231–237.

39. Svaasand LO, Ellingsen R. Optical properties of human brain. Photochem Photobiol. 1983;38(3):293–299.

40. Eggert HR, Blazek V. Optical properties of human brain tissue, meninges, and brain tumors in the spectral range of 200 to 900 nm. Neurosurgery. 1987;21(4):459–464.

41. Bevilacqua F, Piguet D, Marquet P, Gross JD, Tromberg BJ, Depeursinge C. In vivo local determination of tissue optical properties: applications to human brain. Appl Opt. 1999;38(22):4939–4950.

42. ap Cenydd L, Walter A, John NW, Bloj M, Phillips N. Realistic visualization of living brain tissue. Stud Health Technol Inform. 2011;163:105–111.

43. Levin DN, Hu XP, Tan KK, et al. The brain: integrated three-dimensional display of MR and PET images. Radiology. 1989;172(3):783–789.

44. Kratz A, Hadwiger M, Fuhrmann A. GPU-based high-quality volume rendering for virtual environments. In: Proceedings from the International Workshop on Augmented Environments for Medical Imaging Including Augmented Reality in Computer-aided Surgery (AMI-ARCS); Copenhagen, Denmark, 2006.

45. Preim B, Bartz D. Visualization in Medicine: Theory, Algorithms, and Applications. Burlington, MA: Morgan Kaufmann; 2007.

46. Jensen HW, Marschner SR, Levoy M, Hanrahan P. A practical model for subsurface light transport. In: Proceedings from the International Conference on Computer Graphics and Interactive Techniques (ACM SIGGRAPH); Los Angeles, California, 2001:511-518.

47. Chen JYC, Thropp JE. Review of low frame rate effects on human performance. IEEE Trans Systems Man Cybernetics. 2007;37(6):1063–1076.

48. Ropinski T, Doring C, Rezk-Salama C. Interactive volumetric lighting simulating scattering and shadowing. In: Proceedings from the IEEE Pacific Visualization Symposium (PacificVis); Taipei, Taiwan, 2010:169-176.

49. Henn JS, Lemole GM Jr, Ferreira MAT, et al. Interactive stereoscopic virtual reality: a new tool for neurosurgical education. J Neurosurg. 2002;96(1):144–149.

50. McAllister DF. Stereo Computer Graphics and Other True 3D Technologies. Princeton, NJ: Princeton University Press; 1993.

51. Smolic A, Mueller K, Merkle P, Kauff P, Wiegand T. An overview of available and emerging 3D video formats and depth enhanced stereo as efficient generic solution. In: Proceedings from the Picture Coding Symposium; Chicago, Illinois, May 6-9, 2009:1-4.

52. Forsslund J, Flodin M. Design of a sub-€1000 stereo vision enabled co-located multi-modal display. In: Proceedings from the SIGRAD EuroGraphics Swedish Chapter; Gothenburg, Sweden, 2009.

53. Barbagli F, Salisbury K. The effect of sensor/actuator asymmetries in haptic interfaces. In: Proceedings from the IEEE Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems; Los Angeles, California, 2003:140-147.

54. McMahan W, Kuchenbecker KJ. Haptic display of realistic tool contact via dynamically compensated control of a dedicated actuator. In: Proceedings from the IEEE/RSJ International Conference on Intelligent Robots and Systems; St. Louis, Missouri, 2009:3170-3177.

55. Adams RJ, Hannaford B. Stable haptic interaction with virtual environments. IEEE Trans Robot Automat. 1999;15(3):465–474.

56. Salisbury K, Conti F, Barbagli F. Haptic rendering: introductory concepts. IEEE Comput Graph Appl. 2004;24(2):24–32.

57. Otaduy MA, Lin MC. A modular haptic rendering algorithm for stable and transparent 6-DOF manipulation. IEEE Trans Robot. 2006;22(4):751–762.

58. Ortega M, Redon S, Coquillart S. A six degree-of-freedom god-object method for haptic display of rigid bodies with surface properties. IEEE Trans Vis Comput Graph. 2007;13(3):458–469.

59. Duriez C, Dubois F, Kheddar A, Andriot C. Realistic haptic rendering of interacting deformable objects in virtual environments. IEEE Trans Vis Comput Graph. 2006;12(1):36–47.

60. Barbič J, James DL. Six-dof haptic rendering of contact between geometrically complex reduced deformable models. IEEE Trans Haptics. 2008;1(1):39–52.

61. Cotin S, Delingette H, Ayache N. Real-time elastic deformations of soft tissues for surgery simulation. IEEE Trans Vis Comput Graph. 1999;5(1):62–73.

62. James DL, Pai DK. DyRT: dynamic response textures for real time deformation simulation with graphics hardware. ACM Trans Graph. 2002;21:582–585.

63. Meier U, López O, Monserrat C, Juan MC, Alcañiz M. Real-time deformable models for surgery simulation: a survey. Comput Methods Programs Biomed. 2005;77(3):183–197.

64. Morris D, Salisbury K. Automatic preparation, calibration, and simulation of deformable objects. Comput Methods Biomech Biomed Eng. 2008;11(3):263–279.

65. Zienkiewicz OC. The Finite Element Method: Its Basis and Fundamentals. Burlington, MA: Elsevier; 2005.

66. Taylor ZA, Cheng M, Ourselin S. High-speed nonlinear finite element analysis for surgical simulation using graphics processing units. IEEE Trans Med Imaging. 2008;27(5):650–663.

67. Joldes GR, Wittek A, Miller K. Real-time nonlinear finite element computations on GPU: application to neurosurgical simulation. Comput Methods Appl Mech Eng. 2010;199(49-52):3305–3314.

68. Müller M, Schirm S, Teschner M. Interactive blood simulation for virtual surgery based on smoothed particle hydrodynamics. Technol Health Care. 2004;12(1):25–31.

69. Stam J. Stable fluids. In: Proceedings from the International Conference on Computer Graphics and Interactive Techniques (ACM SIGGRAPH); 1999:121-128.

70. Liu W, Sewell C, Blevins NH, Salisbury K, Bodin K, Hjelte N. Representing fluid with smoothed particle hydrodynamics in a cranial base simulator. Stud Health Technol Inform. 2008;132:257–259.

71. Monaghan JJ. Smoothed particle hydrodynamics. Rep Prog Phys. 2005;68:1703–1759.

72. Harada T, Koshizuka S. Smoothed particle hydrodynamics on GPUs. In: Proceedings from Computer Graphics International; Petropolis, Brazil, June 2007:1-8.

73. Goswami P, Schlegel P, Solenthaler B, Pajarola R. Interactive SPH simulation and rendering on the GPU. In: Proceedings from Eurographics/ACM SIGGRAPH Symposium on Computer Animation; 2010:55-64.

74. Satava RM. Virtual reality surgical simulator. The first steps. Surg Endosc. 1993;7(3):203–205.

75. Fried MP, Sadoughi B, Gibber MJ, et al. From virtual reality to the operating room: the endoscopic sinus surgery simulator experiment. Otolaryngol Head Neck Surg. 2010;142(2):202–207.

Keywords:

Anatomical modeling; Computer haptics; Interactive visualization; Medical devices; Surgical education; Surgical rehearsal; Surgical simulation


Copyright © by the Congress of Neurological Surgeons
