There seems to be an almost universal desire among trainees, instructors, and designers for more realistic, high-fidelity simulators. High-fidelity simulators look and function more like systems in the real world. Many instructors believe that more realistic simulators result in better training. For designers, developing systems that more closely approximate reality presents more rewarding engineering challenges. This mindset permeates almost all domains that use simulation for training, including aviation, military systems, power plant operations, and medicine. However, high fidelity does not necessarily produce high-performance skills.
Recently, 2 articles appeared in Simulation in Healthcare addressing the relationship between simulator fidelity and educational goals in medical applications. Dieckmann et al offered an in-depth theoretical discussion about realism and its effects on the social experiences surrounding mannequin-based simulation exercises.1 In particular, they appealed to Laucken's 3 modes of thinking and discussed simulation in physical, semantical, and phenomenal terms.2 Briefly, the physical mode concerns the characteristics of simulation that can be described in objective, physical terms (eg, color, texture, duration). The semantical mode concerns underlying concepts or meaning in simulation that transcend physical characteristics (eg, elevated heart rate can be conveyed through a display device or direct contact with a mannequin). Last, the phenomenal mode concerns the unique emotional and metacognitive thoughts experienced by each individual engaged in the simulation. Rudolph et al also discuss the elements of fidelity necessary to allow trainees to become engaged in the exercises and suspend disbelief.3 Collectively, these authors suggest that different levels of fidelity may be needed depending upon the goals and experiences intended for groups of trainees.
In both articles, the authors focused primarily on the semantical and phenomenal modes of thinking related to a particular type of medical simulation: mannequin-based training. The scope of simulator-based training in medicine, however, is much broader than the type of system and scenarios described by Dieckmann et al and Rudolph et al. For instance, Gaba4 recently described 11 dimensions of simulation applications. Some of these dimensions concern the unit of participation (individual, team), the health care domain (procedural-surgery, dynamic high-hazard ICU), the purpose of the simulation activity (training, assessment), the type of learning (technical skills, decision-making), and the type of technology (virtual reality, electronic patient). The mannequin-based exercises discussed by Dieckmann et al and Rudolph et al use an electronic patient and emphasize decision making within teams in a dynamic high-hazard domain. On the other hand, virtual reality (VR)-based training systems are another form of simulation frequently used in medicine at the individual level to develop the technical skills needed for specific procedures. VR training systems are qualitatively different than mannequin-based systems. They serve different purposes and the fidelity considerations associated with one may not be entirely applicable to the other.
Fundamentally, a simulator-based training system is only one component of an educational curriculum designed to promote learning and skill development. Gagné et al described 5 primary categories of skills or learned capabilities.5 Intellectual skills allow individuals to perform symbolic manipulations and solve problems using concepts and rules. Individuals use cognitive strategies to control their learning processes. Verbal information comprises one's world knowledge of facts. Attitudes are emotionally laden beliefs that affect one's choice of action. Last, psychomotor skills are the patterns and programs of underlying muscular movements needed to execute one's actions.
The simulation systems discussed by Dieckmann et al and Rudolph et al address the first 4 skill categories. Trainees use their verbal information and intellectual skills to solve problems and make decisions about procedures and patient care. After-action analyses and discussions of individual and team performance help to mold cognitive strategies. Last, it can be argued that the emotional experiences generated by these exercises shape the development of attitudes about patient care and one's own abilities.
The development of psychomotor skills, however, is qualitatively different than the other 4 types of skills. For many psychomotor skills, extended and repetitive practice is necessary to achieve some degree of proficiency. Performance feedback is needed at regular intervals to optimize skill acquisition. Moreover, expert-level skills can require thousands of practice trials, and even then, not everyone will reach that level.6,7 Practice is tedious and instructors must often resort to standard forms of extrinsic motivation (eg, rewards, special privileges) to keep trainees engaged.6,7
As such, the goal of this article is to consider the nature of simulator fidelity as it pertains to VR training systems. In particular, we address characteristics of simulation at the physical level that impact performance. We also argue that optimal learning can be achieved only when the level of fidelity is matched to the training objectives and trainee characteristics. We support these claims with examples of problems that often occur when using high-fidelity VR systems and offer explanations tied to fundamental perceptual processes.
Fidelity can be thought of as the faithfulness of a simulation. Although the term is often applied to training devices, it has also been used in reference to equipment and the environment. Hayes and Singer describe simulation fidelity as “the degree of similarity between the training situation and the operational situation which is simulated” (p. 50).8 Fidelity can also be described in physical and functional terms. Physical fidelity concerns the degree of similarity between the equipment, materials, displays, and controls used in the operational environment and those available in the simulation. Functional fidelity concerns how the processes are implemented (ie, how information requirements are mapped onto response requirements). For example, a VR-based laparoscopic simulator that requires users to select and activate different instruments from a display menu deviates from how instruments are exchanged in the actual operational environment and therefore lacks a degree of functional fidelity. Further, both the functional and physical aspects of fidelity mentioned by Hayes and Singer would be encompassed within the “physical mode” described by Dieckmann et al.1
From an historical perspective, the rationale for higher fidelity systems is often attributed to Thorndike's notions regarding identical elements in learning.9 Specifically, Thorndike argued that the transfer of skills from the training environment to the operational environment would be maximized when elements in the training environment perfectly match those in the operational environment.
Although the identical elements idea has intuitive appeal, years of simulator-based research have not supported it. Many of the early studies produced equivocal results.8 However, as more data became available, it became clear that improvements in performance were better explained by more general principles of learning that address knowledge of results, practice, methods, and instructional guidance.10,11 In addition, participants at a workshop sponsored by the U.S. Army Research Institute, convened to determine how much fidelity is needed to attain satisfactory levels of training, concluded that the efficacy of a simulator is inherently tied to an understanding of the task requirements and the instructional context.8 Recent evidence obtained in the medical domain is consistent with these ideas. In a review, Issenberg et al examined over 100 articles for features of high-fidelity medical simulations that lead to effective learning.12 The 3 most frequently cited sources of efficacy, mentioned in 25% to 47% of the articles, were educational feedback, repetitive practice, and curriculum integration. All of these are characteristics of general learning, not simulator fidelity.
WHY HIGH FIDELITY IS NOT ALWAYS BETTER
High-fidelity simulator systems do not necessarily result in better learning, and recent research has shown that they can also interfere with performance and impede learning.8 In fact, Smallman and St. John have used the term naive realism to describe the desire among users and developers for higher fidelity despite contrary evidence regarding its efficacy.13 There are many reasons why high-fidelity VR systems do not always lead to better learning. In the following sections, we discuss several of these reasons, related to distortions of perception, distortions in virtual displays, and the multimodal nature of perception.
Distortions of Perception
To understand why high-fidelity systems are not always better, one needs to appreciate some fundamental principles of how we perceive and interpret information in natural and simulated environments. First, individuals can be incredibly inaccurate when making judgments about even the most basic stimulus characteristics.14 Psychologists have known for almost 150 years that our perceptual experience of changes in stimulus magnitude does not correspond with the actual physical changes in stimuli.15 Further, the functions relating perceived magnitude to physical magnitude differ for each sensory modality.16 For most modalities, however, as stimuli become more intense, progressively greater increases in physical magnitude are needed for those increases to be perceived as equal steps in intensity.
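This compressive relationship is captured by Stevens' power law,16 which relates perceived magnitude S to physical intensity I as S = kI^n. The sketch below is illustrative only; the exponent value is a common textbook estimate for brightness, not a parameter taken from the studies cited here.

```python
# Stevens' power law: perceived magnitude S = k * I**n.
# The exponent below is a textbook estimate for brightness
# (an illustrative assumption): with n < 1, equal physical
# increments produce progressively smaller perceived increments.

def perceived_magnitude(intensity, exponent, k=1.0):
    """Stevens' power-law transform of a physical intensity."""
    return k * intensity ** exponent

# Stepping intensity by equal amounts (+1 each time) yields
# shrinking perceived steps when the exponent is below 1.
for i in (1, 2, 3, 4):
    print(i, round(perceived_magnitude(i, 0.33), 2))
```

In other words, a display that reproduces physical luminance values faithfully still does not reproduce how those values are experienced.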
We also have great difficulty judging even the most basic of spatial relationships. In a recent study, observers were asked to stand outdoors and direct the placement of vertical poles to form the vertices of an equilateral triangle under several different viewing distances and conditions.17 The results showed that no single observer could produce an accurate triangle across conditions. Moreover, psychologists and artists alike have known for hundreds of years that people are susceptible to a wide variety of illusions surrounding lighting, shading, size, depth, and orientation.18
One reason we have difficulty making veridical judgments about stimulus characteristics lies with a very important and highly adaptive quality of mind: our experience of physical stimuli is actively changed to provide a less variable impression of the world.14 For instance, the color of our clothes seems to remain the same despite viewing conditions that change throughout the day (eg, morning light, daylight, evening light, fluorescent light). Each lighting condition produces rather dramatic changes in the physical colors reflected from the clothes. We rarely notice these differences, that is, until we see photographs taken under different sources of light with what seem to be inaccurate colors.19 The “inappropriate” colors in the photographs are actually reasonable representations of the colors that were present and recorded by the film, but not “color corrected” by the mind. Similarly, the size of objects seems to remain constant as they approach or recede. Thus, despite significant changes in the visual angle of an object projected onto our retinas, we do not perceive the object itself to grow or shrink. Last, objects seem to retain their shape despite differences in viewing angle.20 For example, in Figure 1, the image of the window on the left was taken head on and appears rectangular. However, we often view objects from an angle. Thus, the window on the right is still perceived to be a rectangle even though its image is clearly trapezoidal.
Distorted Reality in Virtual Displays
These “tricks” performed by the mind minimize changes among stimuli in our environment and help maintain some semblance of regularity or constancy in our perception of the world. However, they are not well represented in virtual displays. In fact, images in virtual displays would need to be purposefully distorted to appear consistent with our mind's view. Because most virtual displays are not “corrected” in this way, they appear distorted and unrealistic. Consequently, users often have problems making distance judgments with realistic 3-dimensional displays. These judgment errors can, in turn, lead to poorer identification of 3-dimensional objects within 3-dimensional displays.13 Thus, higher levels of fidelity can increase the effort needed to extract critical information from displays, rendering users more susceptible to error.
These problems can also lead to inappropriate interactions with dynamic displays. Stappers et al described a study in which participants in an immersive virtual environment were asked to throw virtual balls at a target.21 In one condition, the algorithm for the ball's trajectory followed a path described by Newton's laws. In other conditions, the ball followed inappropriate trajectories. The results showed that participants' accuracy was comparable across all trajectories. Moreover, none of the participants was surprised by the different trajectories. In fact, they readily adjusted their behavior to compensate for the dynamic discrepancies.
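The manipulation in that study can be sketched in simplified form. The code below is not Stappers et al's algorithm; it merely contrasts a Newtonian vertical trajectory with an arbitrary, physically incorrect alternative of the kind participants adapted to without surprise.

```python
# Contrast between a Newtonian ballistic trajectory and an arbitrary
# non-Newtonian one. The "distorted" form is an illustrative
# assumption, not the trajectory model used by Stappers et al.

G = 9.81  # gravitational acceleration, m/s^2

def newtonian_height(v0, t):
    """Height (m) of a ball thrown straight up at v0 m/s after t seconds."""
    return v0 * t - 0.5 * G * t ** 2

def distorted_height(v0, t, damping=0.5):
    """A physically 'wrong' trajectory that sags faster than gravity allows."""
    return newtonian_height(v0, t) / (1.0 + damping * t)
```

At any time after release the distorted ball sits noticeably lower than Newton's laws predict, yet observers in the study simply recalibrated their throws to such discrepancies.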
There is also an assumption that when we view visual scenes we continually sample all of the details therein. However, this also is not true. We perceive what we attend to. In a rather dramatic example, Simons and Chabris had observers attend to critical information in a video (ie, counting the number of successful ball tosses among team members).22 When focused on this task, many observers were oblivious to the appearance of an actor in a gorilla suit who strolled into the scene and waved to the camera.
Further, much of what we perceive is mentally construed. Our minds make assumptions about stimulus characteristics and fill in gaps where information is missing. For example, in audition there is a phenomenon known as phonemic restoration.23 Listeners rarely report hearing anything unusual about words or phrases in which unrelated sounds have been digitally substituted for phonemes within the words. In vision, we are susceptible to change blindness or the inability to detect differences in visual scenes when the flow of events is interrupted. Smallman and St. John have shown that when users monitor complex dynamic displays, their ability to detect changes after an interruption deteriorates to near chance levels; however, users are typically unaware of this deficit and are overconfident in their abilities.13 Interestingly, film editors often take advantage of this perceptual weakness when they need to “hide” imperfections between scenes.
The Multimodal Nature of Perception
It is also important to understand that we perceive the world through a minimum of 9 sensory systems.24 These sensory systems do not function in isolation although we may only attend to one sensory modality at a time. For example, when we walk our brains receive signals from the visual system about the changing patterns of light flowing across our retinas, the sound of our footsteps, the changing positions of our legs and feet, the feel of the ground through our shoes, the changes in acceleration from our vestibular systems, and the odors present in the environment. Moreover, recent neurophysiologic research has identified cortical areas that specialize in processing multimodal sensory information.25 Thus, any attempt to create a simulator that presents information to only a few sensory systems (eg, vision, audition) excludes information from the other sensory systems that would normally be present in the operational environment, as well as the cortical interactions among the represented and unrepresented sensory systems. Ultimately, this mismatch between the genuine and the simulated environment requires the learner to invest additional attentional effort to discount or compensate for the discrepancies.
The differences between the genuine and the simulated environments also contribute to a conspicuous lack of realism. Some of the most sophisticated high-fidelity simulators have been created for commercial aviation. For example, the CAE 7000 series (http://www.cae.com/www2004/Products_and_Services/PDF/CAE7000SeriesDatasheet.pdf) meets the highest fidelity requirements mandated by the Federal Aviation Administration (FAA) and can provide training on some of the most advanced aircraft produced by Airbus and Boeing. The Future Flight Central Tower at NASA Ames Research Center (http://www.simlabs.arc.nasa.gov/library_docs/annual_reports/AR04_final_8x11.pdf) is a fully immersive, 360-degree simulator of an ATC tower that can support up to 12 controllers. And yet, neither of these leading-edge simulators can reproduce the full complement of informational properties present in the operational environment, both within and across all sensory systems. Thus, even the best simulators suffer from a lack of realism.
DETERMINING APPROPRIATE LEVELS OF FIDELITY
Assume for a moment that it would be possible to overcome all current technological limitations and create a simulator system that could faithfully reproduce all the requisite sensory stimuli. Would that be desirable? Several researchers have argued that there may be little utility in achieving the ultimate level of fidelity, that is, a simulation indistinguishable from reality. Specifically, Carr and others claim that there is an important distinction between perceiving realism and perceiving reality.26,27 A simulation that seems realistic is still known by its user to be an approximation of reality; thus, “realistic” displays are not perceived to be real. In fact, Stoffregen et al argue that any user who perceived a simulation as real would be making a serious error of a different sort.26 Consequently, if the objective of creating a simulation is to reproduce the operational environment as accurately as possible without putting the user (or patient) in harm's way, then the simulation will never be perceived as real. Dieckmann et al also discussed this difference between reality and perceived realism and stated that as long as trainees understand how the simulation relates to the clinical experience, they will accept the “phenomenal” differences between the simulated and true clinical experiences.1
Given that some departure from reality is expected when training with a simulator, the key question becomes: what level of fidelity is needed? The answer comes down to the training objectives and cost. Simulators exist to support overall training goals, and the level of fidelity provided should be geared toward those goals.
Determining the appropriate level of fidelity needed in a training simulator is part of the system analysis and development process. Although a complete description is beyond the scope of this article (more details can be found in Farmer et al11 and Gagné et al5), the basic process begins with establishing the training objectives. The process often requires a team that includes designers, developers, subject matter experts, and the instructor. Frequently, educational psychologists, instructional designers, and human factors psychologists are also involved because of their expertise in learning styles, skill acquisition, and assessment. Once the training objectives have been identified, further analysis is performed to address specific tasks, the trainees, and training needs. The training analysis identifies the gap between the knowledge, skills, and abilities of candidates in the current pool and those needed to meet the training objectives. More important, the training analysis also establishes the conditions and metrics for verifying that the training objectives have been met. The needs analysis also addresses different types of skills (eg, intellectual, psychomotor, cognitive strategies) and the methods and media best suited for each.
The training requirements should be used to establish the appropriate levels of fidelity needed for different objectives. For example, if the objective is to train residents to perform laparoscopic surgery, it may not be necessary to faithfully reproduce the appearance of body cavities and tissue properties in a system designed to build psychomotor skills. On the other hand, if the goal is to train individuals to distinguish between healthy and diseased or necrotic tissue, the need for realistic-looking tissue may be paramount. Different levels of physical fidelity may be needed for each system because manual dexterity and the visual identification of diseases are qualitatively different skills.
Determining the optimal levels of fidelity for a set of training goals requires research and analysis. Questions regarding fidelity should be addressed in concert with the other training analyses used to establish the system requirements. Answers to some questions may be obtained during the analyses and others may require that design alternatives be tested with intended users.
More than likely, identifying the levels of fidelity needed to improve the match among reality, VR, and performance will require some basic research. The issues concerning perceptual distortions, distortions in virtual displays, and the multimodal nature of perception described above underscore the complexity that surrounds the representation of sensory information in displays designed to facilitate learning. Although a great deal of research on VR systems exists,28 there is much more that we need to know to create systems that maximize the transfer of skills from the training environment to the operational environment. At present, there is little information available to guide designers on how to compensate for the perceptual distortions inherent in virtual displays, especially where depth judgments are required. Moreover, many medical VR simulators incorporate visual and haptic displays. Although the merits of haptic displays for surgical procedures have been discussed,29 there is little research addressing the multimodal interaction between vision and touch,30,31 and few if any guidelines for designers.
As a place to start, Smallman and St. John have suggested 2 strategies that designers can follow to help improve user performance and avoid naive realism.13 First, displays and training systems should be created from a minimalist perspective, presenting only the essential material needed for a given level of performance. For instance, in their work with flight simulators, Lintern et al showed that a high degree of learning can be achieved with modest levels of fidelity as long as the critical perceptual features, patterns, and dimensions remain invariant across the training and operational environments.32,33 If a learner can identify these critical elements across tasks, other elements or aspects of fidelity are free to vary without any appreciable effect on learning. Second, Smallman and St. John argue that systems should be augmented with performance/training aids to compensate for perceptual deficiencies. For example, displays can include a “declutter” mode that deemphasizes nonessential information by rendering it less conspicuous. Further, they suggest users may need to be given direct, corrective feedback to overcome their misperceptions with virtual displays.
Last, decisions concerning appropriate levels of fidelity must also take cost into consideration. Higher fidelity simulators come with a higher price tag, and there is a trade-off between level of fidelity and cost.34 In economic terms, the most cost-effective way of representing stimulus elements is often all that is necessary to meet training goals. Focusing additional resources on the appearance and functionality of elements outside those identified for the training goals drives up the cost of the system with no appreciable impact on performance.
For instructors, choosing the appropriate level of fidelity also begins with the training objectives. Training goals must be established that clearly specify the trainee population (eg, students, residents), the type of skills to be acquired, and the training time available. Instructors may also need to enlist the help of subject matter experts, educational psychologists, instructional designers, and human factors psychologists to determine whether commercial systems actually meet their training objectives. For example, if the objective is to provide students with an introductory exposure to laparoscopic tool manipulation, a system with low to moderate levels of physical and functional fidelity may be all that is needed to meet the goal. On the other hand, if the objective is to train residents to perform laparoscopic intracorporeal knot tying, a task requiring a fair degree of skill, the representation of the instruments, needles, and sutures may require high levels of physical and functional fidelity to achieve the desired goals.
Instructors and those who make purchase decisions at educational institutions must also weigh the costs and benefits of different levels of fidelity. High-fidelity medical VR systems can easily cost 10 times as much as lower fidelity systems and not necessarily provide superior training benefits. For instance, Scerbo et al recently compared the effectiveness of 2 different simulator systems for training phlebotomy.35,36 One system was a relatively inexpensive plastic arm and the other was a much more expensive VR system that included haptic force feedback. Upon completion of training, the researchers found that students who practiced with the cheaper plastic arm performed significantly better than those who practiced with the VR system when required to draw blood from a genuine patient. Moreover, the researchers concluded that the less expensive plastic arm may have produced better performance because it actually had higher physical and functional fidelity.
When training objectives are defined for an entire class of students, cost may be of even greater significance. Many less expensive, lower fidelity systems may be purchased for the price of a single high-fidelity system, thereby making it possible to train greater numbers of students simultaneously. Again, instructors must determine whether the level of skill that can be acquired on lower fidelity systems is sufficient for the number of students that need to be trained in a given period of time.
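The arithmetic behind this trade-off is straightforward. The figures below are hypothetical; only the roughly 10-fold price ratio between high- and lower-fidelity VR systems reflects the discussion above.

```python
# Hypothetical budget figures (assumptions for illustration); only the
# ~10x price ratio between high- and lower-fidelity systems comes from
# the discussion above.

def simulators_affordable(budget, unit_cost):
    """Number of simulators, and hence concurrent trainees, a budget buys."""
    return budget // unit_cost

budget = 200_000              # hypothetical program budget (USD)
high_fidelity_cost = 150_000  # hypothetical high-fidelity VR system
low_fidelity_cost = 15_000    # hypothetical lower-fidelity trainer

print(simulators_affordable(budget, high_fidelity_cost))  # 1 station
print(simulators_affordable(budget, low_fidelity_cost))   # 13 stations
```

Whether 13 lower-fidelity stations beat 1 high-fidelity station depends, as argued above, on whether the skills acquired on the cheaper systems meet the training objectives.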
There may be times, however, when a needs analysis dictates higher levels of fidelity for more advanced trainees. In aviation, for example, the Federal Aviation Administration defines 4 levels of flight simulator fidelity (http://rgl.faa.gov/Regulatory_and_Guidance_Library%5CrgAdvisoryCircular.nsf/0/5B7322950DD10F6B862569BA006F60AA?OpenDocument). Pilots at the novice and intermediate levels satisfy training requirements with lower fidelity systems. More experienced pilots seeking advanced licensing and certifications must satisfy requirements in higher fidelity systems. The cost associated with operating the highest fidelity simulators is justified, to some degree, by limiting training to only those pilots with enough experience to take advantage of the features. In other words, valuable simulator time is reserved for those pilots who have mastered the fundamentals.
In his recent book, The Human Factor, Vicente argues that the evolution of technology is shaped in large part by 2 driving forces.37 On one side are the efforts of developers to demonstrate what can be done; on the other are the desires and needs of the consumer. Sometimes these 2 forces are synergistic; many times they are not. Simulation technology is no different, and its evolution in other high-risk domains such as aviation and the military has come to terms with these 2 forces through accountability. Educational and performance-based criteria are often included in the initial requirements specifications.38 Consequently, systems are built to meet the training needs at the outset, and development resources are not invested in components and features that have no impact on the training requirements.
Medical simulation training technology is still in its infancy. In some specialties such as anesthesiology and laparoscopic surgery, there are commercial systems available that have reached a fair degree of maturity. In other specialties, there may be few or no systems available. Across specialties, however, there is ample time for the medical simulation community to learn lessons and adopt best practices from other domains. Developers need to focus on what should be built instead of on what can be built. Likewise, instructors and educators must also examine their requirements and communicate with developers to ensure that systems meet their training objectives.
High-fidelity simulators that look impressive but fail to meet the training needs of instructors and students will be abandoned. This wastes precious training dollars for the consumer, wastes development resources on unnecessary components, and hurts the industry by slowing the adoption of this technology. The economic investment in simulation training technology for medicine is a fraction of what is poured into other high-risk domains; hence, the entire medical simulation community would be well served to focus those limited resources on systems that optimize training effectiveness.
REFERENCES
1. Dieckmann P, Gaba D, Rall M. Deepening the theoretical foundations of patient simulation as social practice. Simul Healthc.
2. Laucken U. Theoretische Psychologie (Theoretical Psychology). Oldenburg: Bibliotheks- und Informationssystem der Universität Oldenburg; 2003.
3. Rudolph JW, Simon R, Raemer DB. Which really matters? Questions on the path to high engagement in healthcare simulation. Simul Healthc.
4. Gaba D. The future vision of simulation in health care. Qual Saf Health Care. 2004;13(Suppl 1):i2–i10.
5. Gagné RM, Wager WW, Golas KC, et al. Principles of Instructional Design. 5th ed. Belmont, CA: Thomson Wadsworth; 2005.
6. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med.
7. Schneider W. Training high performance skills: fallacies and guidelines. Hum Factors.
8. Hayes RT, Singer MJ. Simulation Fidelity in Training System Design: Bridging the Gap Between Reality and Training. New York: Springer-Verlag; 1989.
9. Thorndike EL. Educational Psychology. New York: Lemcke & Buechner; 1903.
10. Gagné RM. Training devices and simulators: some research issues. Am Psychol.
11. Farmer E, van Rooij J, Riemersma J, et al. Handbook of Simulation-Based Training. Burlington, VT: Ashgate; 2003.
12. Issenberg SB, McGaghie WC, Petrusa ER, et al. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach.
13. Smallman HS, St. John M. Naïve realism: misplaced faith in realistic displays. Ergon Des.
14. Dember WN, Warm JS. Psychology of Perception. 2nd ed. New York: Holt, Rinehart, and Winston; 1979.
15. Gescheider G. Psychophysics: Method, Theory, and Application. 2nd ed. Hillsdale, NJ: Erlbaum; 1985.
16. Stevens SS. The psychophysics of sensory function. In: Rosenblith WA, ed. Sensory Communication. Cambridge, MA: MIT Press; 1961:1–33.
17. Norman JF, Crabtree CE, Clayton AM, et al. The perception of distances and spatial relationships in natural outdoor environments. Perception.
18. Ninio J. The Science of Illusions. Ithaca, NY: Cornell University Press; 2001. (Original French edition, La Science des Illusions, Editions Odile Jacob; 1998.)
19. Goldstein EB. Sensation and Perception. Pacific Grove, CA: Brooks/Cole; 1996.
20. Gregory RL. Eye and Brain: The Psychology of Seeing. 5th ed. Princeton, NJ: Princeton University Press; 1998.
21. Stappers PJ, Overbeeke K, Gaver W. Beyond the limits of real-time realism: moving from stimulation correspondence to information correspondence. In: Hettinger LJ, Haas MW, eds. Virtual and Adaptive Environments: Applications, Implications, and Human Performance Issues. Mahwah, NJ: Erlbaum; 2003:91–110.
22. Simons DJ, Chabris CF. Gorillas in our midst: sustained inattentional blindness for dynamic events. Perception.
23. Samuel AG. Phonemic restoration: insights from a new methodology. J Exp Psychol Gen.
24. Geldard FA. The Human Senses. 2nd ed. New York: Wiley; 1972.
25. Kayser C, Petkov CI, Augath M, et al. Integration of touch and sound in auditory cortex. Neuron.
26. Stoffregen TA, Bardy BG, Smart LJ, et al. On the nature and evaluation of fidelity in virtual environments. In: Hettinger LJ, Haas MW, eds. Virtual and Adaptive Environments: Applications, Implications, and Human Performance Issues. Mahwah, NJ: Erlbaum; 2003:111–128.
27. Carr K. Introduction. In: Carr K, England R, eds. Simulated and Virtual Realities: Elements of Perception. London: Taylor & Francis; 1995:1–9.
28. Stanney KM. Handbook of Virtual Environments: Design, Implementation, and Applications. Mahwah, NJ: Erlbaum; 2002.
29. Kim HK, Rattner DW, Srinivasan MA. Virtual-reality-based laparoscopic surgical training: the role of simulation fidelity in haptic feedback. Comput Aided Surg.
30. Gerovich O, Marayong P, Okamura AM. The effect of visual and haptic feedback on computer-assisted needle insertion. Comput Aided Surg.
31. Tholey G, Desai JP, Castellanos AE. Force feedback plays a significant role in minimally invasive surgery: results and analysis. Ann Surg.
32. Lintern G. An informational perspective on skill transfer in human-machine systems. Hum Factors.
33. Lintern G, Roscoe SN, Sivier J. Display principles, control dynamics and environmental factors in pilot performance and transfer of training. Hum Factors.
34. Kneebone RL. Crossing the line: simulation and boundary areas. Simul Healthc.
35. Scerbo MW, Bliss JP, Schmidt EA, et al. The efficacy of a medical virtual reality simulator for training phlebotomy. Hum Factors.
36. Scerbo MW, Schmidt EA, Bliss JP. Comparison of a virtual reality simulator and simulated limbs for phlebotomy training. J Infus Nurs.
37. Vicente K. The Human Factor. New York: Routledge; 2003.
38. Czaja SL, Nair SN. Human factors engineering and systems design. In: Salvendy G, ed. Handbook of Human Factors and Ergonomics. 3rd ed. New York: Wiley; 2006:32–49.