
Technical Reports

The Physical-Virtual Patient Simulator

A Physical Human Form With Virtual Appearance and Behavior

Daher, Salam PhD; Hochreiter, Jason PhD; Schubert, Ryan MS; Gonzalez, Laura PhD, APRN, CNE, CHSE-A; Cendan, Juan MD; Anderson, Mindi PhD, APRN, CPNP-PC, CNE, CHSE-A, ANEF, FAAN; Diaz, Desiree A. PhD, RN-BC, CNE, CHSE-A; Welch, Gregory F. PhD

Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare: April 2020 - Volume 15 - Issue 2 - p 115-121
doi: 10.1097/SIH.0000000000000409

INTRODUCTION

Healthcare providers use multiple types of patient simulators for safe training and teaching, such as mannequins (eg, SimMan3G,1 pediatric HAL,2 CAE simulators3), computer-based simulators (eg, Shadow Health,4 i-Human,5 Second Life6), and mixed reality simulators7–9 (eg, CAE Vimedix10).11,12 Each type has advantages and drawbacks. Standardized patients (SPs) are living humans trained to act like patients in a standardized manner13 (eg, for learning physical examination or history-taking skills14). However, SPs cannot exhibit certain symptoms at will, such as changes in physiology (eg, temperature) or appearance (eg, pupil reactions), which may be useful diagnostic cues.15 In addition, it can be difficult to schedule SPs.16 Recruiting children and infants as SPs is particularly challenging, in part because of child labor laws.17–19 More feasible options for simulating children include mannequin- or software-based simulations.

Current mannequins originated in the 1960s with Resusci Anne, the first and most widely known cardiopulmonary resuscitation mannequin.20,21 Soon after, mannequin-based simulators such as SimOne, CASE, and GAS were developed to increase patient safety and train anesthesiologists.22,23 Anesthesiology training does not require dynamic visuals (eg, facial expressions), as anesthetized patients are generally nonresponsive. Today, mannequins are widely used for education, training, and research.24 Different simulation scenarios have been used for training across multiple healthcare domains (eg, medical-surgical, trauma, critical care, and obstetrics). Many of these scenarios inherently involve dynamic appearance changes seen in real human patients, such as facial expressions, gestures, and abnormal visual findings (eg, facial droop from a stroke), which cannot be easily portrayed on a mannequin.25,26 Much like those first used by anesthesiologists, modern mannequins still have a static appearance. They typically cannot dynamically change skin color, perform gestures or facial expressions, or exhibit localized temperature, and they generally do not have automated touch responses.

Computer-based patient simulation can show dynamic visuals, typically with the patient rendered on a flat computer screen. Monitors, television (TV) screens, and projectors generally provide only visual and auditory cues, omitting tactile ones such as temperature, pulse, and the occupation of 3-dimensional (3D) volume. Although it is possible to display a temperature map for a patient, scenario-driven localized temperature cues are uncommon on a TV screen or computer monitor.

In simulation, it is often difficult to get the symptoms “just right”; they end up too easy or too hard to recognize compared with a real human. An inaccurate presentation of symptoms can lead to diagnostic and treatment errors in the simulation environment or reinforce incorrect behaviors.27 For example, if the simulator cannot naturally represent mottled skin, participants may not recognize it as a symptom on their own. To mitigate such simulator limitations, the facilitator may explicitly cue the participant,28,29 eg, by saying “mottled skin.” However, this also gives the participant a hint toward the diagnosis, perhaps making the correct diagnosis easier to reach than it would be with a real patient. If the facilitator does not provide the hint, the simulation lacks information that would normally be available, which might unintentionally make the correct diagnosis harder to reach.

Augmented reality (AR) combines the user's physical environment with dynamic visuals through head-mounted displays (HMDs)3,7,8 or projectors,9,25,30 for instance, by supplementing a physical mannequin with dynamic imagery. However, modern HMDs (eg, HoloLens,31 Meta32) have restricted fields of view and are heavy,33 and multiuser scenarios require synchronizing imagery across various devices. In addition, the augmented graphics from HMDs typically occlude the user's hands.34–36 Front-projection AR has the opposite problem, where the shadow of the user's hand can occlude the projection.37 Rear-projection AR can solve the issue of occlusion but requires enough space for the projectors behind any augmented objects.

Using rear-projection AR, we created the Physical-Virtual Patient Simulator (PVPS), which represents the patient on a physical human-shaped shell. It can display subtle multisensory symptoms (eg, localized temperature and pulse), a variety of scenarios (eg, neurological assessment, sepsis, burns), and a diversity of patients in terms of shape (eg, child, adult, male, female, obese, amputee) and appearance (eg, varied skin, eye, hair color). To allow for maximum exposure, it supports quick changes between clinical cases; our prototype takes less than a second to change the imagery, approximately 10 seconds to change the shell, and up to approximately 2 minutes to reach the desired temperature. Our simulator is also virtual: dynamic imagery is projected onto the human-shaped shell. Touching the shell can trigger responses, such as capillary refill, revealing the patient's teeth and eyes, speech, and facial expressions, allowing healthcare providers to directly experience recognizing certain subtle signs and symptoms firsthand.

Simulating a patient in tangible 3D space can be advantageous over flat representations, as this is far more representative of actual humans. The increased physicality of the patient and ability to more realistically portray symptoms can prompt more realistic provider behavior, potentially leading to more accurate and informed diagnoses.25,38–40 For example, bruises might be located on the patient's sides, which may be difficult to portray on flat surfaces without requiring users to rotate the patient in unrealistic ways. Similarly, 3D simulators can facilitate more natural estimations of the positions and sizes of injuries (eg, burns). Patient eye gaze behavior, an important subtle nonverbal signal, is easier to interpret on 3D simulators than on flat ones because of the Mona Lisa effect,41 allowing for more realistic scenarios. Furthermore, 3D simulators require learners and users to observe the patient from many angles, as they would when assessing a human. Our prototype simulates a child patient, which we evaluated in a preliminary study.

METHODS

Implementation of the PVPS

Hardware

The PVPS (Figs. 1, 2) consists of a 51 × 76 × 76-cm metal frame that houses the electronic equipment and an interchangeable translucent plastic “shell” on top. The shell's shape can vary to represent a variety of patients; our prototype is shaped like a small child and spans from the head to the knees. The PVPS is transportable and can accommodate in situ training and experiments in different healthcare environments. The imagery is rear projected onto the shell using 2 AAXA P300 Pico projectors42 (resolution 1920 × 1080 pixels) that provide imagery for the patient's head and body, respectively. In the 3D game engine Unity,43 we created a virtual representation of the physical setup, with 2 virtual cameras approximately matching the positions, orientations, and fields of view of the physical projectors pointing toward the 3D model of the patient. The imagery rendered by these virtual cameras is sent directly to the projectors for display on the shell. To form the shell, we created a 3D computer-aided design (CAD) model of a child patient with appropriate proportions,44–47 had it milled to create a “positive” mold, and had the shell vacuum-formed over the mold from 1/16-inch Optix 2447 plastic sheet material,48 which allows projected imagery to form clearly on the surface. The CAD model, mold, and shell were designed with a relatively smooth surface so that the virtual patient could exhibit some degree of animated movement in areas such as the nose and fingers without creating the disturbing visual distortions that would otherwise occur if the imagery moved over areas of sharp shape change. We simulated the projection of the virtual character onto the CAD model before manufacturing to ensure that the shape was suitable for projection.
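As an illustration of the relationship between a physical projector and its matching virtual camera, the following minimal Python sketch computes the vertical field of view and aspect ratio of an ideal pinhole camera from the projected image size and throw distance. The dimensions are hypothetical, and this is not the authors' calibration procedure; as noted above, the Unity cameras were matched only approximately to the physical projectors.

```python
import math

def projector_fov_deg(image_height_m: float, image_width_m: float, throw_distance_m: float):
    """Vertical field of view (degrees) and aspect ratio of an ideal pinhole
    camera that reproduces a projector filling an image of the given size at
    the given throw distance (lens to surface)."""
    v_fov = 2.0 * math.degrees(math.atan((image_height_m / 2.0) / throw_distance_m))
    aspect = image_width_m / image_height_m
    return v_fov, aspect

# Hypothetical numbers: a 0.40 x 0.71 m patch of the shell lit from 0.5 m away.
v_fov, aspect = projector_fov_deg(0.40, 0.71, 0.5)
print(f"vertical FOV ~ {v_fov:.1f} deg, aspect ~ {aspect:.2f}")
```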

FIGURE 1: Photo of the PVPS highlighting the hardware elements (child-shaped shell, heaters, speakers, projectors, and haptic acoustic devices). The imagery on the shell shows a patient with sepsis.
FIGURE 2: Photo of healthcare providers interacting with the PVPS.

We added speakers near the head for patient speech and breathing sounds. All sounds were prerecorded, with the inhale/exhale sounds created to match the respiratory rate and animations of each scenario. Five Honeywell HCE100B Heat Bud Ceramic Heaters49 installed below the head, sides, and bottom of the shell can independently provide heat (low/high intensity) to specific body parts. To simulate pulse, audio signals were sent to 2 acoustic haptic Techtile Toolkit50 devices located under the patient's arms (Fig. 1).

Scenario-Driven Content

We developed content for a normal healthy child to serve as a baseline (see Video, Supplemental Digital Content 1, which shows interaction with a healthy patient, http://links.lww.com/SIH/A469). We adapted a previously validated checklist of a pediatric patient demonstrating early signs/symptoms of sepsis51 to the PVPS. We developed software to illustrate that the simulator can convey multiple subtle symptoms, such as skin mottling, cyanosis, ptosis, delayed capillary refill, tachypnea, fever, hypotension, and tachycardia. In addition, we developed content for other child patient scenarios with associated signs/symptoms that could be represented using the same PVPS. For example, we developed a model of a child subjected to physical abuse, showing symptoms such as ecchymosis, cigarette burns, and bites; and a burn patient showing burns, blisters, and swelling. The behavior and speech of the patient can be fully automated (ie, via artificial intelligence), directly mapped from a real person (eg, someone speaking “live” into a microphone), or controlled by a hybrid “Wizard of Oz”52 approach in which an operator triggers prerecorded responses and behaviors. Because our medical scenarios are relatively specific, we used prerecorded responses to ensure consistency. A total of 446 audio clips for patient speech were recorded in 3 different tones: 148 for a healthy patient, 149 for a patient in pain, and 149 for a low-energy (lethargic) patient. Because it is challenging to have a child act and record in a studio setting, we initially recorded an adult. Using MorphVOX software,53 we created and applied a set of modifications to these recordings to simulate a child's voice. The resulting audio was cleaned and imported into Unity along with the 3D character detailed hereinafter. The Rogo Digital LipSync54 plugin was used to automate the visual lip motion of the 3D character to match the audio clips, and we created an accompanying graphical user interface to trigger the clips and other animations.
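The sketch below illustrates, in Python, the “Wizard of Oz” triggering idea described above: prerecorded clips are indexed by prompt and vocal tone so that an operator's button press always plays the same consistent response. The prompt names and file paths are hypothetical; the actual interface was built in Unity.

```python
# Hypothetical response table for Wizard-of-Oz control (prompt names and file
# paths are illustrative, not the authors' asset layout). Each prompt maps to
# one prerecorded clip per tone, so responses stay consistent across sessions.
RESPONSES = {
    "where_does_it_hurt": {
        "healthy":   "audio/healthy/where_does_it_hurt.wav",
        "pain":      "audio/pain/where_does_it_hurt.wav",
        "lethargic": "audio/lethargic/where_does_it_hurt.wav",
    },
    "how_did_you_get_hurt": {
        "healthy":   "audio/healthy/how_did_you_get_hurt.wav",
        "pain":      "audio/pain/how_did_you_get_hurt.wav",
        "lethargic": "audio/lethargic/how_did_you_get_hurt.wav",
    },
}

def trigger_response(prompt: str, tone: str) -> str:
    """Return the clip that the operator's button press should play."""
    return RESPONSES[prompt][tone]

print(trigger_response("where_does_it_hurt", "lethargic"))
```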

Software

After researching children's proportions,44–47 we created a 3D child character using the 3D modeling software package Maya55 (Fig. 3). The full-body 3D model had a low polygon count (4825 vertices and 4923 polygons) to ensure fast real-time interaction. We created textures in Adobe Photoshop56 and used a UV mapping technique to map the 2D textures, which use (U, V) coordinates in a 2D plane, onto the 3D mesh, which uses (X, Y, Z) coordinates in 3D space. Various body parts of the 3D character were rigged for animation using joints and blendshapes. The eyeballs, jaw, neck, torso, breathing, arms, hands, fingers, and legs were rigged using joints to smoothly control the mesh vertex positions during animations (Fig. 4). Facial features other than the eyeballs and jaw were controlled using blendshapes. Twenty-two blendshapes were created for facial expressions and phonemes (eg, blink, open/close mouth, open/close lips, open/close eyelids, smile, frown, nose wrinkle, disgust, fear, sadness, pupil changes) that can be combined to create more complex variations with different intensities. The blendshapes for phonemes allow the character to appropriately move his/her lips regardless of the scenario.
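The following short numpy sketch shows how blendshape weights combine with a base mesh, V = V0 + sum_i w_i * delta_i, where each shape stores per-vertex offsets that are scaled and added to the neutral geometry. The vertex data and shape names are toy values standing in for the 22 shapes described above, not the actual character data.

```python
import numpy as np

# Toy base mesh: 3 vertices in (X, Y, Z). A real character has thousands.
base = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])

# Each blendshape stores per-vertex offsets from the base mesh.
blendshapes = {
    "smile": np.array([[0.0, 0.1, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.0]]),
    "blink": np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, -0.2, 0.0]]),
}

def deform(weights: dict) -> np.ndarray:
    """Linearly combine weighted blendshape offsets with the base mesh."""
    out = base.copy()
    for name, w in weights.items():   # weights typically lie in [0, 1]
        out += w * blendshapes[name]
    return out

print(deform({"smile": 0.5, "blink": 1.0}))
```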

FIGURE 3: Three-dimensional model of a healthy patient without a shirt.
FIGURE 4: Low-polygon 3D character with rigged joints shown in green.

In Unity, the virtual model includes geometry that should not be visible on the final physical surface (such as the back half of the patient) and complex internal geometry (eg, eyeballs, a modeled mouth) of which only certain parts should be contextually visible. The imagery projected onto the physical surface should correspond only to the front-most surface of the virtual model, that is, the surface an external observer would see. Because of this, we cannot simply render the model from behind, even using a clipping plane at a fixed depth. Instead, we use a rendering process that, at each pixel, keeps the geometric surface farthest from the projector-matched camera, regardless of which way the surface is facing. This is essentially the reverse of the depth test performed during typical rendering.
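A minimal Python sketch of this reversed depth test, using a toy software rasterizer rather than the Unity shaders actually used: at each pixel, the fragment farthest from the projector-matched camera wins, and no back-face culling is applied.

```python
import numpy as np

W, H = 4, 3
# Depth buffer initialized to -inf: a reversed depth test keeps the fragment
# FARTHEST from the camera at each pixel, regardless of surface facing.
depth = np.full((H, W), -np.inf)
color = np.zeros((H, W, 3))

def submit_fragment(x, y, z, rgb):
    """Typical rendering keeps z < depth[y, x]; here we keep z > depth[y, x]."""
    if z > depth[y, x]:
        depth[y, x] = z
        color[y, x] = rgb

# Two toy fragments at the same pixel: nearer internal geometry of the model
# and the farther outer surface that should land on the projection shell.
submit_fragment(1, 1, z=0.4, rgb=(1.0, 0.0, 0.0))  # nearer internal surface
submit_fragment(1, 1, z=0.9, rgb=(0.0, 0.0, 1.0))  # farther outer surface
print(depth[1, 1], color[1, 1])  # 0.9 and blue: the farthest surface is kept
```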

The control screen presented to the simulation operator provides an interface to trigger speech, trigger dynamic visuals (eg, open/move eyes, change pupil size, open mouth, move head, facial expressions, move arms/fingers, capillary refill, remove patient's shirt, etc), and toggle vitals (temperature strip, blood pressure, O2 saturation) (Fig. 5).

FIGURE 5: Graphical user interface for a healthy patient. The controller uses this interface to control the patient's responses. The controller can trigger verbal and nonverbal responses, change the facial expressions, move the body (hands, fingers, neck, eyes), trigger capillary refill for each finger, and show and hide props. The responses and the sound characteristics can be different depending on the scenario.

Testing

Technical Testing

First, we tested the hardware components of the PVPS, comparing imagery projection quality on different plastic materials with various transparencies and thicknesses. With a surface thermometer, we spot-checked the temperatures at various locations on the shell under different configurations of the heaters. To simulate a tactile pulse, we modified an existing pulse sound sample to represent the intended rates (eg, 80 bpm for the child abuse scenario, 100 bpm for the sepsis scenario) and sent the audio files to the haptic-acoustic devices. Three nurses measured and verified the pulse on the child-shaped shell. We iteratively developed the simulation software using feedback from professors in nursing and medicine. We recorded and analyzed the actions of our medical team members performing a mock simulation on an ordinary mannequin to assist in further development of audio responses, graphical reactions, and other simulator capabilities. After we integrated the software and hardware, nursing professors conducted simulation sessions using the PVPS and provided formative feedback.
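As a hedged illustration of driving the haptic-acoustic devices at a prescribed heart rate, the Python sketch below synthesizes a simple repeating low-frequency “thump” at a target bpm and writes it to a WAV file. The actual system used a modified recording of a pulse sound rather than a synthetic signal, and the waveform parameters here (60 Hz carrier, 80 ms decay) are assumptions chosen only for illustration.

```python
import math
import struct
import wave

def write_pulse_wav(path: str, bpm: float, seconds: float = 10.0,
                    sample_rate: int = 44100) -> None:
    """Write a synthetic pulse-like signal: a short, decaying low-frequency
    thump repeated at the requested heart rate."""
    period = 60.0 / bpm                       # seconds between beats
    frames = bytearray()
    for i in range(int(seconds * sample_rate)):
        t = i / sample_rate
        t_in_beat = t % period
        # 60 Hz carrier with a fast exponential decay after each beat onset.
        amplitude = math.exp(-t_in_beat / 0.08) * math.sin(2 * math.pi * 60.0 * t_in_beat)
        frames += struct.pack("<h", int(amplitude * 0.8 * 32767))
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)                   # 16-bit samples
        wav.setframerate(sample_rate)
        wav.writeframes(bytes(frames))

write_pulse_wav("pulse_80bpm.wav", bpm=80)    # child abuse scenario rate
write_pulse_wav("pulse_100bpm.wav", bpm=100)  # sepsis scenario rate
```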

Human-Subject Experiment

After obtaining institutional review board approval, we conducted a formative human-subject study with 22 nurse practitioner (NP) students in an advanced health assessment class, who interacted with simulated child patients using the PVPS. Twelve participants interacted with a sepsis patient, and 10 participants interacted with a child showing signs of abuse.

Protocol

Students were first familiarized with the PVPS via a video providing an overview of the simulator's capabilities with a healthy child patient. Next, pairs of students collaboratively assessed a child patient using the PVPS. By design, a slightly inconsistent story was provided for the child abuse scenario between the patient report (patient fell off the sofa) and the patient's responses to probing questions about his condition (patient fell off his bike) to see whether participants would notice the discrepancy. The participants were observed interacting with the patients from the laboratory's control room using a video system that had a (roughly) 2-second delay. The delay came from the video recording system used in the university simulation laboratory, which streams the video feed to the controller in a remote room; the simulator itself introduced no delay. We did not want the controller to be in the same room as the participants, so that participants would not be affected by the presence of someone other than the simulated patient. The patients' animated behaviors and audio clips were initiated proactively in some cases and, at other times, in response to participant behaviors or questions.

Instrument

After the simulation, participants were asked to provide open responses to “How easy was it to interact with the patient?”; “Did the patient seem real? Why or why not?”; and “List the findings you identified during your assessment that led you to your diagnosis.” We were interested to see which cues they noticed on their own without intervention from the observer or researchers. We categorized and aggregated these qualitative answers based on commonly used words and phrases, summarized hereinafter.

RESULTS

After their assessments, 19 of the 22 participants described their interactions with the patient as easy, whereas 3 did not. According to a one-way χ2 goodness-of-fit test, these results differ significantly from an equal split of easy versus not easy responses (χ2 = 11.636, df = 1, P = 0.0006), indicating that most participants found the interaction easy. All 3 who considered the interaction to be difficult assessed the sepsis patient; 1 specifically found it challenging to hear the patient, whose speech was intentionally recorded to sound lethargic and subdued. When asked whether the patient seemed real, participants provided mixed responses. Of the 12 participants who interacted with the sepsis child, 2 described it as “the most real I have ever seen in simulation” and a “great use of technology.”

Four particularly liked the patient's speech and answers to questions, 1 indicated that the patient “had feelings,” and another mentioned the patient “felt warm [and had] mottled skin.” One participant remarked on the delay in patient responses, which was due to the camera/audio of the video communication system. Of the 10 participants who interacted with the child in the abuse case, 7 indicated that the patient's responses, behavior, and reactions seemed real, whereas 2 felt the lack of a lower body decreased realism.
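For reference, the reported statistic for the easy versus not easy split can be reproduced with a χ2 goodness-of-fit test of the observed 19 versus 3 counts against an equal expected split (11/11). The short Python check below uses scipy (our choice of tool, not mentioned by the authors) and yields χ2 ≈ 11.64 with P ≈ 0.0006.

```python
from scipy.stats import chisquare

# 19 "easy" vs 3 "not easy" responses, tested against an even 11/11 split.
result = chisquare([19, 3])
print(result.statistic, result.pvalue)  # ~11.636, ~0.0006
```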

Participants were asked to list the cues they noticed that led to their diagnosis. The 12 participants who assessed the sepsis patient noticed multiple cues, including lethargy or weakness in voice/attitude (8 participants), temperature (5 participants), mottled skin (4 participants), cough (3 participants), oxygen saturation (2 participants), blue/red lips (2 participants), facial expressions (2 participants), audible wheezes (1 participant), and respiratory rate (1 participant). Similarly, the 10 participants in the child abuse scenario noticed physical trauma/wounds (6 participants); cigarette burns (6 participants); bruises/contusions (6 participants); fearful attitude (6 participants); abrasions, skin lacerations, and scrapes (5 participants); inconsistent story (3 participants); facial expressions (2 participants); distress/anxiety (1 participant); swelling (1 participant); and bite marks (1 participant).

DISCUSSION

Limitations and Future Work

Given the wording of some of the survey questions, some participants may have been more inclined to report a positive reaction to the interaction or perceived realism. However, given the subjective descriptive examples many participants volunteered to support their overall impressions, this effect seemed to be minimal, if present.

The current PVPS is only able to present one side of the patient at a time (front or back), which could be addressed in the future with more advanced projectors and smaller heaters. Although the tight registration of virtual imagery to the physical shell is one of the novel and advantageous features of the PVPS, the imagery is inherently “bound” to the shell: animations that move “away” from the shell could appear distorted. In practice, we use only relatively small movements, which appear natural, but we are working on minimizing or mitigating such distortions to increase the range of motion for animations. The projected imagery could also potentially be supplemented with virtual imagery from AR HMDs. It may be possible to introduce robotic elements to allow for physical movements of various parts of the shell, such as the patient's arms and head. As indicated by participant responses, the lack of legs decreased the perceived realism of the simulator. Future work includes extending the shell to a full body and creating variations (eg, male, female, amputee, obese, etc).

Previously, we developed and demonstrated a method for automated touch detection and response on nonparametric shells using infrared light, cameras, and projectors.57,58 Although the initial development was carried out on a head-shaped surface using a single projector, we are working on supporting touch sensing over larger surfaces, such as the PVPS or other full-body simulators, using multiple cameras and projectors. This would allow for automated touch interactions that can be directly integrated with the simulation scenario. In addition, we are interested in evaluating the relative importance of different elements of the PVPS by selectively changing the presence or fidelity of certain cues. This includes comparing imagery on a 3D, physical human-shaped shell to a flat-screen display, varying the locations of dynamic visuals relative to the rest of the cues, and evaluating the importance of heat. Finally, we plan to explore the effect of more human-like synthetic skin on the simulation.
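As a rough conceptual sketch only (not the published method of references 57 and 58), the Python snippet below illustrates the general idea behind camera-based touch sensing: pixels in an infrared image that deviate strongly from a touch-free reference are treated as candidate touch pixels, and their centroid gives an approximate touch location.

```python
import numpy as np

def detect_touch(ir_frame: np.ndarray, reference: np.ndarray,
                 threshold: float = 30.0):
    """Return an approximate (x, y) touch location, or None if no touch.
    This is only the general idea; the published approach is more involved."""
    diff = np.abs(ir_frame.astype(float) - reference.astype(float))
    ys, xs = np.nonzero(diff > threshold)     # candidate touch pixels
    if len(xs) == 0:
        return None
    return int(xs.mean()), int(ys.mean())     # centroid of candidates

# Toy 8 x 8 frames: a bright blob appears where a finger reflects IR light.
ref = np.zeros((8, 8), dtype=np.uint8)
frame = ref.copy()
frame[3:5, 4:6] = 200
print(detect_touch(frame, ref))  # -> (4, 3), near the blob's center
```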

Conclusion

We described the development of the PVPS, a simulator that allows healthcare providers to interact with a patient that has physical form and can dynamically change appearance. The PVPS can represent combinations of subtle signs/symptoms corresponding to a variety of conditions. The natural interface allows direct interaction with the patient, minimizing facilitator intervention, which can interfere with the participant's experience, assessment, and learning. We tested the PVPS via a formative human-subject study where graduate NP students engaged in 2 different healthcare scenarios, interacting with the PVPS and reporting the symptoms and cues they noticed. We are encouraged that participants recognized many multisensory cues (eg, dynamic visuals, localized temperature, pulse, voice, patient attitude), without the need for facilitator intervention, and that they found the PVPS cues realistic.

REFERENCES

1. Laerdal Web site. Available at: https://www.laerdal.com/us/. Accessed September 19, 2018.
2. Gaumard Web site. Available at: https://www.gaumard.com/aboutsims. Accessed September 19, 2018.
3. CAE Healthcare Web site. Available at: https://caehealthcare.com. Accessed September 19, 2018.
4. Shadow Health Web site. Available at: https://shadowhealth.com/. Accessed September 19, 2018.
5. iHuman Web site. Available at: http://www.i-human.com/. Accessed September 19, 2018.
6. Second Life Web site. Available at: https://secondlife.com/. Accessed September 19, 2018.
7. Rolland J, Davis L, Hamza-Lup F, et al. Development of a training tool for endotracheal intubation: distributed augmented reality. Stud Health Technol Inform 2003;94:288–294.
8. Davis L, Hamza-Lup FG, Daly J, et al. Application of augmented reality to visualizing anatomical airways. In: Helmet- and Head-Mounted Displays VII. Orlando, FL: International Society for Optics and Photonics; 2002;4711:400–406.
9. Samosky JT, Nelson DA, Wang B, et al. BodyExplorerAR: enhancing a mannequin medical simulator with sensing and projective augmented reality for exploring dynamic anatomy and physiology. In: Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction. Kingston, Ontario, Canada: ACM; 2012:263–270.
10. CAE Vimedix Web site. Available at: https://caehealthcare.com/ultrasound-simulation/vimedix/. Accessed September 19, 2018.
11. Aggarwal R, Mytton OT, Derbrew M, et al. Training and simulation for patient safety. Qual Saf Health Care 2010;19(Suppl 2):i34–i43.
12. Daher S. Optical see-through vs. spatial augmented reality simulators for medical applications. In: IEEE Virtual Reality (VR). 2017:417–418. doi: 10.1109/VR.2017.7892354.
13. Barrows HS. An overview of the uses of standardized patients for teaching and evaluating clinical skills. AAMC. Acad Med 1993;68:443–451.
14. Ramsey PG, Curtis JR, Paauw DS, Carline JD, Wenrich MD. History-taking and preventive medicine skills among primary care physicians: an assessment using standardized patients. Am J Med 1998;104(2):152–158.
15. Levine AI, DeMaria S Jr., Schwartz AD, Sim AJ. The Comprehensive Textbook of Healthcare Simulation. Springer Science & Business Media; 2013.
16. Adamo G. Simulated and standardized patients in OSCEs: achievements and challenges 1992-2003. Med Teach 2003;25(3):262–270.
17. Anderson M, Holmes TL, LeFlore JL, Nelson KA, Jenkins T. Standardized patients in educating student nurses: one school's experience. Clin Simul Nurs 2010;6(2):e61–e66.
18. Tsai TC. Using children as standardised patients for assessing clinical competence in paediatrics. Arch Dis Child 2004;89(12):1117–1120.
19. Glantz LH. Conducting research with children: legal and ethical issues. J Am Acad Child Adolesc Psychiatry 1996;35(10):1283–1291.
20. Owen H. Early use of simulation in medical education. Simul Healthc 2012;7(2):102–116.
21. Laerdal History. Laerdal Web site. Available at: http://www.laerdal.com/us/doc/367/History. Accessed July 2, 2018.
22. Gaba DM, DeAnda A. A comprehensive anesthesia simulation environment: re-creating the operating room for research and training. Anesthesiology 1988;69(3):387–394.
23. Cooper JB, Taqueti VR. A brief history of the development of mannequin simulators for clinical education and training. Qual Saf Health Care 2004;13(Suppl 1):i11–i18.
24. Okuda Y, Bryson EO, DeMaria S Jr., et al. The utility of simulation in medical education: what is the evidence? Mt Sinai J Med 2009;76(4):330–343.
25. Daher S, Hochreiter J, Norouzi N, Gonzalez L, Bruder G, Welch G. Physical-virtual agents for healthcare simulation. In: Proceedings of the 18th International Conference on Intelligent Virtual Agents. Sydney, Australia:ACM; 2018:99–106.
26. Mackenzie CF, Harper BD, Xiao Y. Simulator limitations and their effects on decision-making. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Los Angeles, CA: SAGE Publications; 1996;40(14):747–751.
27. Champion HR, Gallagher AG. Surgical simulation—a ‘good idea whose time has come’. Br J Surg 2003;90(7):767–768.
28. Anderson M, Ackermann AD. Maximizing realism. n.d. Available at: http://sirc.nln.org/mod/page/view.php?id=63. Accessed September 2018.
29. Paige JB, Morin KH. Simulation fidelity and cueing: a systematic review of the literature. Clin Simul Nurs 2013;9(11):e481–e489.
30. Rivera-Gutierrez D, Welch G, Lincoln P, et al. Shader lamps virtual patients: the physical manifestation of virtual patients. Stud Health Technol Inform 2012;173:372–378.
31. Microsoft Hololens Web site. Available at: https://www.microsoft.com/en-us/hololens. Accessed October 4, 2018.
32. Meta Augmented Reality Web site. Available at: https://www.metavision.com/. Accessed October 4, 2018.
33. Hochreiter J, Daher S, Bruder G, Welch G. Cognitive and touch performance effects of mismatched 3D physical and visual perceptions. In: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces. 2018:1–386.
34. Van Krevelen DWF, Poelman R. A survey of augmented reality technologies, applications and limitations. Int J Virtual Reality 2010;9(2):1–20.
35. Schmalstieg D, Hollerer T. Augmented Reality: Principles and Practice. Boston MA, USA: Addison-Wesley Professional; 2016.
36. Schmalstieg D, Fuhrmann A, Hesina G, et al. The studierstube augmented reality project. Presence-Teleop Virt 2002;11(1):33–54.
37. Fuhrmann A, Hesina G, Faure F, Gervautz M. Occlusion in collaborative augmented environments. Comput Graph 1999;23(6):809–819.
38. Chuah JH, Robb A, White C, et al. Exploring agent physicality and social presence for medical team training. Presence Teleop Virt 2013;22(2):141–170.
39. Pan Y, Steed A. A comparison of avatar-, video-, and robot-mediated interaction on users' trust in expertise. Front. Robot. AI 2016;3:12.
40. Kotranza A, Lok B. Virtual human + tangible interface = mixed reality human: an initial exploration with a virtual breast exam patient. In: 2008 IEEE Virtual Reality Conference. Reno, NV: IEEE; 2008:99–106.
41. Al Moubayed S, Edlund J, Beskow J. Taming Mona Lisa: communicating gaze faithfully in 2D and 3D facial projections. ACM Transact Interact Int Syst 2012;1(2):25.
42. AAXA P300 Pico Projector - DLP Hand-held Mini Projector - LED Pocket Projector. AAXA Web site. Available at: http://www.aaxatech.com/products/p300_pico_projector.htm. Accessed July 2, 2018.
43. Unity. Unity3D Web site. Available at: https://unity3d.com. Accessed July 2, 2018.
44. Fryar CD, Gu Q, Ogden CL. Anthropometric reference data for children and adults: United States, 2007-2010. Vital Health Stat 11 2012;252:1–48.
45. Bear-Lehman J, Kafko M, Mah L, Mosquera L, Reilly BB. An exploratory look at hand strength and hand size among preschoolers. J Hand Ther 2002;15(4):340–346.
46. Hohendorff B, Weidermann C, Burkhart KJ, Rommens PM, Prommersberger KJ, Konerding MA. Lengths, girths, and diameters of children's fingers from 3 to 10 years of age. Ann Anat 2010;192(3):156–161.
47. Scott Moses MD. Height Measurement in Children. Family Practice Notebook Web site. Available at: https://fpnotebook.com/Endo/Exam/HghtMsrmntInChldrn.htm. Accessed July 2, 2018.
48. Plaskolite. Plaskolite Web site. Available at: http://www.plaskolite.com/Search?search=optix%202447. Accessed July 2, 2018.
49. Honeywell HCE100B Heat Bud Mini Home Heater | Honeywell Store. Honeywell Web site. Available at: https://www.honeywellstore.com/store/products/heat-bud-ceramic-portable-mini-heater-hce100-series.htm. Accessed July 2, 2018.
50. Techtile YCAM InterLab Tool Kit. Techtile Web site. Available at: http://www.techtile.org/en/techtiletoolkit/. Accessed July 2, 2018.
51. Diaz DA, Anderson M, Chu C, Kling C, Orth M. A pediatric sepsis early recognition simulation and checklist: final data and lessons learned. Clin Pediatr 2017;2(Suppl 5):39.
52. Dahlbäck N, Jönsson A, Ahrenberg L. Wizard of Oz studies: why and how. In: Proceedings of the 1st International Conference on Intelligent User Interfaces. Orlando, FL: ACM; 1993:193–200.
53. Morph VOX Pro Voice Changing Software. Screaming Bee Web site. Available at: https://screamingbee.com/Product/MorphVOX.aspx. Accessed July 2, 2018.
54. Rogo Digital – LipSync. Rogo Digital Web site. Available at: https://lipsync.rogodigital.com/. Accessed July 2, 2018.
55. Maya Computer Animation & Modeling Software. Autodesk Web site. Available at: https://www.autodesk.com/products/maya/overview. Accessed July 2, 2018.
56. Adobe Photoshop CC. Adobe Web site. Available at: https://www.adobe.com/products/photoshop.html. Accessed July 2, 2018.
57. Hochreiter J, Daher S, Nagendran A, Gonzalez L, Welch G. Touch sensing on non-parametric rear-projection surfaces: a physical-virtual head for hands-on healthcare training. IEEE Virtual Reality 2015;69–74.
58. Hochreiter J, Daher S, Nagendran A, Gonzalez L, Welch G. Optical touch sensing on nonparametric rear-projection surfaces for interactive physical-virtual experiences. Presence Teleop Virt 2016;25(1):33–46.
Keywords:

Physical-virtual patient simulator; pediatric patient simulation; development and evaluation; pilot study; co-location of multisensory cues; physical human form; simulated patient behaviors; sepsis; child abuse

Supplemental Digital Content

© 2020 Society for Simulation in Healthcare