Virtual reality (VR) simulation is an established option for temporal bone surgical training. Most VR simulators are based on computed tomography imaging, whereas the Visible Ear Simulator (VES) is based on high-fidelity cryosections of a single temporal bone specimen. The recently published OpenEar datasets combine cone-beam computed tomography (CBCT) and micro-slicing to achieve similar model quality. This study explores the integration of the OpenEar datasets into the VES to enable case variation in simulation, with implications for patient-specific modeling based on CBCT.
The OpenEar dataset consists of segmented, coregistered, multimodal imaging sets of human temporal bones. We derived drillable bone segments from the dataset, as well as triangulated surface models of critical structures such as the facial nerve and dura. Realistic visualization was achieved using coloring from micro-slicing, custom tinting, and texture maps. The resulting models were validated by clinical experts.
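The derivation of a drillable bone segment and the detection of imaging-related dehiscences can be illustrated in outline. The sketch below is a simplified assumption of how such a step might look, not the actual VES pipeline: it thresholds a CBCT-like Hounsfield-unit volume into a binary bone mask, then flags voxels of a critical-structure mask that have no bone in their 6-neighbourhood (an apparent dehiscence). The threshold value and all function names are illustrative.

```python
import numpy as np

# Illustrative bone threshold in Hounsfield units (assumption, not from the paper).
BONE_HU = 300

def bone_segment(volume_hu):
    """Binary drillable-bone mask from a Hounsfield-unit volume."""
    return volume_hu >= BONE_HU

def dehiscence_voxels(bone, structure):
    """Critical-structure voxels whose 6-neighbourhood contains no bone.

    `bone` and `structure` are boolean 3-D arrays of the same shape.
    """
    # Zero-pad so voxels at the volume border have 'air' outside.
    padded = np.pad(bone, 1)
    neighbour_count = np.zeros(bone.shape, dtype=int)
    for axis in range(3):
        for shift in (-1, 1):
            # Shift the padded mask and read back the interior:
            # each voxel accumulates the value of one face neighbour.
            neighbour_count += np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
    return structure & (neighbour_count == 0)
```

In a real pipeline the binary mask would then be converted to a triangulated surface (e.g. by marching cubes) and the flagged voxels compared against the micro-slicing ground truth before any artifact removal.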
Six of the eight OpenEar datasets could be integrated into the VES, complete with instructional guides for various temporal bone surgical procedures. The resulting models were of high quality owing to postprocessing steps taken to increase realism, including colorization and removal of imaging artifacts. Bone artifacts were common in CBCT, resulting in apparent dehiscences that, in most cases, were absent from the ground-truth micro-slicing data.
The new anatomy models are included in the VES version 3.5 freeware and provide case variation for training, which could help trainees learn more quickly and transfer skills more readily under variable practice conditions. Using CBCT for VR simulation models without postprocessing results in bone artifacts, which should be considered when clinical imaging is used for patient-specific simulation, surgical rehearsal, and planning.