Augmented Reality of the Middle Ear Combining Otoendoscopy and Temporal Bone Computed Tomography

Marroquin, Roberto*; Lalande, Alain*; Hussain, Raabid*; Guigou, Caroline; Grayeli, Alexis Bozorg*,†

doi: 10.1097/MAO.0000000000001922
Research Methodology

Hypothesis: Augmented reality (AR) may enhance otologic procedures by providing sub-millimetric accuracy and consolidating information on a single screen.

Background: Several issues related to otologic procedures can be addressed by an AR system that provides sub-millimetric precision, supplies a global view of the middle ear cleft, and unifies information on a single screen. The AR system is obtained by combining otoendoscopy with temporal bone computed tomography (CT).

Methods: Four human temporal bone specimens were explored by high-resolution CT-scan and dynamic otoendoscopy with video recordings. The initialization of the system consisted of a semi-automatic registration between the otoendoscopic video and the 3D CT-scan reconstruction of the middle ear. Endoscope movements were estimated by several computer vision techniques (feature detectors/descriptors and optical flow) and used to warp the CT-scan image to maintain its correspondence with the otoendoscopic video.
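As a concrete illustration of the feature-based tracking step described above, the following Python/OpenCV sketch estimates the motion between two consecutive endoscopic frames with SIFT and applies it to a CT-derived overlay. This is a minimal sketch under assumed conventions, not the authors' implementation; all function and variable names are hypothetical.

```python
import cv2
import numpy as np

def estimate_motion(prev_frame, curr_frame):
    """Estimate a homography mapping prev_frame onto curr_frame (hypothetical sketch)."""
    gray_prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

    # Detect SIFT keypoints and compute descriptors in both frames.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(gray_prev, None)
    kp2, des2 = sift.detectAndCompute(gray_curr, None)

    # Match descriptors and keep reliable correspondences (Lowe's ratio test).
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    if len(good) < 4:  # a homography needs at least 4 point pairs
        return None

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects outlier matches (e.g., from specular highlights).
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def warp_ct_overlay(ct_overlay, H, frame_shape):
    """Warp the registered CT rendering so it stays aligned with the video."""
    h, w = frame_shape[:2]
    return cv2.warpPerspective(ct_overlay, H, (w, h))
```

Chaining such frame-to-frame homographies from the initial semi-automatic registration is one plausible way to keep the CT overlay synchronized as the endoscope moves.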

Results: The system maintained synchronization between the CT-scan image and the otoendoscopic video in all experiments during slow and rapid (5–10 mm/s) endoscope movements. Among the tested algorithms, two feature-based methods, scale-invariant feature transform (SIFT) and speeded-up robust features (SURF), provided sub-millimetric mean tracking errors (0.38 ± 0.53 mm and 0.20 ± 0.16 mm, respectively) and adequate image refresh rates (11 and 17 frames per second, respectively) after 2 minutes of procedure with continuous endoscope movements.
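To put the reported refresh rates in context, a rough benchmark such as the sketch below could measure the frame rate a detector sustains on a recorded otoendoscopic video. The video path is hypothetical, and since SURF requires an opencv-contrib build with non-free modules enabled, only SIFT is timed here; this is an illustrative assumption, not the authors' evaluation protocol.

```python
import time
import cv2

# Hypothetical benchmark: how many frames per second can SIFT detection
# and description sustain on a recorded otoendoscopy video?
cap = cv2.VideoCapture("otoendoscopy.avi")  # hypothetical recording
sift = cv2.SIFT_create()

n_frames = 0
start = time.perf_counter()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sift.detectAndCompute(gray, None)
    n_frames += 1
cap.release()

elapsed = time.perf_counter() - start
print(f"{n_frames} frames in {elapsed:.1f} s -> {n_frames / elapsed:.1f} fps")
```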

Conclusion: Thanks to computer vision algorithms, a precise augmented reality combining video and 3D CT-scan data can be applied to otoendoscopy without conventional neuronavigation tracking.

*Le2i Laboratory, University of Burgundy-Franche-Comté

†Otolaryngology-Head and Neck Surgery Department, University Hospital of Dijon, Dijon, France

Address correspondence and reprint requests to Roberto Marroquin, M.Sc., Laboratoire Le2i, Université de Bourgogne Franche-Comté, UFR Sciences et Techniques, allée Alain Savary, 21000 Dijon, France; E-mail: roberto-enrique.marroquin-cortez@u-bourgogne.fr

Financial support was provided by Oticon Medical, Société ORL de Bourgogne, CNRS, and Collin Medical SA.

The authors disclose no conflicts of interest.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Website (http://journals.lww.com/otology-neurotology).

Copyright © 2018 by Otology & Neurotology, Inc.