For people with limited vision, wearable displays hold the potential to digitally enhance visual function. As these display technologies advance, it is important to understand their promise and limitations as vision aids.
The aim of this study was to test the potential of a consumer augmented reality (AR) device for improving the functional vision of people with near-complete vision loss.
An AR application that translates spatial information into high-contrast visual patterns was developed. Two experiments assessed the application's efficacy in improving vision: an exploratory study with four visually impaired participants, and a main controlled study with participants with simulated vision loss (n = 48). In both studies, performance was tested on a range of visual tasks (identifying a person's location, pose, and gesture; identifying objects; and moving around an unfamiliar space). Participants' accuracy and confidence on these tasks, as well as their subjective ratings of ease of mobility, were compared with and without augmented vision.
In the main study, the AR application was associated with substantially improved accuracy and confidence in object recognition (all P < .001) and, to a lesser degree, in gesture recognition (P < .05). There was no significant change in performance on identifying body poses or in subjective assessments of mobility, as compared with a control group.
Consumer AR devices may soon be able to support applications that improve the functional vision of users for some tasks. In our study, both artificially impaired participants and participants with near-complete vision loss performed tasks that they could not do without the AR system. Current limitations in system performance and form factor, as well as the risk of overconfidence, will need to be overcome.
This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (CC BY-NC-ND), which permits users to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.
1Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire
2Department of Computer Science, Dartmouth College, Hanover, New Hampshire
3School of Optometry and Vision Sciences, Cardiff University, Cardiff, United Kingdom *email@example.com
Submitted: November 16, 2017
Accepted: February 18, 2018
Funding/Support: Microsoft (US) (to EAC, WJ, and X-DY); and Oculus (to EAC).
Conflict of Interest Disclosure: None of the authors has reported a financial conflict of interest. The sponsors provided financial and material support but had no role in the study design, conduct, analysis, interpretation, or writing of the report.
Author Contributions and Acknowledgments: Conceptualization: MK, JG, MJD, WJ, X-DY, EAC; Data Curation: MK, JG, MJD, EAC; Formal Analysis: MK, MJD, EAC; Funding Acquisition: EAC; Investigation: MK, JG, MJD, WJ, EAC; Methodology: MK, JG, MJD, WJ, X-DY, EAC; Project Administration: MK, WJ, EAC; Software: MK, JG; Supervision: WJ, X-DY, EAC; Validation: MK, EAC; Visualization: MK, JG, MJD, EAC; Writing – Original Draft: MK, MJD, WJ, X-DY, EAC; Writing – Review & Editing: MK, JG, MJD, WJ, X-DY, EAC.
The authors thank the Carroll Center for the Blind, Bruce Howell, and Robin Held for their advice and assistance; Klara Barbarossa and Jonathan Huang for help with data collection; and two anonymous reviewers for helpful feedback on the manuscript.