Low Vision Enhancement with Head-mounted Video Display Systems: Are We There Yet?

Deemer, Ashley D., OD, FAAO1*; Bradley, Christopher K., PhD1; Ross, Nicole C., OD, FAAO2; Natale, Danielle M., OD, FAAO3; Itthipanichpong, Rath, MD1; Werblin, Frank S., PhD4; Massof, Robert W., PhD, FAAO1

doi: 10.1097/OPX.0000000000001278
FEATURE ARTICLE – PUBLIC ACCESS

SIGNIFICANCE: Head-mounted video display systems and image processing as a means of enhancing low vision are ideas that have been around for more than 20 years. Recent developments in virtual and augmented reality technology and software have opened up new research opportunities that will lead to benefits for low vision patients. Since the Visionics low vision enhancement system (LVES), the first commercially available head-mounted video display system for low vision, was engineered 20 years ago, various other devices have come and gone, with a resurgence of the technology over the past few years. In this article, we discuss the history of the development of LVESs, describe the current state of available technology by outlining existing systems, and explore future innovation and research in this area. Although LVESs have now been around for more than two decades, much remains to be explored. With the growing popularity and availability of virtual reality and augmented reality technologies, we can now integrate these methods within low vision rehabilitation to conduct more research on customized contrast-enhancement strategies, image motion compensation, image-remapping strategies, and binocular disparity, all while incorporating eye-tracking capabilities. Future research should use this technology and knowledge to learn more about the visual system of the low vision patient and apply that new information to create prescribable vision enhancement solutions for visually impaired individuals.

1Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, Maryland

2New England College of Optometry, Boston, Massachusetts

3LifeBridge Health Krieger Eye Institute, Baltimore, Maryland

4University of California, Berkeley, Berkeley, California

*adeemer1@jhmi.edu

Submitted: February 1, 2018

Accepted: June 28, 2018

Funding/Support: Reader's Digest Partners for Sight Foundation (to RWM); National Eye Institute (R01EY026617; to RWM); and National Eye Institute (grant R44EY028077; to FSW).

Conflict of Interest Disclosure: ADD, CKB, NCR, DMN, and RI – no financial conflicts of interest. FSW – shareholder and the chief executive officer of Visionize LLC; the author was responsible for the preparation of this manuscript. RWM – scientific advisory board of Evergaze LLC; the author was responsible for the preparation of this manuscript and the decision to submit this article for publication.

Author Contributions: Conceptualization: CKB, NCR, DMN, FSW, RWM; Investigation: RI; Methodology: RWM; Resources: NCR, RI, FSW, RWM; Software: CKB, RI, FSW; Supervision: RWM; Visualization: ADD, RWM; Writing – Original Draft: ADD; Writing – Review & Editing: ADD, CKB, NCR, RI, FSW, RWM.

Low vision refers to chronic disabling vision impairment caused by disorders of the visual system that cannot be corrected with glasses/contact lenses, medical treatment, or surgery. The types of vision impairments that come under the rubric of low vision include reductions in visual acuity, loss of contrast sensitivity, central scotomas, peripheral visual field loss, night blindness, slow glare recovery, photophobia, metamorphopsia, and oscillopsia. Often, low vision patients have combinations of these impairments. As a result of their vision impairments, low vision patients have difficulty with or are unable to perform valued activities.1,2 Consequently, low vision can have a significant impact on patients' daily functioning, independence, social interactions, quality of life, and ultimately physical and mental health.3–5

Low vision rehabilitation focuses on maximizing visual function through the use of adaptive strategies and accommodations and with vision-enhancing assistive technology. Linear magnification (e.g., closed-circuit television and large print), relative distance magnification (e.g., high add, microscope, and hand magnifier), and angular magnification (e.g., telescopes, binoculars, bioptics, and head-mounted video display systems) are the primary low vision enhancement strategies used to compensate for reduced visual acuity.6 Other common low vision enhancement strategies include illumination control with filters (e.g., sunglasses and color-tinted lenses), task lighting (e.g., high-intensity light sources and illuminated magnifiers), and head-mounted video display systems that use automatic gain control, which compensate for abnormal light and dark adaptation and glare recovery.6,7 Contrast enhancement (e.g., contrast stretching, contrast reversal, edge enhancement, and color and luminance contrast substitution), which is implemented primarily with closed-circuit television magnifiers, computer accommodation software, colored filters (to transform color contrast to luminance contrast), and head-mounted video display systems, is used to compensate for reduced contrast sensitivity.8,9

Traditionally, magnification has been accomplished using conventional optics; however, there are limitations including fixed level of magnification and reduced field of view, as well as narrow depth of field and close working distance for higher levels of near magnification. Furthermore, other than color filters, optical devices cannot be used to enhance image contrast. Head-mounted video display systems equipped with optical or digital zoom magnification from system-mounted forward-looking video cameras, illumination control, and contrast-enhancement capabilities have been used in low vision rehabilitation for more than 20 years to overcome the limitations of conventional optical devices.7,10–12 These systems are intended to enable hands-free functioning, provide binocular or biocular (same image presented to each eye without retinal disparity) viewing, and modify the image presented to the retina in real time to compensate for the patient's specific visual limitations under changing viewing conditions.13,14 In principle, there is an image-processing strategy for each type of vision impairment that will optimally enhance the patient's perception of visual information in the retinal image. The ideal head-mounted low vision enhancement system would be able to compensate for each type of vision impairment, without compromising field of view, binocularity, working distance, and resolution, and more importantly aid in the effective and efficient performance of the varied tasks of daily living without forcing the user to accept performance trade-offs to accommodate system limitations.15 In recent years, there have been remarkable advances in personal computing (e.g., smartphones), making customized real-time digital image processing to optimally enhance low vision realizable and affordable. The big question now is: Do we know enough about visual impairments and their relation to daily functioning to design customized low vision enhancement algorithms that would be optimal for the individual patient?

Over the past 25 years, progress has been relatively slow in developing and implementing image-processing algorithms that can provide customizable enhancement of the patient's view of the environment. For example, customized contrast-enhancement and image-remapping strategies first demonstrated in the laboratory more than three decades ago16,17 have not yet been implemented in commercial head-mounted low vision enhancement systems. Although the past rate of progress has been limited by the capabilities and cost of enabling technology, recent advances in cameras, displays, and computing power now make adoption of vision enhancement strategies demonstrated in the laboratory feasible, if not imminent. The purposes of this article are to provide a review of the current state of head-mounted low vision enhancement system technology and to suggest areas of research and development in personalized digital image processing that could be translated to practice in the future.

HEAD-MOUNTED LOW VISION ENHANCEMENT SYSTEMS

The Visionics Low Vision Enhancement System (Golden Valley, MN) was the first commercially available low vision enhancement system based on a head-mounted video display system.18 The displays were two 19-mm-diameter black-and-white cathode ray tubes mounted in the temple arms of the headset and imaged at the user's far point for each eye through exit pupils in the plane of the user's entrance pupils by field-correcting relay optics and final aspheric mirrors. The display images were 50 × 40 degrees with 40-degree binocular overlap, yielding a 60-degree horizontal binocular field of view. Display resolution was 5 arcmin/pixel (equivalent to 20/100 visual acuity). A monochrome charge-coupled-device video camera in front of each eye provided unmagnified stereo video images of the environment for orientation. A single, tiltable (level to 45-degree down gaze), center-mounted, “cyclopean” video camera equipped with motor-driven optical zoom magnification (×1.5 to ×12) and autofocus (with an auxiliary flip-up macro lens) provided the same magnified video image to both eyes. All video cameras had automatic gain control to maintain constant average display luminance. The Visionics low vision enhancement system was battery powered with user controls for switching between orientation and zoom cameras or external video and switching between manual focus control and autofocus. The user also could control magnification, contrast stretching, and contrast reversal.
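
The stated acuity equivalence follows from Snellen notation, in which 20/20 corresponds to a minimum angle of resolution (MAR) of 1 arcmin, so a display sampled at 5 arcmin/pixel cannot present detail finer than a MAR of 5 arcmin:

    \[ \text{Snellen denominator} \approx 20 \times \frac{\text{MAR}}{1\,\text{arcmin}} = 20 \times 5 = 100 . \]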

Visionics low vision enhancement system competitors that came on the market in the late 1990s and had similar features to the Visionics low vision enhancement system included the Enhanced Vision Systems V-max (Enhanced Vision Systems, Inc., Huntington Beach, CA), the Keeler NuVision (Keeler Ophthalmic Instruments, Inc., Broomall, PA), the Innoventions Magnicam (Innoventions, Inc., Conifer, CO), and the Bartimaeus Clarity TravelViewer-to-go (Bartimaeus Group, McLean, VA), the latter two of which used stand-mounted or handheld instead of head-mounted video cameras.13,14 The most enduring head-mounted video display low vision enhancement system was the Enhanced Vision Systems Jordy (Enhanced Vision Systems, Inc., Huntington Beach, CA), the successor to the V-max.

Several evaluative studies were conducted with these early low vision enhancement systems. The use of these devices resulted in significantly better distance and intermediate task performance than did previously prescribed optical aids.19 The head-mounted low vision enhancement system technology provided some improvement in home performance of activities of daily living, but optical aids remained optimal for most of those tasks. Younger patients performed better overall.20 Newly diagnosed patients responded most positively to the technology; otherwise, preference could not be predicted by age, sex, diagnosis, or previous electronic magnification experience.21 No significant differences in outcomes between low vision enhancement system devices were reported.20,21

All early head-mounted low vision enhancement systems eventually faded from the marketplace. To incorporate the computing power needed for image-enhancement strategies, large computer processing systems and hardware were needed. At the time, these were expensive components, and the companies serving the boutique low vision market did not have the resources to advance the technology to the next level. However, recently, head-mounted low vision enhancement system technology has made a comeback. Products currently available include the eSight Eyewear (eSight Corp., Toronto, ON, Canada), NuEyes Pro Smartglasses (NuEyes USA, Newport Beach, CA), CyberTimez Cyber Eyez (Cyber Timez, Winchester, VA), Evergaze seeBOOST (Evergaze, LLC, Richardson, TX), IrisVision (Visionize, LLC, Berkeley, CA), and the return of a redesigned Enhanced Vision Systems Jordy. These new systems use more modern color microdisplays or cell phone displays (liquid crystal display or organic light-emitting diode) and smaller, higher-resolution color video cameras with cell phone camera optics. Table 1 compares the specifications of these new head-mounted low vision enhancement systems with each other and with the specifications of the original Visionics low vision enhancement system. With the exception of the IrisVision, they all have smaller fields of view than did the original Visionics low vision enhancement system. Some devices, such as the NuEyes, Cyber Eyez, and eSight, also incorporate nonvision features such as optical character recognition for text-to-speech and speech-output artificial intelligence software for face and object recognition. With major investments in technology development to serve the consumer virtual and augmented reality and personal theater markets, feature-rich enabling technology for head-mounted low vision enhancement systems undoubtedly will continue to evolve and advance to become more powerful and more attractive to wear at lower cost. To take advantage of these trends, attention now has to be focused on developing and testing image-processing strategies to optimize low vision enhancement.

TABLE 1

CONTRAST-ENHANCEMENT STRATEGIES

Reduced contrast sensitivity is common among low vision patients and is often cited as a major contributor to reductions in the patient's ability to function visually.22,23 Patients with low vision often report difficulty seeing facial features, interpreting facial expressions, and recognizing familiar people. Significant contrast sensitivity loss can make seeing facial details, which are already low in contrast, even more difficult.24–26 Difficulties with orientation and mobility also are attributed frequently to reductions in contrast sensitivity in this population.27,28 To generalize, one might expect that any activity that depends on recognition, identification, and interpretation of visual information depends heavily on the person's ability to see varying levels of detail in the image.

Most vision scientists prefer to describe images and image processing in the spatial frequency domain (sums of sinusoidal spatial modulations of luminance along each meridian with the amplitude and phase varying as a function of spatial frequency and of orientation). In this framework, contrast sensitivity at each spatial frequency can be interpreted as how much the amplitude at each frequency is reduced by the visual system. Thus, the visual system is characterized as a linear filter for which the contrast sensitivity function is analogous to a modulation transfer function.

As shown with the y intercepts in the top panel and the scatterplot in the middle panel of Fig. 1, the maximum height of the contrast sensitivity function corresponds to contrast sensitivity measured using a Pelli-Robson, MARS, or other letter chart, and the cutoff frequency (the spatial frequency that requires 100% contrast to be visible, as shown with the x intercepts in the top panel of Fig. 1) corresponds to visual acuity measured using an ETDRS or other high-contrast letter chart (shown in the bottom panel of Fig. 1).30 As illustrated in the top panel of Fig. 1, Chung and Legge29 demonstrated that, to a good approximation, the contrast sensitivity function has the same shape for people with low vision as it does for normally sighted people when plotted on log contrast sensitivity versus log spatial frequency coordinates.
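
To make the correspondence between these chart measures and the underlying function concrete, the following sketch (Python with NumPy) models the contrast sensitivity function as a log-parabola on log-log axes, which is a common approximation rather than the specific fit used by Chung and Legge,29 and solves for the cutoff frequency from an assumed peak sensitivity and peak frequency; all parameter values are illustrative, not prescriptive.

    import numpy as np

    def csf(f, s_max, f_peak, k=2.0):
        """Log-parabola approximation to a contrast sensitivity function.
        f      : spatial frequency, cycles/degree
        s_max  : peak contrast sensitivity (letter-chart-like measure)
        f_peak : spatial frequency of the peak, cycles/degree
        k      : width constant of the parabola on log-log axes (illustrative)
        """
        return s_max * 10.0 ** (-k * (np.log10(f) - np.log10(f_peak)) ** 2)

    def cutoff_frequency(s_max, f_peak, k=2.0):
        """Highest frequency visible at 100% contrast (where sensitivity = 1)."""
        return f_peak * 10.0 ** np.sqrt(np.log10(s_max) / k)

    # Illustrative parameters: same curve shape, shifted down and to the left
    # for the low vision observer, as described in the text.
    for label, s_max, f_peak in [("normal", 200.0, 4.0), ("low vision", 20.0, 1.0)]:
        print(label, "peak sensitivity:", s_max,
              "cutoff: %.1f cycles/degree" % cutoff_frequency(s_max, f_peak))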

FIGURE 1

The contrast threshold function is the inverse of the contrast sensitivity function (−log contrast sensitivity). As illustrated by the area within the red and black curves in Fig. 2, only those spatial frequency components of the image whose contrast falls above the contrast threshold function will be visible to the person. For most natural images, including images of faces, contrast decreases inversely proportional to spatial frequency (1/f contrast spectrum, which when plotted on log contrast vs. log spatial frequency coordinates is a line with negative slope).31,32 As shown by the solid green line in Fig. 2, a normal face at 10 ft has high contrast at low spatial frequencies and a 1/f drop in contrast with increasing spatial frequency. At approximately 10 cycles/degree, contrast in the face image is too low to be visible to the normally sighted person (the solid line falls below the normal contrast threshold function). For the low vision patient, all information in the face image greater than 3 cycles/degree is invisible. If the face image is magnified by ×5, its contrast spectrum will be shifted to the left by 0.7 log cycles/degree (dashed green line). However, because this patient's contrast thresholds are elevated overall by 1 log unit, magnification would not improve the visibility of the part of the face contrast spectrum that was visible to the normally sighted person but below the patient's contrast threshold in the unmagnified image (falls within the black curve but not the red curve). Indeed, in this example, magnification could make the visibility of the face image even worse for the patient. Contrast at spatial frequencies between 0.5 and 1 log cycles/degree on the display has to be increased by at least an amount ranging from 0.1 to 0.75 log unit over the frequencies of interest to optimize the visibility of the unmagnified face image. Peli et al.16,33 first demonstrated the feasibility of this frequency-selective contrast-enhancement strategy in 1984, but a considerable amount of work still needs to be done to develop and test the optimal contrast-enhancement algorithm for low vision users representing a wide variety of visual impairment.
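
A minimal sketch of such frequency-selective enhancement (Python with NumPy) is given below: it multiplies the amplitude of a chosen band of spatial frequencies in the luminance image by a fixed gain in the Fourier domain. The band limits, gain, and pixels-per-degree value are illustrative assumptions echoing the example above; this conveys the general idea only and is not the algorithm of Peli et al.

    import numpy as np

    def boost_band(luminance, px_per_deg, f_lo, f_hi, gain):
        """Multiply the amplitude of spatial frequencies between f_lo and f_hi
        (cycles/degree) by `gain`, leaving other frequencies untouched.

        luminance  : 2-D float array (0..1), luminance channel of one video frame
        px_per_deg : display sampling density, pixels per degree of visual angle
        """
        rows, cols = luminance.shape
        fy = np.fft.fftfreq(rows) * px_per_deg    # cycles/degree along y
        fx = np.fft.fftfreq(cols) * px_per_deg    # cycles/degree along x
        f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))   # radial frequency map

        spectrum = np.fft.fft2(luminance)
        gain_map = np.where((f >= f_lo) & (f <= f_hi), gain, 1.0)
        enhanced = np.real(np.fft.ifft2(spectrum * gain_map))
        return np.clip(enhanced, 0.0, 1.0)

    # Example (assumed values): raise 3-10 cycles/degree by 0.5 log unit (about x3.2)
    # for a patient whose thresholds in that band sit above the face spectrum.
    # out = boost_band(frame_luminance, px_per_deg=30.0, f_lo=3.0, f_hi=10.0, gain=10**0.5)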

FIGURE 2

As shown in Fig. 3, for a normally sighted person, both the maximum contrast sensitivity and the cutoff frequency increase with mean luminance.34,35 The same luminance dependence of the contrast sensitivity function has been shown for patients with retinal diseases.36 The increase in these two parameters with increasing luminance explains why increasing ambient light can be helpful to low vision patients, although manipulating the ambient light level alone does nothing to change image contrast, which in the environment is determined by the ratio of reflectances. Although visual acuity/cutoff frequency increases with luminance for both normally sighted and low vision observers, that occurs only over a limited range, after which it asymptotes at a maximum value. Therefore, contrast enhancement alone is not sufficient compensation for most patients, and it also is necessary to magnify the image to compensate for the loss of resolution (no benefit is gained from enhancing contrast of spatial frequencies that exceed the cutoff). However, as illustrated in Fig. 2, middle spatial frequency bands (e.g., 3 to 7 cycles/degree) often fall within the patient's resolution limit yet carry image contrast that is still below the patient's contrast threshold.29 Using digital image processing with current technology, we can now enhance the contrast of selected spatial frequency bands in live video images without significant frame delays.

FIGURE 3

It has been demonstrated in past studies that individuals with low vision can recognize frequency-selective contrast-enhanced images of faces better than unprocessed images.16,33,37 In addition, removing some image detail may actually improve recognition by reducing crowding in the image for the low vision patient.9 Because of between-patient variations in the contrast sensitivity function,29 it most likely will be necessary to custom prescribe the optimal parameters for contrast enhancement.

Most video magnifiers incorporate contrast stretching. As illustrated in the middle panel of Fig. 4, contrast stretching consists of mapping pixel intensities above a criterion value to the maximum intensity, mapping pixel intensities below another criterion to the minimum intensity, and linearly rescaling the pixel intensities that fall between the two criteria. To minimize distorting the color or introducing color artifacts, contrast stretching should be limited to the luminance component of the video signal (L channel in the Lab color space), as was done in the middle panel of Fig. 4. However, this type of contrast stretching at the pixel level does not take spatial frequency information into account.
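
As a concrete illustration, the sketch below (Python with OpenCV and NumPy) applies this two-criterion stretch to the L channel only, leaving the color channels untouched; the percentile criteria are illustrative assumptions rather than recommended values.

    import cv2
    import numpy as np

    def stretch_contrast_lab(bgr, low_pct=5, high_pct=95):
        """Contrast stretching restricted to the L channel of Lab.

        Lightness at or below the low percentile maps to 0, at or above the
        high percentile maps to 255, and values in between are rescaled
        linearly; the a and b (color) channels are left untouched.
        """
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
        L, a, b = cv2.split(lab)
        lo, hi = np.percentile(L, (low_pct, high_pct))
        L = np.clip((L.astype(np.float32) - lo) * 255.0 / max(hi - lo, 1e-6), 0, 255)
        return cv2.cvtColor(cv2.merge([L.astype(np.uint8), a, b]), cv2.COLOR_LAB2BGR)

    # Usage (assumed file name):
    # frame = cv2.imread("face.png")
    # out = stretch_contrast_lab(frame)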

FIGURE 4

Edge enhancement is another contrast-enhancement strategy that many of the current head-mounted low vision enhancement systems use to compensate for reduced contrast sensitivity. Edge enhancement selectively stretches the contrast at sharp luminance gradients in the image (at edges of objects and features).38,39 Usually, edges are defined by high spatial frequencies. One very old photographic method that could be applied to enhancing contrast of digital video images in a defined frequency band is unsharp masking. For this technique, the video image is masked with a blurred negative copy of the image (low frequencies only), which leaves only frequencies that are higher than the cutoff frequency of the mask. The contrast of the masked image is stretched and then multiplied with the original image. The rightmost panel of Fig. 4 illustrates the result of unsharp masking applied only to the luminance channel of the original image in the leftmost panel. The technology now exists to implement these contrast-enhancement strategies, but our knowledge of what constitutes optimal contrast enhancement for different patients and how effective contrast enhancement can be for different patients is still inadequate.
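
The sketch below (Python with OpenCV) shows one common digital counterpart of unsharp masking applied to the luminance channel only: rather than literally sandwiching a photographic negative, it subtracts a blurred (low-frequency) copy to isolate the higher frequencies and adds an amplified version back. The blur width and amount are illustrative assumptions.

    import cv2
    import numpy as np

    def unsharp_mask_luminance(bgr, sigma=5.0, amount=1.5):
        """Edge/band enhancement by unsharp masking on the L channel of Lab.

        The blurred copy keeps only low spatial frequencies; subtracting it
        from the original leaves the frequencies above the cutoff of the mask,
        which are then amplified by `amount` and added back.  Color channels
        are untouched.
        """
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        L = lab[:, :, 0]
        blurred = cv2.GaussianBlur(L, ksize=(0, 0), sigmaX=sigma)   # low-pass "mask"
        high_pass = L - blurred                                     # high frequencies only
        lab[:, :, 0] = np.clip(L + amount * high_pass, 0, 255)
        return cv2.cvtColor(lab.astype(np.uint8), cv2.COLOR_LAB2BGR)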

IMAGE-REMAPPING STRATEGIES

Typically, there is one-to-one, one-to-many (in the case of magnification), or many-to-one (in the case of minification) mapping of camera pixels to display pixels. Loshin and Juday17 suggested that pixel mapping could be customized to distort images to prevent visual information from falling in the patient's scotoma. As shown in Fig. 5, the image can be torn and stretched around the scotoma. This form of local image remapping produces distortions in the image. Some demonstration studies have been reported that suggest that such remapping could be beneficial to the patient, especially for reading.40–42 However, to properly implement this remapping strategy, the tear and distortion of the image on the display would have to be stabilized on the retina to keep it registered with the patient's scotoma regardless of the direction of gaze. This approach requires eye tracking built into the head-mounted system with real-time image remapping occurring at video frame rates.
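
One minimal way to sketch such local remapping (Python with OpenCV) is a radial warp that pushes image content out of a circular scotoma at a fixed display location and compresses it into the surrounding annulus. This is only one of many possible warps, it is not Loshin and Juday's remapper, and it omits the eye tracking that true retinal stabilization would require; the center, scotoma radius, and outer radius are illustrative assumptions.

    import cv2
    import numpy as np

    def remap_around_scotoma(frame, center, R, R_outer):
        """Warp `frame` so that content that would fall inside a circular
        scotoma (radius R at `center`, in display coordinates) is displaced
        outward and compressed into the annulus between R and R_outer.
        """
        h, w = frame.shape[:2]
        y, x = np.indices((h, w), dtype=np.float32)
        dx, dy = x - center[0], y - center[1]
        r_out = np.hypot(dx, dy)                      # radius of each display pixel

        # Inverse map: display radii in [R, R_outer] sample source radii in
        # [0, R_outer]; beyond R_outer the image is unchanged.  Inside R the
        # display repeats the source center (it is invisible to the patient).
        scale = R_outer / max(R_outer - R, 1e-6)
        r_src = np.where(r_out < R_outer, np.maximum(r_out - R, 0.0) * scale, r_out)

        ratio = r_src / np.maximum(r_out, 1e-6)
        map_x = (center[0] + dx * ratio).astype(np.float32)
        map_y = (center[1] + dy * ratio).astype(np.float32)
        return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)

    # Example (assumed values): 60-pixel scotoma at the frame center,
    # content redistributed within a 240-pixel radius.
    # out = remap_around_scotoma(frame, center=(320, 240), R=60, R_outer=240)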

FIGURE 5

We developed an image remapping method for magnification, which could best be described as a virtual bioptic telescope. This new magnification strategy, which is now incorporated in the IrisVision, consists of a magnified region of interest, called the “magnification bubble” that is embedded in a larger unmagnified field of view. Our approach borrows from some of the earlier ideas of Loshin and Juday,17 but like a bioptic telescope, head movements rather than eye movements are used to relocate the bubble to a new location. This approach avoids inherent problems associated with current eye-tracking systems such as poor accuracy and precision, poor reliability, time lags due to frame delays, and difficulty with calibrations for users who cannot fixate reliably. The size and shape of the magnification bubble can be manipulated by the user to accommodate individual preferences and the requirements of specific tasks. For example, as shown in Figs. 6A, B, a rectangular view might be optimal for reading and a circular view for seeing facial expressions. The amount of magnification within the bubble also can be controlled by the user. Unlike a conventional bioptic telescope, which overlays a magnified image on the unmagnified field of view and creates an artifactual ring scotoma, the magnification bubble distorts the image by remapping the transition from the magnified image in the bubble to the unmagnified surrounding field so as not to overlay and lose any visual information.
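
The sketch below (Python with OpenCV) illustrates the general idea of such a bubble as a radial remap: pixels inside an inner radius are magnified, pixels beyond an outer radius are left unmagnified, and the annulus between them carries a smooth, distorted transition so that no part of the scene is covered over. It is a simplified illustration under assumed parameters, not the IrisVision implementation.

    import cv2
    import numpy as np

    def magnification_bubble(frame, center, r_inner, r_outer, mag):
        """Embed a magnified region of interest in an otherwise unmagnified view.

        Inside r_inner the image is magnified by `mag` about `center`; beyond
        r_outer it is unchanged; between the two radii the local magnification
        blends smoothly from `mag` down to 1, so the transition is distorted
        rather than occluding (no ring scotoma as with a bioptic telescope).
        """
        h, w = frame.shape[:2]
        y, x = np.indices((h, w), dtype=np.float32)
        dx, dy = x - center[0], y - center[1]
        r = np.hypot(dx, dy)

        t = np.clip((r - r_inner) / max(r_outer - r_inner, 1e-6), 0.0, 1.0)
        local_mag = mag + (1.0 - mag) * t          # mag inside, 1.0 outside
        ratio = 1.0 / local_mag                    # inverse map: display -> source

        map_x = (center[0] + dx * ratio).astype(np.float32)
        map_y = (center[1] + dy * ratio).astype(np.float32)
        return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)

    # Example (assumed values): circular bubble with x3 magnification.
    # out = magnification_bubble(frame, center=(320, 240), r_inner=80, r_outer=160, mag=3.0)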

FIGURE 6

Presumably, low vision patients fixate the bubble with the macula, or with a preferred retinal locus if the macula is impaired, to look at the magnified area of interest.43–45 Experiments simulating scotomas in normally sighted subjects imply that training can influence the development and location of the preferred retinal locus.46,47 Because there are currently no eye-tracking capabilities incorporated into the system, head movements alone change the visual information that falls in the bubble. Users can accomplish this movement of the bubble to a new region of interest quite comfortably and with little training. In addition to being able to manipulate magnification within the bubble and to change the size and shape of the bubble, the user has the option of manually relocating the bubble to any part of the video display. With an integrated eye-tracking system in the future, it might be feasible to use eye movements to relocate the bubble on the display so that it always stays superposed on the part of the retina used for fixation (assuming that the preferred retinal locus is consistent under different viewing conditions and directions of gaze). Saunders and Woods48 have discussed system requirements that must be met to make such gaze-controlled image processing acceptable to the user. The development of accurate, precise, and reliable gaze-controlled real-time image processing in head-mounted displays, which is being pursued for many different applications, creates research opportunities for problems that have received relatively little attention in the low vision rehabilitation field.

One recent study completed by Aguilar and Castet49 tested a gaze-controlled system that magnified a portion of text while maintaining global viewing of the rest of the text. Their results suggest that user preference and reading speed were greater for the gaze-controlled condition when compared with uniformly applied magnification without any specificity to region of interest (mimicking commercial closed-circuit televisions), but there was no significant difference between the gaze-controlled system and a system with zoom-induced text reformatting. We know that limitations exist in eye pointing and visual search with this type of gaze-contingent system from experiments described by Ashmore et al.,50 who used a fisheye magnification system in normally sighted observers. Based on their findings, hiding the magnification bubble during visual search leads to improvements in speed and accuracy over eye pointing with no bubble or with a bubble that is continually attached to the user's gaze. We also know that fixation stability in patients with central vision impairment is generally poor,51,52 which may complicate calibration of eye-tracking systems and the subsequent image remapping. More research is needed to determine the most effective way to incorporate eye tracking and gaze control with magnification within low vision enhancement systems when performing various tasks.

Conceptually, one could extend the image-remapping strategy to produce custom distortions in images that are designed to undo the effects of, and thereby correct, metamorphopsia. This approach also would require precise eye tracking to register the compensating image distortions with the part of the retina experiencing metamorphopsia. Virtual and augmented reality systems are creating a demand for high-performance eye-tracking systems in head-mounted displays to increase computing efficiency with gaze-referenced level-of-detail rendering of graphical objects, so ongoing research and development are soon likely to produce the enabling eye-tracking technology at a price consumers can afford.

MOTION COMPENSATION STRATEGIES

The visual vestibulo-ocular reflex produces eye movements that compensate for head movements to keep the image stationary on the retina (i.e., “doll's eye” phenomenon). When viewing an angularly magnified image, the velocity of image motion relative to head motion is magnified by the same factor.53 When there is a mismatch between velocities recorded by the visual system and those recorded by the vestibular system, the individual can adapt to a limited extent with neural changes in the gain of the vestibulo-ocular reflex. However, the range of physiological gain change is too small to be useful for the levels of magnification used for low vision enhancement, and this results in image slip on the retina.53,54 With increased levels of magnification, the image presented to the retina spans a larger field of view, and the magnitude of vestibulo-ocular reflex eye movements needed to properly compensate for image motion eventually falls outside the range of gain control of the reflex. Image motion velocities greater than 20 degrees/s (which, e.g., are commonly experienced when walking) progressively decrease contrast sensitivity and visual acuity with increasing velocity,55 which often is described clinically as dynamic visual acuity.7,14 Because the camera incorporated in a head-mounted low vision enhancement system moves with the users' head movements, magnified image motion not only decreases image resolution but also increases the risk of motion sickness and other symptoms of visual discomfort as the user intentionally or inadvertently moves his/her head.56 Embedding the magnification bubble in a bioptic design allows for the visual information outside the bubble to be presented with no added magnification, which minimizes the overall image motion experienced with increased levels of magnification. Although this strategy reduces the susceptibility to motion sickness, magnified image motion within the bubble still imposes limits on the user's performance because of dynamic visual acuity limitations on resolution.14
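
To a first approximation, if the display magnifies the scene by a factor M while the vestibulo-ocular reflex gain remains near its natural value of about 1, the retinal slip velocity during a head rotation at angular velocity ω is roughly

    \[ \dot{s} \approx (M - 1)\,\omega . \]

With illustrative numbers, a modest 10 degrees/s head rotation viewed at ×5 magnification produces roughly 40 degrees/s of slip, well above the approximately 20 degrees/s at which contrast sensitivity and acuity begin to degrade.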

Mismatches between head and image motion beyond the range of vestibulo-ocular reflex adaptation are a problem that has plagued virtual and augmented reality systems for decades.57–59 However, advances in angular and linear motion sensor technology, now routinely incorporated in smartphones and modern virtual and augmented reality systems, and increases in computer graphics speed and power have made it possible to inexpensively match image motion to head motion plus the vestibulo-ocular reflex. Improvements in the accuracy and precision of microelectromechanical systems (MEMS) accelerometers and gyroscopes aid in better calculation of the linear and angular motion needed to compensate for magnified image motion. For example, with the current capabilities of a smartphone, it is now possible to convert angular magnification to linear magnification in head-mounted low vision enhancement systems. This strategy has been implemented in the IrisVision by texture mapping a high-resolution image from the camera, or from stored or streamed media content, onto a virtual screen that moves in virtual reality at the negative of the velocity of natural head movements. This strategy eliminates artifactual image motion from the visual vestibulo-ocular reflex when viewing snapshot images and media content. With further development, this approach has the potential of eliminating magnified image motion from head-mounted video camera movements. This strategy makes any amount of magnification practical in an arbitrarily large field of view that can be explored with head movements.
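
A much-simplified, single-axis sketch of the virtual-screen idea is shown below (Python with NumPy): the crop window sent to the display is slid across a large, already-magnified texture by the head yaw reported by the gyroscope, so the content behaves like a screen fixed in the world and the natural vestibulo-ocular reflex remains appropriate. The function, its parameters, and the one-axis treatment are assumptions for illustration; the IrisVision texture-maps onto a virtual screen within a full three-dimensional rendering pipeline.

    import numpy as np

    def stabilized_crop(texture, yaw_deg, px_per_deg, crop_w, crop_h):
        """Select the part of a world-fixed, magnified texture to display for
        the current head yaw so the content appears stationary in the world.

        texture    : 2-D or 3-D image array (the magnified snapshot or scene)
        yaw_deg    : head yaw from the IMU, in degrees (0 = straight ahead)
        px_per_deg : display pixels per degree of visual angle (crop shown 1:1)
        """
        th, tw = texture.shape[:2]
        # As the head turns right, the window slides right across the texture,
        # so the displayed content drifts left on the screen exactly as a real,
        # stationary scene would, which is motion the normal VOR can compensate.
        x0 = int(round((tw - crop_w) / 2 + yaw_deg * px_per_deg))
        x0 = int(np.clip(x0, 0, tw - crop_w))
        y0 = (th - crop_h) // 2
        return texture[y0:y0 + crop_h, x0:x0 + crop_w]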

When coupled with eye tracking, image motion compensation strategies could be used to neutralize oscillopsia in conditions such as nystagmus or bilateral vestibular loss. One could also consider extending motion compensation strategies to enhancement of visual flow fields, which are important to perceiving self-motion in the environment, maintaining balance and preventing falls, and judging closing velocities to prevent collisions. Such an extension is likely to press the limits of current technology. With the constantly accelerating rate of technology development, it is not too early to begin investigating these possibilities.

AUGMENTED REALITY STRATEGIES

Unlike virtual reality, which refers to a computer-generated environment in which the user is immersed and with which the user can interact, augmented reality refers to graphic overlays on, or graphic objects inserted in, live images of the real environment. Peli's60 strategy of vision multiplexing, which can be implemented optically, digitally, or with hybrid technology, is a pioneering application of augmented reality to low vision enhancement. The term “multiplexing” is used when multiple streams of information share a single mode of transmission. In the case of vision, multiplexing refers to a controlled form of spatial diplopia (superposed semitransparent images that are visible simultaneously) or temporal diplopia (alternate presentation or alternate suppression of superposed images).60,61 Two examples of optical strategies for implementing vision multiplexing are Peli prisms,62 which in the case of hemianopic visual fields superimpose images from unseen portions of the peripheral field onto nonfixating seeing areas of remaining field, and the intraocular telescope, which magnifies a wide-field, centrally viewed image in one eye only. Both methods can be implemented in augmented reality with a head-mounted (e.g., seeBOOST) or heads-up (e.g., Google Glass and Cyber Eyez) display.60
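
In a digital head-mounted implementation, spatial multiplexing can be as simple as alpha-blending a shifted, minified copy of the otherwise unseen hemifield over the seeing hemifield so that both are visible simultaneously. The sketch below (Python with OpenCV) is only a schematic of that idea under assumed shift, minification, and transparency values; it is not Peli's prism design nor any commercial implementation.

    import cv2
    import numpy as np

    def multiplex_hemifield(frame, shift_px=200, scale=0.5, alpha=0.4):
        """Superimpose a semitransparent, minified copy of the blind (here:
        left) hemifield onto the seeing hemifield of the same frame.
        """
        h, w = frame.shape[:2]
        blind = frame[:, : w // 2]                             # unseen half of the scene
        small = cv2.resize(blind, None, fx=scale, fy=scale)    # minify it
        out = frame.astype(np.float32)

        sh, sw = small.shape[:2]
        y0 = (h - sh) // 2
        x0 = min(w // 2 + shift_px, w - sw)                    # place it in the seeing half
        roi = out[y0:y0 + sh, x0:x0 + sw]
        out[y0:y0 + sh, x0:x0 + sw] = (1 - alpha) * roi + alpha * small
        return np.clip(out, 0, 255).astype(np.uint8)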

Considerable basic research has been conducted on perceptual “filling-in” phenomena with artificial scotomas and the physiological blind spot.63,64 Most low vision patients with central scotomas are unaware of blind areas in their vision because the visual system covers them over with images that blend in with the background.65,66 This filling-in phenomenon causes objects, words, facial features, and other visual information to unexpectedly vanish or be replaced with incomprehensible patterns manufactured by the visual system. Consequently, the scotoma interferes with reading, visual search, face recognition, reaching for and grasping objects, and detecting obstacles. Making the scotoma visible by controlling the filling-in (e.g., with stabilized graphic annuli that are distinct from the rest of the background)67 may prove to be helpful in that at least the person would know where the blind spot is and how it has to be moved to look behind it.68 Currently, we do not know what kind of filling-in will occur with augmented reality graphic images (or image remapping around scotomas as discussed earlier) and whether it will cause confusion. There is much research that still needs to be done to further understand the filling-in process and to determine if and how we can control the image that fills in the blind spot.

Combined with object and face recognition artificial intelligence software (e.g., OrCam MyEye [OrCam Technologies Ltd, Jerusalem, Israel] and Microsoft Seeing AI application [Microsoft Corp., Redmond, WA]), augmented reality strategies could be used to assist low vision patients by augmenting visible but uninterpretable visual information (e.g., highlighting obstacles to assist mobility, highlighting scan paths or fixation history to assist visual search, and tagging or captioning objects and faces to assist with recognition). Combined with Global Positioning System–based navigation software, augmented reality strategies also could be used to assist low vision patients with wayfinding. Many such systems undoubtedly will be developed for the normally sighted consumer, so the implementation of such hybrid augmented reality strategies might be more of an issue of adaptation or accommodation rather than an independent effort to develop a dedicated low vision enhancement system product.

FUTURE INNOVATION

The low vision field needs to prepare to realize the promise of head-mounted low vision enhancement systems as virtual reality and augmented reality technology development expands. This preparation can be accomplished only with innovative low vision research. We need to learn more about eye movements in the visually impaired population, how the neural visual system adapts to visual impairments, and how people with different types and degrees of visual impairment respond to various image-processing strategies that are now possible to implement at video frame rates with head-mounted computer technology. Using these strategies to simplify and improve fixation patterns may, for example, enhance reading performance in patients with central vision impairment.69 Future advances in head-mounted display technology that can be used as a platform for the ultimate head-mounted low vision enhancement system still need to address the issues of size, weight, field of view, resolution, battery life, user interfacing, and attractive design while also having the computing power to integrate customizable low vision enhancement operations. The development of display technology with improved diffractive and lightweight optics has helped in the advancement of these systems, but to increase both resolution and field of view, even better high-density displays must be developed along with high-density drivers. Fortunately, the requirements for low vision applications are less demanding in this regard than are the requirements of the general augmented reality and virtual reality consumer market, so the burden of enabling hardware development does not fall on the low vision industry.

Arguably, the single most important required innovation still outstanding is to provide binocular disparity information to the patient under conditions of magnification. Because scotomas are rarely binocularly symmetric, binocular viewing, with its increased field of view and accompanying depth cues, offers potential advantages to the patient. However, diplopia and rivalry have negative consequences, and constant suppression would defeat attempts to gain a binocular advantage.70,71 Also, scotomas and monocularly measured preferred retinal locations are not likely to fall on corresponding points.72 We do not know how patients respond when this happens, but it is certainly an area of needed research and exploration. Binocular disparity of objects in different depth planes drives vergence, which will alter the overlap of scotomas in the two eyes, resulting in a change in the binocular scotoma.73 This result has consequences for avoiding diplopia in remapped images. Diplopia also is a concern for magnification within a region of interest centered on the preferred retinal locus because angular magnification (pixel magnification) is used, which also magnifies binocular disparity. Thus, the unmagnified part of the scene might be fused even though its disparity is not necessarily zero (because of Panum's area), and angular magnification within the region of interest can then magnify the disparity beyond Panum's tolerance limit, causing diplopia.
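
A rough worked example (with illustrative numbers) shows how quickly magnified disparity can exceed fusional limits. For interpupillary distance a, fixation distance d, and an object lying Δd behind the fixation plane, the binocular disparity is approximately

    \[ \delta \approx \frac{a\,\Delta d}{d^{2}} . \]

With a = 6 cm, d = 40 cm, and Δd = 5 mm, δ ≈ 0.0019 rad, or about 6.4 arcmin, which is ordinarily fusible; ×5 pixel magnification within a region of interest raises it to roughly 32 arcmin, which is likely to exceed Panum's tolerance and produce diplopia.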

The next generation of low vision enhancement will use image-processing algorithms individually prescribed to optimize each patient's vision. Although there is some existing research highlighting the benefits of image-remapping and contrast-enhancement image-processing techniques,9 research on optimizing algorithm parameters and on implementing them in commercially available technology must expand to increase the potential benefit of low vision enhancement systems for individual patients. These new devices and low vision enhancement strategies require skilled rehabilitation services, and the field therefore has to be prepared to train the patient.

Eye care professionals will likely soon have to learn new methods of vision rehabilitation. A similar situation was created on a small scale with the introduction of the implantable miniature telescope74 and with various competing versions of phosphene-based prosthetic vision systems.75,76 Although optics and closed-circuit televisions are still the convention, with new head-mounted low vision enhancement system and artificial intelligence products we are experiencing the vanguard of a new wave of technology that soon will demand significant changes in how low vision patients are evaluated and the rehabilitation services that are provided to them. The limitations of optical aids point to gaps that electronic systems may fill for individual users across the low vision population. As the technology matures, we move closer to truly customizing products for each individual's visual impairment.

REFERENCES

1. Rovner BW, Casten RJ. Activity Loss and Depression in Age-related Macular Degeneration. Am J Geriatr Psychiatry 2002;10:305–10.
2. West SK, Rubin GS, Broman AT, et al. How Does Visual Impairment Affect Performance on Tasks of Everyday Life? The See Project Salisbury Eye Evaluation. Arch Ophthalmol 2002;120:774–80.
3. Rovner BW, Casten RJ, Tasman WS. Effect of Depression on Vision Function in Age-related Macular Degeneration. Arch Ophthalmol 2002;120:1041–4.
4. Salive ME, Guralnik J, Glynn RJ, et al. Association of Visual Impairment with Mobility and Physical Function. J Am Geriatr Soc 1994;42:287–92.
5. Rubin GS, Bandeen-Roche K, Huang GH, et al. The Association of Multiple Visual Impairments with Self-reported Visual Disability: See Project. Invest Ophthalmol Vis Sci 2001;42:64–72.
6. Dickinson C. Low Vision: Principles and Practice. Oxford: Butterworth; 1988.
7. Genensky S, Baran P, Moshin H, et al. A Closed Circuit TV System for the Visually Handicapped. Am Found Blind Res Bull 1969;19:191.
8. Moshtael H, Aslam T, Underwood I, et al. High Tech Aids Low Vision: A Review of Image Processing for the Visually Impaired. Transl Vis Sci Technol 2015;4:6.
9. Wolffsohn JS, Peterson RC. A Review of Current Knowledge on Electronic Vision Enhancement Systems for the Visually Impaired. Ophthalmic Physiol Opt 2003;23:35–42.
10. Vargas-Martin F, Peli E. Augmented-view for Restricted Visual Field: Multiple Device Implementations. Optom Vis Sci 2002;79:715–23.
11. Peli E. Head Mounted Display as a Low Vision Aid. In: Proceedings of the Second International Conference on Virtual Reality and Persons with Disabilities. Northridge, CA: Center on Disabilities, California State University, Northridge; 1994:115–22.
12. Leat SJ, Mei M. Custom-devised and Generic Digital Enhancement of Images for People with Maculopathy. Ophthalmic Physiol Opt 2009;29:397–415.
13. Massof R. Electro-optical Head-mounted Low Vision Enhancement. Pract Optom 1998;9:214–20.
14. Harper R, Culham L, Dickinson C. Head Mounted Video Magnification Devices for Low Vision Rehabilitation: A Comparison with Existing Technology. Br J Ophthalmol 1999;83:495–500.
15. Ehrlich JR, Ojeda LV, Wicker D, et al. Head-mounted Display Technology for Low-vision Rehabilitation and Vision Enhancement. Am J Ophthalmol 2017;176:26–32.
16. Peli E, Peli T. Image-enhancement for the Visually Impaired. Opt Eng 1984;23:47–51.
17. Loshin DS, Juday RD. The Programmable Remapper: Clinical Applications for Patients with Field Defects. Optom Vis Sci 1989;66:389–95.
18. Massof RW, Rickman DL, Lalle PA. Low-vision Enhancement System. Johns Hopkins APL Tech Dig 1994;15:120–5.
19. Massof R, Baker FH, Dagnelie G, et al. Low Vision Enhancement System: Improvements in Acuity and Contrast Sensitivity. Optom Vis Sci 1995;72(Suppl.):20.
20. Culham LE, Chabra A, Rubin GS. Clinical Performance of Electronic, Head-mounted, Low-vision Devices. Ophthalmic Physiol Opt 2004;24:281–90.
21. Culham LE, Chabra A, Rubin GS. Users' Subjective Evaluation of Electronic Vision Enhancement Systems. Ophthalmic Physiol Opt 2009;29:138–49.
22. Colenbrander A, Fletcher D. Contrast Sensitivity and ADL Performance. Invest Ophthalmol Vis Sci 2006;47(Suppl.):5834.
23. Legge GE, Rubin GS, Luebker A. Psychophysics of Reading. V. The Role of Contrast in Normal Vision. Vision Res 1987;27:1165–77.
24. Fiorentini A, Maffei L, Sandini G. The Role of High Spatial Frequencies in Face Perception. Perception 1983;12:195–201.
25. Hayes T, Morrone MC, Burr DC. Recognition of Positive and Negative Bandpass-filtered Images. Perception 1986;15:595–602.
26. Nasanen R. Spatial Frequency Bandwidth Used in the Recognition of Facial Images. Vision Res 1999;39:3824–33.
27. Marron JA, Bailey IL. Visual Factors and Orientation-mobility Performance. Am J Optom Physiol Opt 1982;59:413–26.
28. Pelli DG. The Visual Requirements of Mobility. In: Woo GC, ed. Low Vision: Principles and Applications. New York: Springer; 1987:134–46.
29. Chung ST, Legge GE. Comparing the Shape of Contrast Sensitivity Functions for Normal and Low Vision. Invest Ophthalmol Vis Sci 2016;57:198–207.
30. Robson JG. Spatial and Temporal Contrast-sensitivity Functions of the Visual System. J Opt Soc Am 1966;56:1141–2.
31. Field DJ. Relations between the Statistics of Natural Images and the Response Properties of Cortical Cells. J Opt Soc Am (A) 1987;4:2379–94.
32. Geisler WS. Visual Perception and the Statistical Properties of Natural Scenes. Annu Rev Psychol 2008;59:167–92.
33. Peli E, Goldstein RB, Young GM, et al. Image Enhancement for the Visually Impaired Simulations and Experimental Results. Invest Ophthalmol Vis Sci 1991;32:2337–50.
34. Barten PG. Formula for the Contrast Sensitivity of the Human Eye. In: Miyake Y, Rasmussen DR, eds. Proceedings of SPIE 5294, Image Quality and System Performance, December 18, 2003. Bellevue, WA: International Society for Optics and Photonics; 2003:231–9.
35. van Meeteren A, Vos JJ. Resolution and Contrast Sensitivity at Low Luminances. Vision Res 1972;12:825–33.
36. Alexander KR, Derlacki DJ, Fishman GA. Contrast Thresholds for Letter Identification in Retinitis Pigmentosa. Invest Ophthalmol Vis Sci 1992;33:1846–52.
37. Peli E, Lee E, Trempe CL, et al. Image Enhancement for the Visually Impaired: The Effects of Enhancement on Face Recognition. J Opt Soc Am (A) 1994;11:1929–39.
38. Dawson BM. Image Filtering for Edge Enhancement. Technol Trends 1986;20:93–8.
39. Peli E, Goldstein RB, Woods RL, et al. Wide-band Enhancement of TV Images for the Visually Impaired. Invest Ophthalmol Vis Sci 2004;45(Suppl.):4355.
40. Loshin DS, Juday RD, Barton RS. Design of a Reading Test for Low-vision Image Warping. Visual Inform Process II 1993;1961:67–72.
41. Ho JS, Loshin DS, Barton RS, et al. Testing of Remapping for Reading Enhancement for Patients with Central Visual Field Losses. Visual Inform Process IV 1995;2488:417–24.
42. Gupta A, Mesik J, Engel SA, et al. Beneficial Effects of Spatial Remapping for Reading with Simulated Central Field Loss. Invest Ophthalmol Vis Sci 2018;59:1105–12.
43. Crossland MD, Engel SA, Legge GE. The Preferred Retinal Locus in Macular Disease: Toward a Consensus Definition. Retina 2011;31:2109–14.
44. Fuchs W. Pseudo-fovea. In: Ellis WD, ed. A Source Book of Gestalt Psychology. London, England: Kegan Paul, Trench, Trubner & Company; 1938:357–61.
45. Timberlake GT, Mainster MA, Peli E, et al. Reading with a Macular Scotoma. I. Retinal Location of Scotoma and Fixation Area. Invest Ophthalmol Vis Sci 1986;27:1137–47.
46. Barraza-Bernal MJ, Rifai K, Wahl SA. Preferred Retinal Location of Fixation Can Be Induced when Systematic Stimulus Relocations Are Applied. J Vis 2017;17:11.
47. Liu R, Kwon M. Integrating Oculomotor and Perceptual Training to Induce a Pseudofovea: A Model System for Studying Central Vision Loss. J Vis 2016;16:10.
48. Saunders DR, Woods RL. Direct Measurement of the System Latency of Gaze-contingent Displays. Behav Res Methods 2014;46:439–47.
49. Aguilar C, Castet E. Evaluation of a Gaze-controlled Vision Enhancement System for Reading in Visually Impaired People. PLoS One 2017;12:e0174910.
50. Ashmore M, Duchowski AT, Shoemaker G. Efficient Eye Pointing with a Fisheye Lens. In: Proceedings of Graphics Interface 2005, May 7, 2005. Toronto: Canadian Human-Computer Communications Society; 2005:203–10.
51. Culham LE, Fitzke FW, Timberlake GT, et al. Assessment of Fixation Stability in Normal Subjects and Patients Using a Scanning Laser Ophthalmoscope. Clin Vision Sci 1993;8:551–61.
52. Rohrschneider K, Becker M, Kruse FE, et al. Stability of Fixation: Results of Fundus-controlled Examination Using the Scanning Laser Ophthalmoscope. Ger J Ophthalmol 1995;4:197–202.
53. Demer JL, Porter FI, Goldberg J, et al. Dynamic Visual Acuity with Telescopic Spectacles: Improvement with Adaptation. Invest Ophthalmol Vis Sci 1988;29:1184–9.
54. Demer JL, Porter FI, Goldberg J, et al. Predictors of Functional Success in Telescopic Spectacle Use by Low Vision Patients. Invest Ophthalmol Vis Sci 1989;30:1652–65.
55. Kelly DH. Visual Processing of Moving Stimuli. J Opt Soc Am (A) 1985;2:216–25.
56. Peli E. Visual, Perceptual, and Optometric Issues with Head-mounted Displays (HMD). Playa del Rey, CA: Society for Information Display; 1996.
57. Kennedy RS, Drexler J, Kennedy RC. Research in Visually Induced Motion Sickness. Appl Ergon 2010;41:494–503.
58. Hettinger LJ, Riccio GE. Visually Induced Motion Sickness in Virtual Environment. Presence 1992;1:306–10.
59. Akiduki H, Nishiike S, Watanabe H, et al. Visual-vestibular Conflict Induced by Virtual Reality in Humans. Neurosci Lett 2003;340:197–200.
60. Peli E. Vision Multiplexing: An Engineering Approach to Vision Rehabilitation Device Development. Optom Vis Sci 2001;78:304–15.
61. Apfelbaum H, Apfelbaum D, Woods R, et al. The Effect of Edge Filtering on Vision Multiplexing. Digest of Technical Papers—SID International Symposium 2005;36:1398–401.
62. Peli E, Jung JH. Multiplexing Prisms for Field Expansion. Optom Vis Sci 2017;94:817–29.
63. Andrews PR, Campbell FW. Images at the Blind Spot. Nature 1991;353:308.
64. Ramachandran VS, Gregory RL. Perceptual Filling in of Artificially Induced Scotomas in Human Vision. Nature 1991;350:699–702.
65. Schuchard RA. Perception of Straight Line Objects across a Scotoma. Invest Ophthalmol Vis Sci 1991;32(Suppl.):816.
66. Zur D, Ullman S. Filling-in of Retinal Scotomas. Vision Res 2003;43:971–82.
67. Spillmann L, Otte T, Hamburger K, et al. Perceptual Filling-in from the Edge of the Blind Spot. Vision Res 2006;46:4252–7.
68. Pratt JD, Stevenson SB, Bedell HE. Scotoma Visibility and Reading Rate with Bilateral Central Scotomas. Optom Vis Sci 2017;94:279–89.
69. Calabrese A, Bernard JB, Faure G, et al. Clustering of Eye Fixations: A New Oculomotor Determinant of Reading Speed in Maculopathy. Invest Ophthalmol Vis Sci 2016;57:3192–202.
70. Ross N, Goldstein J, Massof R. Association of Self-reported Task Difficulty with Binocular Central Scotoma Locations. Invest Ophthalmol Vis Sci 2013;54: E-abstract 2188.
71. Tarita-Nistor L, Gonzalez EG, Markowitz SN, et al. Binocular Interactions in Patients with Age-related Macular Degeneration: Acuity Summation and Rivalry. Vision Res 2006;46:2487–98.
72. Tarita-Nistor L, Eizenman M, Landon-Brace N, et al. Identifying Absolute Preferred Retinal Locations during Binocular Viewing. Optom Vis Sci 2015;92:863–72.
73. Arditi A. The Volume Visual-field: A Basis for Functional Perimetry. Clin Vision Sci 1988;3:173–83.
74. Hau VS, London N, Dalton M. The Treatment Paradigm for the Implantable Miniature Telescope. Ophthalmol Ther 2016;5:21–30.
75. Chen SC, Suaning GJ, Morley JW, et al. Rehabilitation Regimes Based upon Psychophysical Studies of Prosthetic Vision. J Neural Eng 2009;6. Available at: http://iopscience.iop.org/article/10.1088/1741-2560/6/3/035009. Accessed August 13, 2018.
76. Xia P, Hu J, Peng Y. Adaptation to Phosphene Parameters Based on Multi-object Recognition Using Simulated Prosthetic Vision. Artif Organs 2015;39:1038–45.
© 2018 American Academy of Optometry