
FEATURE REVIEW ON LINE

Prentice Medal Lecture 2013

Visual Accessibility

A Challenge for Low-Vision Research

Legge, Gordon E.*

Optometry and Vision Science 91(7): 696-706, July 2014. DOI: 10.1097/OPX.0000000000000310

Abstract

Low vision may be defined as any chronic form of vision impairment, not correctable by glasses or contact lenses, that adversely affects everyday function. Visual accessibility refers to factors that make an environment, device, or display usable by vision. In this article, I discuss the concept of visual accessibility with special reference to low vision. What role can vision science play in enhancing visual accessibility for people with low vision? I propose that greater efforts to embed low-vision research in real-world contexts and collaboration with other disciplines will accelerate progress. I describe examples from my current research projects on architectural accessibility and reading accessibility.

The topic of this article is visual accessibility and low-vision research. Because I am a person with low vision, I have both a personal and professional interest in this topic.

Low vision conveys the idea that vision has more than just the two states: sighted and blind. The term low vision was coined in the 1950s by Eleanor Faye and Gerald Fonda. They and other mid-20th century pioneers concerned with vision impairment—including William Feinbloom and George Hellinger in the U.S., and Norman Bier in the U.K.—emphasized the importance of residual vision and the need to address functional limitations associated with low vision.

Low vision is often defined as a letter acuity less than 20/60 (6/18) or a visual field of less than 20 degrees in the better eye. Sometimes, we define low vision in more practical terms as the inability to read the newspaper at a distance of 40 cm with best refractive correction. This definition is used because most people with low vision have problems with reading. (This definition is becoming obsolete as hard-copy newspapers recede into history.) A third, more general definition of low vision is any chronic form of vision impairment not correctable by glasses or contact lenses that adversely affects everyday function.

According to the National Eye Institute, there are between 3.5 and 5 million Americans with low vision,1 and the number is rising as our population ages. In 2013, the World Health Organization estimated that there are 285 million people worldwide with vision impairment, 39 million blind and 246 million with low vision.2 These worldwide figures include many people in less developed countries whose impaired vision is due to uncorrected refractive errors or untreated cataracts.

Low-vision research focuses primarily on the role of vision in everyday activities, with less emphasis on clinical descriptions of the eye or vision. The two major research questions are: What basic principles explain the functional limitations of people with low vision? What strategies can be adopted to ameliorate these limitations? Low-vision research has played an important role in bringing topics such as reading and driving under scrutiny in vision science.

My theme in this article is to propose that we embed low-vision research more explicitly in the real world where our findings can have a direct impact on the lives of people with low vision. I suggest that we extend our laboratory-based vision science in two ways: first, by addressing the complicated interactions among variables that affect real-world visual function, and second, by working in partnership with other disciplines to reduce the barriers to visual accessibility. Where we succeed, we will contribute to vision science by showing how vision functions in the real world, and we will find better ways to reduce barriers facing people with visual impairment. I will give examples from my current research projects on architectural accessibility and reading accessibility.

TWO APPROACHES: REHABILITATION AND ACCESSIBILITY

I begin by reviewing some general concepts that may help to shape the discussion of low vision and accessibility.

Accessibility means removing barriers to participation in all domains of society including mobility, reading, social interaction, education, employment, and recreation. Rehabilitation emphasizes a person’s functional limitations and provides remedies and strategies for overcoming these limitations. To illustrate this distinction, a rehabilitation program might train people with low vision in mobility strategies for detecting low-contrast steps or obstacles, whereas accessible architecture would emphasize environmental designs to enhance the visibility of such hazards.

Rehabilitation seeks to empower people with disabilities to function effectively in the world as it is. In contrast, accessibility strives to modify the world to remove barriers and accommodate people with disabilities. Rehabilitation is often associated with a “medical model” that seeks to cure or ameliorate disability. Accessibility is associated with a “social justice” model or “civil rights” model that promotes accommodation of people with disabilities by human cultural institutions, systems, and products. Advocacy for accessibility has sometimes given rise to disability rights movements. For a historical review, see the work by Switzer.3

Andrew Solomon,4 in his influential book Far from the Tree, cogently points out that the disability rights movement stresses the richness of individual “identity” and experience, whereas the medical rehabilitation model stresses defect or illness (sometimes using gentler terms like syndrome or condition). The medical model wants to correct or cure or eradicate, whereas the disability rights movement wants society to accommodate, diversify, and include. Sometimes, the effort to cure or eradicate runs counter to the goal of expanding accessibility through accommodation. Advocates for accessibility may see medical research and rehabilitation as paternalistic and misdirected.

Arditi5 has reviewed the historical roots of the field of vision rehabilitation. Initially, it emerged in the form of social services for the blind, such as special residential schools and sheltered workshops. The distinction between blindness and low vision was rarely explicit, and there was little attention to the special problems facing people with residual vision. Further contributing to the lack of focused attention were the heterogeneity of low-vision conditions and the sight-saving philosophy. This philosophy advocated protecting the eyes by minimizing the use of impaired vision.

In the mid 20th century, a few insightful optometrists and ophthalmologists began to emphasize the importance of residual vision and the need to address functional limitations associated with low vision. In the last quarter of the 20th century, low vision emerged as a topic in vision science. Collectively, those of us with interests in low vision, in both laboratory and clinical research, have made substantial progress in understanding the functional limitations of people with visual impairment.

Although the topic of vision rehabilitation has emerged as an active discipline, accessibility issues pertaining to low vision have received less attention. Arditi6 attributes this to the lack of a coherent voice from the community of people with low vision and the corresponding lack of consciousness raising and lobbying on their behalf. Most people with low vision have acquired their vision impairment later in life from disorders such as macular degeneration, diabetic retinopathy, and glaucoma. They tend not to identify themselves with low vision per se but consider vision loss as a consequence of aging. They are often dealing with other physical or cognitive disabilities as well. Unlike the blind and deaf communities, which have strong advocacy groups, this group has little in the way of leadership demanding accessibility.

Those of us engaged in low-vision research or related policy making face a dilemma. Where should we expend our research dollars and research effort—in finding remedies, or in enhancing accessibility? Most of us would agree that we need to do both, but the balance in dollars and effort is a topic for deliberation. The message of this article is that more research should be done to enhance accessibility for people with low vision.

The concepts of “universal design” and “inclusive design” have emerged from the accessibility perspective. Universal design is the aspirational goal of designing a system—architectural, electronic, mechanical—whose primary functions are within the capabilities of the entire population. Achieving this goal is especially important for key aspects of culture such as the architectural design of public spaces.

Inclusive design is more pragmatic. Here, the goal is to extend the usability of mainstream products to the maximum fraction of the population without sacrificing functionality or commercial viability. A small fraction of the population may not be able to use the design, and their needs should be addressed with an alternative solution. For inherently visual designs, alternative nonvisual solutions may be necessary, such as VoiceOver software on the iPhone or tactile features on currency bills. Modern digital devices for reading have the potential for inclusive design because of the opportunity for personalized display options. I return to digital reading below.

Inclusive designs, broadly defined, should take into account distinct domains of capability and their interactions, such as vision, hearing, manual dexterity, and cognitive status.7 Design considerations for these domains may not be separable or even compatible. Keep in mind that many people with low vision have other coexisting health problems. For instance, it has been reported that people with age-related macular degeneration have high rates of depressive disorder8 and cognitive deficits.9 In the Salisbury Eye Evaluation project—a population-based study of 2520 individuals aged 65 to 84 years—about 7% were visually impaired.10 Data on self-reported comorbidities were gathered from a standard list of 13 items such as arthritis, heart disease, and cancer. On average, both the visually impaired and visually normal participants reported more than two comorbidities (Gary S. Rubin, personal communication).

A traditional approach to visual accessibility has been to retrofit an existing system to accommodate people with low vision, for example, adding high-contrast strips at the top of stairs or designing third-party screen-magnifying or screen-reading software for computer access. These solutions have been extremely valuable, but, where possible, inclusive design is preferable to special-purpose solutions. This is because such solutions are often costly, typically lag behind the evolution of mainstream developments, and sometimes stigmatize people with disabilities.

How can vision researchers contribute to enhancing visual accessibility? In the following two sections, I describe how my colleagues and I have begun addressing visual accessibility in architecture and reading.

VISUAL ACCESSIBILITY AND ARCHITECTURAL DESIGN

In the context of architecture, we define visual accessibility as the use of vision to travel efficiently and safely through an environment, to perceive the spatial layout of key features in the environment, and to keep track of one’s location and orientation in the environment.11 Major factors affecting visual accessibility include the vision status of the pedestrian (characterized by acuity, contrast sensitivity, and field); geometrical properties of the architectural design, including landmarks, objects, and obstacles; nature and arrangement of lighting (natural and artificial); surface properties (including color, texture, and gloss); and contextual cues. Visual accessibility refers not only to the visibility of important features of the space but also to accurate perception of large-scale characteristics such as the size and shape of the space, and the distance and direction to key features within the space.

Two examples from my personal experience may serve to introduce the concept of visual accessibility.

Fig. 1 shows photographs of an outdoor bench near my building on campus, one taken on a bright, sunny day and the other on an overcast day. The second and third columns show the effects of mild and severe blur, simulating information loss due to mild and severe acuity reduction.

FIGURE 1:
Photographs of an outdoor bench in bright sunlight and overcast conditions, shown with mild and severe blur. See text.

In the sunny image (top row), the high-contrast cast shadow at the front of the bench and the dark background created by the building's shadow provide good cues for the presence of the bench. These cues are resistant to blur but rely on the presence of directional lighting. In more uniform lighting, as in the overcast photograph (bottom row), the visibility of the bench is seriously reduced, particularly when blurred. (I can testify personally to how painful it can be to miss seeing this cement bench.)

This example also illustrates that texture cues may disappear in low vision. Fine texture details of surfaces can support depth and orientation judgments, but when spatial resolution is low (such as the severe blur here), the texture disappears; the brickwork, cement, and grass become uniform regions of color.

Fig. 2 illustrates the impact of lighting direction. The figure shows an indoor scene at four times during the morning of July 21, all easy to interpret with normal vision. We are looking down a hallway in my campus building. There are glass doors to the outside on the left and an open downward staircase to the right. At 7:30 am, there is no direct sunlight near the stairs. By 8:50 am, there are bright patches of sunlight on the carpet and near the step down. There are strong shadow edges, potentially producing false cues to steps or obstacles. At 9:32 am, the pattern has changed again, with the step down now in deep shadow. Finally, by 10:25 am, the dark carpet is bounded by sun, producing a dark rectangular profile that nearly matches the stairwell aperture in shape and lightness. These enormous variations in lighting and pattern are potentially confusing for a person with low vision and might hide the presence of the stairs.

FIGURE 2:
An indoor scene, shown at four times during a summer morning, illustrating major variations in the distribution of luminance because of the change in lighting direction from the sun.

Fig. 3 shows the same series of scenes, blurred severely to represent the information available to someone with very low acuity. The three-dimensional (3D) cues are mostly gone. The carpet and the stairwell look like similar objects. The sunlight at 8:50 and 9:32 creates a kind of abstract painting of light and dark features. Someone with severe low vision, arriving for the first time, would have no idea that there is a stairwell lurking. The 3D visual world becomes a two-dimensional space of blobs.

FIGURE 3:
The series of images from Fig. 2 are shown with severe low-pass filtering (blur).

I am working with colleagues at the University of Minnesota, University of Utah, and Indiana University on an interdisciplinary project entitled Designing Visually Accessible Spaces (http://www.cs.utah.edu/research/areas/percept/DEVA/). We are studying the problem of visual accessibility with three interdisciplinary approaches: psychophysical, computational, and engineering.

In our psychophysical studies, we have asked how environmental factors interact with an observer’s vision status to determine hazard visibility.

Fig. 4A shows our apparatus for studying the visibility of steps and ramps.11 We built a 24-ft-long walkway from sections of wooden staging in a windowless classroom. Lighting is from the overhead fluorescents or artificial windows built from light boxes positioned to study directional lighting. Fig. 4B shows the five targets: Step Up, Step Down, Ramp Up, Ramp Down, and a Flat continuation of the walkway. In the experiments, subjects are located on the walkway, with free viewing in the direction of the target. In a trial, they make a forced-choice decision about which of the five targets is present. Between trials, the subject looks away while one of the five targets is put in position for the next trial. Visibility is measured as percent correct recognition across a block of trials.

FIGURE 4:
Apparatus used in psychophysical studies of target visibility. (A) Ramps and steps apparatus. (B) The five targets (from Legge et al.11).

In a series of experiments, we have examined the visibility of steps and ramps as a function of a variety of stimulus variables including viewing distance, contrast, lighting arrangement, surface texture, and subject motion. We have tested subjects with normal vision wearing blur goggles to simulate low acuity11,12 as well as subjects with actual low vision.13 In most cases, the effects of stimulus variables were similar for our low-vision subjects and our normally sighted subjects with blur goggles. We have also analyzed the contrast and shape features that distinguish ramps from steps using a probabilistic cue analysis.11

To mention one salient empirical finding: Under most viewing conditions, a step up is more visible than a step down. This is an unfortunate asymmetry given that failing to see a step down is usually more dangerous than failing to see a step up. Another finding of practical importance is that subjects recognize the targets better following locomotion; that is, they do better in recognizing the steps and ramps after walking along the walkway to the viewing location compared with stationary observation from the same location.12

Our computational work has the goal of predicting whether architectural features will be visible to people with low vision. To do this, there are two key requirements: First, modeling must deliver photometrically and geometrically accurate renderings of 3D spaces so that the size and contrast of image features can be represented quantitatively for visual analysis. Most 3D modeling and simulation systems do not faithfully retain photometrically accurate representations of spaces. For this purpose, we have used the open-source Radiance Synthetic Imaging software package (http://radsite.lbl.gov/radiance/refer/ray.html). This software uses ray tracing to accurately represent light intensities in a 3D rendering of a space, taking into account geometrical and reflectance properties of objects and surfaces, and detailed models of the spectral distribution of light sources.

Second, modeling must also represent the impact of a low-vision observer’s reduced acuity and contrast sensitivity. Following Peli’s development of a filter representing the effects of the human contrast sensitivity function (CSF),14 we have developed a blurring filter, calibrated to simulate different levels of acuity loss. To represent the effects of acuity reduction in low vision, we shifted the CSF for normal vision leftward along the spatial-frequency axis by an amount equal to the ratio of low-vision to normal clinical acuity. We used standard Fourier transform methods to produce filtered images representing the effective resolution loss for a low-vision observer.
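
To make this modeling step concrete, here is a minimal sketch of acuity-calibrated low-pass filtering in Python. It assumes a generic CSF shape (the Mannos and Sakrison form) in place of our calibrated filter, and the function and parameter names are illustrative only.

import numpy as np

def acuity_filter(image, acuity_ratio, pixels_per_degree):
    # image: 2D luminance array.
    # acuity_ratio: ratio of low-vision to normal acuity denominators,
    #               e.g., 10 for simulated 20/200 relative to 20/20.
    # pixels_per_degree: angular sampling of the image at the assumed viewing distance.
    rows, cols = image.shape
    fy = np.fft.fftfreq(rows) * pixels_per_degree      # cycles per degree
    fx = np.fft.fftfreq(cols) * pixels_per_degree
    f = np.sqrt(fx[np.newaxis, :] ** 2 + fy[:, np.newaxis] ** 2)

    # Generic CSF, shifted leftward in spatial frequency by the acuity ratio,
    # following the approach described in the text.
    fs = f * acuity_ratio
    csf = 2.6 * (0.0192 + 0.114 * fs) * np.exp(-(0.114 * fs) ** 1.1)
    csf /= csf.max()
    csf[f == 0] = 1.0                                  # preserve mean luminance

    return np.real(np.fft.ifft2(np.fft.fft2(image) * csf))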

Fig. 5 illustrates this filtering. The top two images show our Step Up and Step Down from a 10-ft viewing distance with no filtering. The lower rows show the steps with simulated 20/200 filtering (the boundary for legal blindness), 20/400, and finally 20/800 (moderately severe low vision). It is evident that the Step Up is consistently more visible because of the luminance contrast between the riser and the adjoining flat surfaces.

FIGURE 5:
Photographs of the Step-Up and Step-Down targets used in our psychophysical studies, with acuity-calibrated low-pass filtering to represent the visual information associated with reduced levels of acuity. The pattern of squares in the upper left of each image is a chart mounted on the rear wall used for luminance calibration.

Finally, our engineering goal is to create software tools to assist architects in evaluating the visual accessibility of their designs.

Fig. 6 shows the processing pipeline we have in mind for visualizations. The left panel is the scene, based on a 3D Radiance model of our step with down lighting to enhance contrast. This image is processed in two ways. The upper panel shows the acuity-filtered image. The lower panel shows depth edges in orange that could pose hazards, as identified by the computer algorithm. The visualization in the right panel uses both analyses; the green lines are labels for hazard features predicted to be visible for someone with the modeled acuity, and the red lines are labels for features predicted to be below visibility threshold. Both the blurry visualization and the red-and-green labeling could help guide an architect in selecting lighting arrangement, surface colors, or geometry to enhance visibility in the scene.
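
As a sketch of how the final labeling step might be implemented, the fragment below compares local contrast in the acuity-filtered image against a visibility criterion at pixels flagged as depth edges. The neighborhood size and contrast threshold are illustrative assumptions, not values from our project, and the depth-edge mask is assumed to come from the separate geometric analysis.

import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def label_hazard_edges(filtered_image, edge_mask, contrast_threshold=0.05):
    # filtered_image: acuity-filtered luminance image (2D array, positive values).
    # edge_mask: boolean array marking pixels on depth edges from the geometric analysis.
    # contrast_threshold: illustrative criterion; a real tool would use a
    #                     psychophysically validated threshold.
    local_max = maximum_filter(filtered_image, size=9)
    local_min = minimum_filter(filtered_image, size=9)
    contrast = (local_max - local_min) / (local_max + local_min + 1e-6)

    labels = np.zeros(filtered_image.shape, dtype=int)        # 0 = not a depth edge
    labels[edge_mask & (contrast >= contrast_threshold)] = 1  # predicted visible ("green")
    labels[edge_mask & (contrast < contrast_threshold)] = 2   # predicted below threshold ("red")
    return labels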

FIGURE 6:
Illustration of the processing pipeline to produce two types of visualizations of 3D architectural scenes relevant to low vision. Such visualizations may be of value to designers. See the text for more details.

Several insights have emerged from this ongoing project. Addressing visual accessibility is a hard problem because vision science has not adequately investigated the critical features defining object visibility in the real world. The introspection of normally sighted people is not a reliable guide for predicting the problems of accessibility encountered with low vision. We need practical models of low vision capable of predicting real-world object visibility. Progress will benefit from interdisciplinary collaboration between people with expertise in vision science, low-vision rehabilitation, computer vision/graphics, and architectural design. Those of us with low vision can expedite the process by helping to identify the accessibility problems to be solved.

VISUAL ACCESSIBILITY AND READING

Reading is a marvelous cultural invention that relies on several highly tuned aspects of visual function including good acuity in central vision, the oculomotor system for guiding reading saccades, and perhaps specific cortical mechanisms such as the visual word-form area. The shapes of written symbols for reading may have been tailored to match the pattern-recognition specialization of human vision.15,16

Reading poses problems for almost everyone with low vision. Traditional hard-copy reading is not inclusive of people who are blind or have low vision. Cattaneo and Vecchi17 pointed out that Marshall McLuhan,18 in his famous book The Gutenberg Galaxy, referred to the invention of movable type as bringing about the “tyranny of the visual.” This “tyranny” has persisted for centuries and has excluded many people with impaired vision from the literate mainstream. Because not much could be done to make print accessible, low-vision reading received little attention. Some analog solutions proved helpful, such as the development of optical magnifiers and large-print books.

The modern electronic era has softened this “tyranny,” first by moving text from hard copy onto video screens where it can be manipulated visually, and then into digital representations that can be customized visually or converted to auditory or tactile formats.

We define reading accessibility as rapid and effortless access to the wide variety of text formats commonly found in contemporary society.

The ongoing migration to E-reading should make possible customizable reading displays for low vision. But what role is there for vision science?

Historically, the development of metrics for acuity has gone hand in hand with an interest in visual capability for reading. Letters are dominant as optotypes on acuity charts because they are convenient for testing and because they are directly relevant to reading. The Snellen big-E chart (Fig. 7A) has now given way to modern logMAR acuity charts,19,20 which yield more reliable measures of acuity. We have also learned that letter acuity tells only part of the story for reading vision. This has prompted the development of several reading-acuity charts in recent years, reviewed by Rubin,21 including the MNREAD chart developed in my laboratory.22,23 There is now interest in implementing these tests on electronic displays (Fig. 7B), in part because of the growing prominence of E-readers for both normal and low vision. This gives a whole new meaning to the Big E when it comes to reading and vision.

FIGURE 7:
(A) Traditional Snellen letter acuity chart. (B) iPad3 showing a sentence from the MNREAD reading-acuity test. A color version of this figure is available at http://www.optvissci.com.

According to a recent publication on best practices for large print by the Council of Citizens with Low Vision International,24 the text variables of primary importance to consumers with low vision are spacing, print size, contrast, and font style (in this order). All of these are modifiable by digital devices. Over the past 30 years, my colleagues and I, and others, have measured the impact of these and many other text variables on reading performance by people with normal and low vision. Much of this work is summarized in Legge.25

In addition to measuring the parametric dependence of reading speed on text properties such as print size and contrast, we and others have analyzed how reading performance depends on deficits in acuity, contrast sensitivity, and field status.26–30

What is missing? We have not yet translated our extensive knowledge about reading and visual deficits to the customization of text for people with low vision. This gap was emphasized in a recent symposium dealing with online print accessibility conducted by the World Wide Web Consortium.31 The symposium participants decried the lack of information on the accessibility needs of people with low vision: “… there are few resources that provide clear guidance on text customization. Additionally, most of this customization has not been well integrated in mainstream user agents (web browsers, etc.), nor is it sufficiently included in some accessibility standards and support material …” In a cogent critique of the gap between psychophysical studies of reading and their application to low-vision e-reading, Wayne Dick, a computer scientist with low vision, wrote in a symposium paper:32 “Clinical research has not discovered a clear map to sound policy or assistive technology for reading with low vision.”

In short, innovations in reading technology have outstripped our knowledge about low-vision reading. Rarely do eye-care specialists or visually impaired consumers have clear procedures for choosing reading technology. Some savvy individuals undoubtedly discover good solutions on their own, but vision science should provide guidelines for “prescribing” appropriate reading technology for those who have neither the time nor ability to sort through the options.

How might vision researchers contribute to enhancing digital reading accessibility? I will briefly describe two examples of possible approaches.

Most Web designers and others who prepare text used by people with low vision have a poor understanding of the interacting effects of acuity, physical print size, and viewing distance. Current Web accessibility checkers focus primarily on whether digital documents are formatted properly for speech-based screen-reading software, and not for low-vision access.

Similar to my earlier example of visualizing 3D architectural layouts, we can use a CSF-based filter to provide an approximation to the reduced visibility of text from restricted acuity and contrast sensitivity in low vision.

Fig. 8 shows samples of text in three fonts—Courier, Times New Roman, and Verdana. The filtering simulates viewing 18-point print at 40 cm (16 in.) with three steps of acuity, ranging from 20/20 (unfiltered) to 20/200. Such acuity-calibrated visualizations could be used by Web designers in evaluating text accessibility for different fonts and formats. This type of software could also be used by clinicians and family members to visualize the reading challenges faced by people with low vision. Of course, such visualizations provide only an approximation to the limitations due to acuity and need empirical validation. They do not capture other impediments to reading such as field restriction.
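
The geometry underlying such simulations is straightforward. The short sketch below converts a nominal print size and viewing distance into angular subtense; the helper function is hypothetical and uses the conventional definition of 1 point = 1/72 inch.

import math

def angular_print_size(point_size, viewing_distance_cm):
    # Nominal body size of the font converted to degrees of visual angle.
    # The x-height that drives legibility is roughly half the nominal size,
    # so the result is approximate.
    size_cm = point_size * (2.54 / 72.0)
    return math.degrees(2 * math.atan(size_cm / (2 * viewing_distance_cm)))

# 18-point print viewed at 40 cm subtends roughly 0.9 degree.
print(round(angular_print_size(18, 40), 2))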

FIGURE 8:
Text samples are rendered in three fonts—Courier, Times New Roman, and Verdana. The text has been low-pass filtered to represent the information available when reading 18-point print at a viewing distance of 40 cm with the indicated visual acuity. From your inspection, are there differences in the tolerance of the fonts to acuity reduction?

My next example illustrates why isolating the effects of single text properties on reading, such as print size, font, or line length, is insufficient to address reading accessibility. Instead, we need to address the interacting effects of these display variables.

Many eye-care clinicians report that their low-vision patients are now using iPads, Kindles, and other e-readers. It would be valuable to understand the eye status, reading tasks, and display configurations for which these devices are most useful.

The viability of a particular display for continuous reading may depend on the number of words that can be displayed at one time. This screen capacity in turn depends on the person’s reading requirements for character size, number of characters per line, and line separation.

Suppose someone with low vision wants to use an iPhone or iPad for reading. The MNREAD test shows that she needs a print size of at least 2 degrees, and other tests indicate that she also needs at least 12 characters per line and 10 words per screen for fluent reading. Fig. 9 simulates 2-degree text displayed on an iPad3 and an iPhone5 for two fonts (Courier and Times) at viewing distances of 16 in. (top row) and 8 in. (bottom row). Only the iPad at 8 in. viewing distance meets the requirement of at least 12 characters per line and 10 words per screen for both fonts.

FIGURE 9:
Simulation of 2-degree text displayed on an iPad3 and an iPhone5 for two fonts (Courier and Times) at viewing distances of 16 in. (top row) and 8 in. (bottom row).

This example illustrates the interacting effects of display geometry, acuity, viewing distance, print size, and font. Currently, we only have trial and error to assist this reader in selecting and configuring a mobile device for reading.
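
A simple calculation of the kind sketched below could replace some of that trial and error. The display dimensions, character-width assumption, characters-per-word factor, and line spacing are all illustrative; a real tool would take the device's actual text area and the metrics of the chosen font.

import math

def screen_capacity(screen_width_cm, screen_height_cm, viewing_distance_cm,
                    required_char_deg, chars_per_word=6, line_spacing=1.5):
    # Assumes each character occupies a width about equal to the required angular size.
    char_cm = 2 * viewing_distance_cm * math.tan(math.radians(required_char_deg) / 2)
    chars_per_line = int(screen_width_cm // char_cm)
    lines = int(screen_height_cm // (char_cm * line_spacing))
    words_per_screen = (chars_per_line * lines) // chars_per_word
    return chars_per_line, words_per_screen

# A roughly tablet-sized text area (about 15 x 20 cm) held at 8 in. (20.3 cm),
# with the 2-degree character requirement from the example above.
print(screen_capacity(15.0, 20.0, 20.3, 2.0))   # about 21 characters per line, 63 words per screen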

These examples demonstrate that vision science can play a role in rational decision making in the choice of accessible e-reading devices. The process of enhancing reading accessibility will ultimately benefit from collaboration between vision scientists, eye-care clinicians, and software/display designers.

CONCLUSIONS

I close by posing three challenges for vision researchers with an interest in low vision. We can contribute to visual accessibility (1) by studying the interacting effects of variables in the real world that determine object visibility, (2) by developing more quantitative models of low-vision function, and (3) by collaborating with low-vision specialists, software and hardware developers, and experts in the design professions to solve problems of visual accessibility.

WITH APPRECIATION

I am deeply grateful to the American Academy of Optometry for honoring me with the 2013 Prentice medal. I am particularly grateful and humbled to be included on the list of eminent Prentice award winners. I also congratulate the other 2013 American Academy of Optometry award recipients.

The two photographs in Fig. 10 represent important aspects of my life trajectory. The photograph on the left shows me with my postdoctoral mentor Fergus Campbell from Cambridge University. We are standing outside Isaac Newton’s cottage at Woolsthorpe, near Cambridge. If you look carefully, you may see the apples on our heads. I was an undergraduate physics major, and Newton was a hero of mine; my work in vision science has benefited greatly from my undergraduate studies. Fergus was trained as an ophthalmologist and encouraged me to care deeply about the applications of vision research including low vision. The photograph on the right was taken at Louis Braille’s birthplace in Coupvray, France. I am inspecting the table used by Louis’s father for harness making. There is a sharp tool on the table, called a serpette, like the tool that caused Louis’s eye injury and resulted in his blindness. Braille’s brilliant invention of a tactile code for writing made reading accessible for blind people. I use Braille as well as highly magnified print and computer speech for reading.

FIGURE 10:
Left: Photograph of Gordon Legge and Fergus Campbell in front of Isaac Newton’s cottage at Woolsthorpe, England (ca. 1984). Right: Gordon Legge at Louis Braille’s family home at Coupvray, France (2011). A color version of this figure is available at http://www.optvissci.com.

As of 2013, I have supervised 19 students through the completion of their PhD degrees. They are a source of tremendous pride for me. Much of what I have been able to accomplish in research has been in partnership with these excellent students. In chronological order of their degree dates, they are Dan Kersten (1983), Gary Rubin (1983), Yuanchao Gu (1990), Lisa Isenberg (1992), Vic Riley (1994), Hiro Akutsu (1995), John Hilton (1995), Bosco Tjan (1996), Wendy Braje (1997), Paul Beckmann (1998), Tim Klitz (2000), Alberto Ortiz (2002), Nick Giudice (2004), Sing-Hang Cheung (2005), Deyue (Dion) Yu (2009), Amy Kalia (2009), MiYoung Kwon (2010), Chris Kallie (2012), and Tiana Bochsler (2013).

I have also had wonderful postdocs who have worked with me and who have played critical roles in my research. They include Denis Pelli, Gary Rubin, David Parish, Sonia Ahn, Steve Mansfield, Susana Chung, Beth O’Brien, Brian Stankiewicz, Paul Beckmann, Mark Brady, Allen Cheong, Hye-Won Lee, Fang Fang, Joseph Miller, Amy Kalia, Tingting Liu, and Aurelie Calabrese.

Most of all, I am grateful to my wife Wendy Willson Legge and my son Alex Legge. Wendy was my earliest and most dedicated subject and shares credit for whatever I have achieved. Alex is currently en route to a career in medicine and shares many interests with me including baseball and the neuroscience of vision.

Gordon E. Legge

75 E River Rd

Minneapolis, MN 55455

e-mail: [email protected]

ACKNOWLEDGMENTS

I would like to thank several people for help with figure and manuscript preparation—Tiana Bochsler, Aurelie Calabrese, Rachel Gage, Yingchen He, Chris Kallie, Dan Kersten, Wendy Legge, Rob Shakespeare, and Bill Thompson. Thank you to Aries Arditi, Ian Bailey, and Greg Goodrich for instructive discussion concerning the history of the term “low vision”. I also want to thank Aries Arditi, Susana Chung, John Foley, and Gary Rubin for comments on a draft of this article.

I gratefully acknowledge the National Institutes of Health (NIH) for supporting my research on low vision over many years. My research on architectural accessibility has been supported by NIH grant EY017835 and my research on vision and reading by NIH grant EY002934.

I particularly appreciate the advice and encouragement of NIH program officers Connie Atwell, Michael Oberdorfer and Cheri Wiggs.

Received March 10, 2014; accepted May 8, 2014.

REFERENCES

1. NIH/National Eye Institute (NEI). National Plan for Eye and Vision Research. Available at: http://www.nei.nih.gov/strategicplanning/np_low.asp. Accessed March 6, 2014.
2. World Health Organization (WHO) Media Centre. Visual Impairment and Blindness Fact Sheet No. 282. Available at: http://www.who.int/mediacentre/factsheets/fs282/en/. Accessed February 13, 2014.
3. Switzer JV. Disabled Rights: American Disability Policy and the Fight for Equality. Washington, DC: Georgetown University Press; 2003.
4. Solomon A. Far from the Tree: Parents, Children and the Search for Identity. New York, NY: Scribner; 2012.
5. Arditi A. Accessibility and vision rehabilitation science: tear down that wall! Paper presented at the Minisymposium Beyond Large Print: Advances in Accessibility and Technology for the Visually Impaired. The Association for Research in Vision and Ophthalmology (ARVO) meeting, Seattle, WA, May 9, 2013.
6. Arditi A. The visually impaired older minority in the twenty-first century. In: Stanford EP, Nelson TC, eds. Aging & Diversity in the 21st Century. Washington, DC: AARP Research Info Center; 2009:122–30. Available at: http://www.visibilitymetrics.com/sites/visibilitymetrics.com/files/downloads/Arditi%20-%20Visually%20Impaired%20Older%20Minority.pdf. Accessed May 8, 2014.
7. Tenneti R, Johnson D, Goldenberg L, Parker RA, Huppert FA. Towards a capabilities database to inform inclusive design: experimental investigation of effective survey-based predictors of human-product interaction. Appl Ergon 2012; 43: 713–26.
8. Brody BL, Gamst AC, Williams RA, Smith AR, Lau PW, Dolnak D, Rapaport MH, Kaplan RM, Brown SI. Depression, visual acuity, comorbidity, and disability associated with age-related macular degeneration. Ophthalmology 2001; 108: 1893–900.
9. Whitson HE, Ansah D, Sanders LL, Whitaker D, Potter GG, Cousins SW, Steffens DC, Landerman LR, Pieper CF, Cohen HJ. Comorbid cognitive impairment and functional trajectories in low vision rehabilitation for macular disease. Aging Clin Exp Res 2011; 23: 343–50.
10. West SK, Munoz B, Rubin GS, Schein OD, Bandeen-Roche K, Zeger S, German S, Fried LP. Function and visual impairment in a population-based study of older adults. The SEE project. Salisbury Eye Evaluation. Invest Ophthalmol Vis Sci 1997; 38: 72–82.
11. Legge GE, Yu D, Kallie CS, Bochsler TM, Gage R. Visual accessibility of ramps and steps. J Vis 2010; 10: 8.
12. Bochsler TM, Legge GE, Kallie CS, Gage R. Seeing steps and ramps with simulated low acuity: impact of texture and locomotion. Optom Vis Sci 2012; 89: 1299–307.
13. Bochsler TM, Legge GE, Gage R, Kallie CS. Recognition of ramps and steps by people with low vision. Invest Ophthalmol Vis Sci 2013; 54: 288–94.
14. Peli E. Contrast in complex images. J Opt Soc Am (A) 1990; 7: 2032–40.
15. Changizi MA, Zhang Q, Ye H, Shimojo S. The structures of letters and symbols throughout human history are selected to match those found in objects in natural scenes. Am Nat 2006; 167: 117–39.
16. Poirier FJ, Gosselin F, Arguin M. Subjectively homogeneous noise over written text as a tool to investigate the perceptual mechanisms involved in reading. J Vis 2013; 13.
17. Cattaneo Z, Vecchi T. Blind Vision: The Neuroscience of Visual Impairment. Cambridge, MA: MIT Press; 2011.
18. McLuhan M. The Gutenberg Galaxy: The Making of Typographic Man. Toronto, ON: University of Toronto Press; 1962.
19. Bailey IL, Lovie JE. New design principles for visual acuity letter charts. Am J Optom Physiol Opt 1976; 53: 740–5.
20. Ferris FL 3rd, Kassoff A, Bresnick GH, Bailey I. New visual acuity charts for clinical research. Am J Ophthalmol 1982; 94: 91–6.
21. Rubin GS. Measuring reading performance. Vision Res 2013; 90: 43–51.
22. Mansfield JS, Ahn SJ, Legge GE, Luebker A. A new reading-acuity chart for normal and low vision. In: OSA Technical Digest Series: Ophthalmic and Visual Optics/Noninvasive Assessment of the Visual System, vol.3. Washington, DC: Optical Society of America Technical Digest; 1993: 232–5.
23. Mansfield JS, Legge GE. The MNREAD acuity chart. In: Legge GE. Psychophysics of Reading in Normal and Low Vision. Mahwah, NJ: Lawrence Erlbaum Associates; 2007: 167–91.
24. Council of Citizens with Low Vision International (CCLVI). Best Practices and Guidelines for Large Print Documents Used by the Low Vision Community. Available at: http://www.cclvi.org/large-print-guidelines.html. Accessed February 13, 2014.
25. Legge GE. Psychophysics of Reading in Normal and Low Vision. Mahwah, NJ: Lawrence Erlbaum Associates; 2007.
26. Legge GE, Ross JA, Isenberg LM, LaMay JM. Psychophysics of reading. XII. Clinical predictors of low-vision reading speed. Invest Ophthalmol Vis Sci 1992; 33: 677–87.
27. Whittaker SG, Lovie-Kitchin J. Visual requirements for reading. Optom Vis Sci 1993; 70: 54–65.
28. Cacho I, Dickinson CM, Smith HJ, Harper RA. Clinical impairment measures and reading performance in a large age-related macular degeneration group. Optom Vis Sci 2010; 87: 344–9.
29. Latham K, Tabrett DR. Guidelines for predicting performance with low vision aids. Optom Vis Sci 2012; 89: 1316–26.
30. Crossland MD, Rubin GS. Text accessibility by people with reduced contrast sensitivity. Optom Vis Sci 2012; 89: 1276–81.
31. World Wide Web Consortium (W3C). Web Accessibility Initiative. Text Customization for Readability Online Symposium, November 19, 2012. Available at: http://www.w3.org/WAI/RD/2012/text-customization/. Accessed March 6, 2014.
32. Dick WE. Discovering typographic environments for reading with low vision. Presented at the Online Symposium on Text Customization for Readability, November 19, 2012. Available at: http://www.w3.org/WAI/RD/2012/text-customization/r2. Accessed March 6, 2014.
Keywords:

low vision; visual impairment; visual accessibility; vision rehabilitation; reading; mobility

© 2014 American Academy of Optometry