Role of Peripheral Visual Cues in Online Visual Guidance of Locomotion

Marigold, Daniel S.

Exercise and Sport Sciences Reviews: July 2008 - Volume 36 - Issue 3 - p 145-151
doi: 10.1097/JES.0b013e31817bff72

Vision is normally the predominant sensory system used for guiding locomotion. Online visual control is critical for adjusting lower limb trajectory and ensuring proper foot placement. Research suggests that peripheral visual cues play a large role in this online control, particularly in challenging situations.

Visual information is used online to plan and fine-tune the locomotor pattern.

Department of Physiology, Université de Montréal, Montréal, Québec, Canada

Address for correspondence: Daniel S. Marigold, Ph.D., Département de Physiologie, Université de Montréal, C.P. 6128, Succursale Centre-ville, Montréal, Québec, Canada H3C 3J7 (E-mail: daniel.marigold@umontreal.ca).

Accepted for publication: March 11, 2008.

Associate Editor: E. Paul Zehr, Ph.D.

INTRODUCTION

"…vision evolved in animals, not to enable them to 'see' the world, but to guide their movements through it."(7[p183])

Vision is normally intricately linked with action and plays an essential role in guiding locomotion. Details regarding the layout of the environment, identification and characteristics of objects and surfaces, and self-motion information are captured and processed by the visual system. In addition, vision provides information about the body relative to the environment, which is referred to as visual kinaesthesis or visual exproprioception (6,16,17,24). This is in contrast to visual exteroception, which affords information on environmental characteristics such as obstacle height (16,17,24). These types of visual information are critical for implementing avoidance strategies, for making proactive adjustments to accommodate different ground terrain, and for navigation (12,14,16). It is no wonder that visual processing consumes such a large portion of the brain.

Vision is unique in that it can provide the information necessary for successful locomotion at a distance. Thus, visual information may be used to preplan the path to a specific goal. Such feedforward visual control is important in that the locomotor pattern may be modified before balance is disrupted by a hazardous surface or to ensure that an obstacle can be avoided (12). When we navigate cluttered environments and are forced to rapidly alter our direction or modify our step to avoid contact with an object or an undesirable surface, vision may be used in a feedback manner to ensure safe travel (14). Thus, "online" visual control is used to guide or alter a current and/or future movement by checking our progress and facilitating the updating of information about the spatial world. Here, online visual control is defined as the use of visual information gained as a person moves through and interacts with the environment. In this sense, online visual control has both a feedforward (planning) component and a feedback (correction) component.

Although online feedforward planning may dominate in situations where visual information obtained several steps in advance indicates that a gait modification is necessary, when time is of the essence, online visual feedback mechanisms may play a larger role. This visual feedback may in fact serve to update (or fine-tune) the visual information gained in a feedforward manner, depending on the situation. Patla and colleagues (16,19) have demonstrated that unexpected changes in step width and length require visual information one step ahead, whereas sudden changes in direction require visual information two steps in advance. More specifically, success rates are greater than 80% when a visual cue is present one step ahead for a step modification, whereas the success rate for a change in direction given the same amount of time is close to zero (16). Interestingly, fixation strategies seem to optimize safe travel under challenging conditions, including stepping to targets and walking across multisurface terrain, as evidenced by the fact that people generally fixate one to two steps ahead in these situations (8,12,22). This review will focus predominantly on the online corrections made within one to two steps before a gait modification. In addition, this review will underscore the importance of peripheral visual information and argue that peripheral visual cues play a role in online guidance of locomotion in challenging situations such as during obstacle avoidance and over different types of ground terrain. What has emerged from recent research on this topic is the idea that peripheral vision from the lower visual field is particularly important for this task (13,14).

EVIDENCE FOR ONLINE VISUAL GUIDANCE OF LOCOMOTION

The study of how vision is used to guide locomotion began in earnest with the work of Gibson in the 1950s (6). Gibson proposed that the patterns and changes of patterns in the optic array defined by the visual field are stimuli for the control of locomotion because they provide information about the direction and speed of self-motion (6). Because this optic flow is generated during movement, it is likely obtained online. One way in which this information may be used to control locomotion is in steering toward a goal (28,29; although note the contrasting views of other researchers described in these articles). For example, Warren et al. (29) have shown using an immersive virtual environment that as optic flow information is added, people increasingly rely on this information and demonstrate straighter paths with smaller heading errors. In addition, optic flow may be used to calculate tau, a measure of the time to contact with an object. For instance, Warren et al. (30) have argued that tau is used for controlling the vertical impulse applied to the ground by the lower limb to ensure accurate foot placement onto irregularly spaced targets while running on a treadmill.
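
To give a concrete sense of how tau can be derived from purely optical variables, the following is a minimal sketch in which tau is approximated as the optical angle subtended by an object divided by its rate of expansion. This is a generic illustration of the tau idea, not the specific procedure used in ref. 30, and all distances and speeds are assumed values.

import numpy as np

def time_to_contact(theta, theta_dot):
    # Tau: optical angle subtended by the object (rad) divided by its rate
    # of expansion (rad/s). For an object approached at constant speed this
    # approximates the time remaining until contact.
    return theta / theta_dot

# Assumed example: a 0.3-m-wide obstacle 4 m away, approached at 1.5 m/s.
distance, width, speed, dt = 4.0, 0.3, 1.5, 0.01
theta_now = 2 * np.arctan(width / (2 * distance))
theta_next = 2 * np.arctan(width / (2 * (distance - speed * dt)))
theta_dot = (theta_next - theta_now) / dt          # numerical expansion rate
print(time_to_contact(theta_now, theta_dot))       # ~2.66 s (true value: 4 m / 1.5 m/s ~ 2.67 s)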

Therefore, movement of a person through the environment allows for a vast array of visual information to be obtained. The question then becomes, "How is online visual information incorporated into the locomotor pattern?" And more specifically, "How is visual exproprioceptive and exteroceptive information integrated to generate a gait modification?" A theoretical framework for this is illustrated schematically in Figure 1, which helps to show how research on the use of online visual control described below contributes to our understanding of how we safely ambulate in our complex environment. Once a path is determined, visual information from peripheral and central visual fields allow for continual updating of the spatial environment as an individual walks. Visual exteroceptive information regarding obstacles and challenging ground terrain provides valuable insight for what will be termed a "hazard detector." This hazard detector evaluates the presence and threat of obstacles, unstable surfaces, pedestrians, and other salient visual objects. The gain of this detector can be preset based on the task or context of the situation. Interestingly, as the demand of the task becomes more challenging, people require greater amounts of visual sampling and fixate locations closer to the current step (12,16). The information integrated by the hazard detector may then be used as part of an online feedforward planning signal (via path "b" in Fig. 1) to adjust the locomotor pattern to either anticipate-for example, a change in ground terrain-or avoid an upcoming obstacle. Alternatively, if the detection of the hazard requires an immediate online correction, visual information may use a faster online route (via path "a" in Fig. 1). The nervous system may exploit a forward model that integrates an efference copy of the motor command to move the leg with sensory feedback to predict the state of the limb (Fig. 1), which is then combined with the relevant visual exteroceptive information enabling a fast online visual feedback pathway (2). Modifications of the locomotor pattern in one step (~400-600 ms) or two steps (~800-1200 ms) naturally dictate relatively quick visuomotor processing. Indeed, one series of experiments has recently demonstrated that changes in muscle activity can occur within approximately 120 ms after a sudden appearance of an obstacle dropped on a moving treadmill belt (14).
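
As a toy illustration of how the available time might determine which of the two routes in Figure 1 is used, consider the sketch below. The decision rule and thresholds are illustrative assumptions only (the ~0.5-s step duration and ~120-ms visuomotor latency are taken from the figures quoted above); this is not a mechanism proposed in the cited work.

def select_visuomotor_path(time_to_hazard_s, step_duration_s=0.5,
                           min_visuomotor_latency_s=0.12):
    # Toy rule: if the hazard will be reached before the minimum visuomotor
    # latency has elapsed, no correction of the current step is possible;
    # if it will be reached within the current step, use the fast online
    # feedback route (path "a"); otherwise plan ahead (path "b").
    if time_to_hazard_s < min_visuomotor_latency_s:
        return "too late to modify the current step"
    if time_to_hazard_s < step_duration_s:
        return "path a: fast online visual feedback correction"
    return "path b: online feedforward planning of an upcoming step"

for t in (0.08, 0.30, 1.00):
    print(t, select_visuomotor_path(t))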

Figure 1

Visual exproprioceptive information (provided predominantly by peripheral visual cues, as discussed below) can facilitate a prediction of the current state of the lower limbs. This exproprioceptive information regarding the position of the lower limbs relative to the environment can be integrated with muscle proprioceptive feedback from the lower limbs and an efference copy of the motor command. This information is then compared with additional information from the hazard detector to drive an online correction. The remainder of this review will describe studies that have investigated how vision is used online to guide locomotion and indicate, where appropriate, how this might fit into the above model.
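
The following sketch shows, in a deliberately simplified form, the kind of predictor-corrector computation just described: a forward-model prediction of foot position from an efference copy, corrected by visual exproprioceptive and proprioceptive feedback, and then compared against the hazard detector's output. The state representation, weights, and decision rule are illustrative assumptions, not an implementation from the cited work.

def estimate_limb_state(x_prev, efference_copy, visual_expro, proprio,
                        w_visual=0.5, w_proprio=0.3):
    # Forward-model prediction: previous estimate advanced by the commanded
    # displacement (the efference copy of the motor command), then corrected
    # with visual exproprioceptive and muscle proprioceptive feedback.
    # Weights are illustrative assumptions.
    x_pred = x_prev + efference_copy
    return x_pred + w_visual * (visual_expro - x_pred) + w_proprio * (proprio - x_pred)

def online_correction(predicted_landing, hazard_zone, margin=0.05):
    # Compare the predicted landing location (m, along the travel direction)
    # with the hazard detector's estimate of a hazardous region; return a
    # foot-placement shift if the landing would fall inside it (toy rule,
    # corresponding loosely to path "a" in Fig. 1).
    lo, hi = hazard_zone
    if lo - margin < predicted_landing < hi + margin:
        return (hi + margin) - predicted_landing
    return 0.0

# Assumed example: foot estimated at 0.40 m, command moves it 0.35 m, noisy
# feedback, and a hazardous patch of ground occupying 0.70-0.85 m ahead.
x_est = estimate_limb_state(0.40, 0.35, visual_expro=0.78, proprio=0.74)
print(x_est, online_correction(x_est, hazard_zone=(0.70, 0.85)))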

The use of visual information in an online manner occurs in many different contexts. One of the most studied tasks is stepping over an obstacle in the travel path (14,15,17,18,21,24,25). In this scenario, the height of the obstacle must be determined to elevate the limb for successful clearance, and the distance to the obstacle must be continually updated during the approach to ensure proper foot placement. The commonly used measures (lead and trail limb horizontal distance and toe clearance over the obstacle) are illustrated in Figure 2. In a recent study, Patla and Greig (18) examined an obstacle avoidance task under various visual conditions (e.g., static and dynamic visual sampling vs full vision). Incorrect foot placement before the obstacle, rather than inadequate limb elevation, was responsible for failures; foot placement variability decreased as an individual approached the obstacle in all vision conditions, but the greatest decrease was seen in the full vision condition. This suggests that the distance to the obstacle is monitored online during the approach and may feed into feedforward planning. Similarly, Rietdyk and Rhea (25) found that when position cues were present while participants wore goggles that blocked the lower visual field (cues that included the position of the obstacle in the step preceding crossover), foot placement distances relative to the obstacle were similar to those in the full vision condition, whereas the distances were significantly increased when no position cues were provided. This further supports the notion that visual information regarding lower limb position relative to the obstacle (i.e., visual exproprioception) is used online to control lead and trail limb foot placement before crossing over an obstacle. Intermittent gaze fixations on the obstacle may facilitate this updating of obstacle distance (21). Visual exproprioception is also used online to fine-tune limb trajectory when stepping over an obstacle (15,17,24) and may contribute to limb state estimation (Fig. 1). Toe clearance and toe clearance variability increase when stepping over obstacles of varying heights with vision from the lower visual field occluded (17). Interestingly, visual exteroception regarding obstacle height is used in a feedforward manner (potentially through path "b" in Fig. 1): the presence of obstacle height information (i.e., enhanced visual exteroception via an obstacle height cue available during crossing) when the lower visual field is blocked (i.e., when visual exproprioceptive information is eliminated) does not influence toe clearance or toe clearance variability during obstacle crossing (24).
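
For readers unfamiliar with these measures, the sketch below shows how lead/trail foot placement distance and toe clearance (Fig. 2) could be computed from toe-marker and obstacle coordinates. The variable names and example values are hypothetical and are not the analysis code of the cited studies.

def crossing_measures(toe_x_before, toe_x_after, toe_z_at_crossing,
                      obstacle_x, obstacle_height):
    # toe_x_before: anterior-posterior toe position (m) at the last foot
    #               placement before the obstacle
    # toe_x_after:  toe position (m) at the first placement after the obstacle
    # toe_z_at_crossing: vertical toe position (m) as the toe passes the obstacle
    placement_distance_before = obstacle_x - toe_x_before   # horizontal distance to obstacle
    placement_distance_after = toe_x_after - obstacle_x     # horizontal distance past obstacle
    toe_clearance = toe_z_at_crossing - obstacle_height     # vertical clearance over obstacle
    return placement_distance_before, placement_distance_after, toe_clearance

# Illustrative values (m):
print(crossing_measures(toe_x_before=2.05, toe_x_after=3.40,
                        toe_z_at_crossing=0.28, obstacle_x=3.00,
                        obstacle_height=0.15))
# -> (0.95, 0.40, 0.13): placement distances before/after the obstacle and toe clearance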

Figure 2

Another aspect of locomotion that uses online visual information is the control of landing and of stepping to a stationary target. When vision is occluded while stepping down to a lower level, anterior-posterior center of mass velocity is attenuated, knee flexion and ankle plantar flexion are greater, peak vertical ground reaction force increases, and individuals maintain a greater amount of body weight on the support limb compared with normal vision (1). Although swing limb trajectory may be predominantly under online feedforward visual control (8), modifications of the stepping limb during the swing phase are still possible using online visual feedback (23). Specifically, the nervous system preplans the swing trajectory for an accurate step to a target before the foot leaves the ground, as evidenced by the fact that visual denial of the stepping target does not influence the swing limb (8). In contrast, recent evidence has shown that when greater precision is required during a step, the accuracy of stepping to a target is reduced when vision is occluded at the point of foot lift-off (23). This is illustrated in Figure 3 for both fast and slow steps to straight-ahead and diagonal locations.

Figure 3

Route planning for navigation is an additional avenue that has provided evidence for the use of online visual control. During route navigation in complex environments, visual cues gained online can be used in a feedforward sense to plan the path, with feedback mechanisms adjusting the locomotor trajectory when necessary. Fajen and Warren (3), using a behavioral dynamics approach to modeling visual control of locomotion for steering and avoiding obstacles, have shown that the path a person takes is based on responses to visually specified goals (or attractors) and obstacles (or repellers) obtained online as the person interacts with the environment. The strength of the goal increases with its angle from the current heading and decreases with distance. In contrast, the repulsion of the obstacle decreases with angle and distance. By modeling the steering dynamics as a linear combination of the obstacle and goal terms, the route path was predictable. Furthermore, Patla et al. (20) have developed a new model for route planning that could account for the chosen travel path approximately 90% of the time. This model, called safe corridor identification, involves online assessment of safe corridors based on obstacle-free passages and corridor width, while minimizing deviations from the current travel direction and from the end goal. Individuals were required to walk through a cluttered environment, avoiding tall pylons, to reach an end goal. Support for online control was evident from the time lag (~1.3 s, corresponding to approximately two steps) between fixation on a pylon and the initiation of a turn, from intermittent fixations between the goal and the path/pylon region to plan the path or check that the current path is appropriate, and from the accuracy of the model incorporating online features.
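
A minimal numerical sketch of this class of attractor/repeller steering dynamics is given below. The functional forms follow the verbal description above (goal attraction grows with goal angle and weakens with distance; obstacle repulsion decays with both angle and distance), but the specific equations and parameter values are illustrative assumptions rather than the fitted model of ref. 3.

import numpy as np

def steering_dynamics(phi, phi_dot, goal, obstacles, position,
                      b=3.3, k_g=7.5, k_o=200.0, c1=0.4, c2=0.4, c3=6.5, c4=0.8):
    # Angular acceleration of heading (rad/s^2) from damped attractor/repeller
    # dynamics. phi: heading (rad, same frame as the bearings below);
    # goal/obstacles/position: (x, y) coordinates in metres. Parameters illustrative.
    def bearing_and_distance(target):
        dx, dy = target[0] - position[0], target[1] - position[1]
        return np.arctan2(dx, dy), np.hypot(dx, dy)    # bearing measured from the +y axis
    psi_g, d_g = bearing_and_distance(goal)
    acc = -b * phi_dot - k_g * (phi - psi_g) * (np.exp(-c1 * d_g) + c2)
    for obs in obstacles:                               # each obstacle repels the heading
        psi_o, d_o = bearing_and_distance(obs)
        acc += k_o * (phi - psi_o) * np.exp(-c3 * abs(phi - psi_o)) * np.exp(-c4 * d_o)
    return acc

# One Euler integration step for a simulated walker heading straight ahead (phi = 0):
dt, phi, phi_dot = 0.05, 0.0, 0.0
acc = steering_dynamics(phi, phi_dot, goal=(2.0, 8.0), obstacles=[(0.5, 4.0)],
                        position=(0.0, 0.0))
phi_dot += dt * acc
phi += dt * phi_dot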

ROLE OF PERIPHERAL VISUAL INFORMATION DURING LOCOMOTION

The visual information available to the retina is largely captured through the peripheral visual field rather than the central visual field. Although detailed information about salient features of the environment can be obtained by making saccadic eye movements to these areas (4,12), thus bringing the region of the retina with the greatest visual acuity, the fovea, onto the image, the visual field is far too large to be inspected in this way all at once. For instance, the foveal region extends only to an eccentricity of approximately 1° and the parafoveal region from approximately 1° to 5°; the peripheral visual field, in contrast, comprises the remainder of the visual field (4). As a result, the visual system must rely on a combination of the peripheral and central visual fields to guide action in the complex settings in which we walk.

Recent evidence has demonstrated the importance of peripheral visual cues during walking. Indeed, the peripheral visual field seems to be critical for walking on different ground terrain, stepping over obstacles, and avoiding obstacles such as people and other objects (5,10,13,14,17,20,24-28). The importance of peripheral vision in the control of locomotion is highlighted in individuals with visual impairments that restrict visual information from this region. Such visual loss may stem from different disorders, including retinitis pigmentosa, which is characterized by progressive visual field loss starting in the periphery. Peripheral visual field loss associated with retinitis pigmentosa results in deficits in mobility (5). For instance, walking speed decreases, and individuals are more likely to bump into obstacles or other objects, stumble, or fail to detect stairs. Similar mobility deficits are seen with the peripheral visual field loss associated with aging (26). In addition, persons with retinitis pigmentosa demonstrate different gaze fixation strategies when navigating (27). Specifically, Turano et al. (27) found that individuals with retinitis pigmentosa directed approximately 80% of their gaze fixations downward, at objects, or at the environmental layout, whereas individuals with normal vision spent the majority of the time fixating ahead or at the goal. This suggests that, normally, information about objects such as obstacles and about layout characteristics is obtained from peripheral visual cues, and that with peripheral visual field loss, information regarding the details of these items must be acquired through direct fixation. In support of this, Turano et al. (28), using an immersive virtual environment of a forest scene with and without a goal, found that people with peripheral visual field loss (from either retinitis pigmentosa or glaucoma) had greater heading errors when the goal was absent, and argued that peripheral visual cues are critical for establishing and/or updating the spatial structure of the environment.

CONTRIBUTION OF PERIPHERAL VISUAL CUES TO ONLINE CONTROL OF LOCOMOTION

One of the central arguments of this review is that peripheral visual cues play a role in the online guidance of locomotion. Figure 4 illustrates this proposed role. To safely negotiate the environment, the ability to detect obstacles, sudden drop-offs, and/or changes in ground terrain that may compromise stability is paramount. Active monitoring of ground conditions would allow timely and appropriate visuomotor processing to fine-tune foot placement. Furthermore, online detection of pedestrians in the peripheral visual field could help avoid potential collisions.

Figure 4

We recently found support for the argument that peripheral visual cues play a role in the online control of locomotion using an obstacle avoidance task (14). In this study, individuals had to step over an obstacle that was randomly released onto the moving treadmill on which they were walking. The time available to correctly step over the obstacle varied from approximately 200 to 450 ms and thus required rapid online visuomotor processing. Vision was manipulated such that individuals either fixated the obstacle (i.e., central vision condition) while walking or fixated a target approximately two steps ahead (i.e., peripheral vision condition) such that the obstacle was located at an angle of eccentricity of approximately 27° (Fig. 5). In the latter condition, individuals were told to maintain fixation on the target until they detected the release of the obstacle and were then free to move their eyes when and how they wanted. Individuals rarely redirected their gaze in the peripheral vision condition despite having sufficient time to do so (Fig. 5): in only 18% of trials did individuals make a saccadic eye movement, and it was always directed to the landing area rather than the obstacle (14). These results suggest that peripheral visual cues are used online to detect the presence of the obstacle (potentially through a rapid online visual feedback pathway; path "a" in Fig. 1) and to initiate and monitor the lower limb avoidance reaction.
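
As a rough geometric illustration of how such an eccentricity arises when gaze is directed at the ground ahead, the sketch below computes the angle between the line of gaze to a fixated ground point and the direction to a nearer object on the ground. The eye height and distances are assumed values chosen only to show that the result is of the same order as the ~27° reported in the study.

import numpy as np

def ground_point_eccentricity(eye_height, fixation_distance, object_distance):
    # Angle (deg) between the gaze direction to a fixated point on the ground
    # and the direction to another ground object, both straight ahead of the
    # walker. All distances in metres; a purely geometric sketch.
    gaze_angle = np.arctan2(eye_height, fixation_distance)      # below horizontal
    object_angle = np.arctan2(eye_height, object_distance)
    return np.degrees(object_angle - gaze_angle)

# Assumed values: 1.6-m eye height, fixation ~2 steps (1.5 m) ahead,
# obstacle released ~0.5 m in front of the feet.
print(ground_point_eccentricity(1.6, 1.5, 0.5))   # ~26 deg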

Figure 5

Peripheral visual cues are also used online during the crossover step to avoid an obstacle when the obstacle is seen well in advance. Gaze fixations during obstacle approach are directed to the ground in front and/or to the obstacle itself and may aid in feedforward planning of the approach phase (21). However, during the crossover step, fixations are directed toward the landing area (21). This implies either that peripheral visual cues of the obstacle are sufficient for monitoring the lower limb trajectory over the obstacle or that this phase is preplanned. As previously discussed, blocking the lower visual field results in increased toe clearance and toe clearance variability, supporting the idea that peripheral visual cues (i.e., visual exproprioception) are important for clearing the obstacle (17,24,25).

To negotiate the differences in ground terrain encountered when we walk, appropriate visual information must be acquired. Using a multisurface terrain paradigm, we found that individuals fixate approximately two steps ahead and that fixations are directed to task-relevant areas, that is, surfaces eventually stepped on (12). This fixation strategy would accommodate a change in direction or a step modification (16,19) if visual input processed online deems these actions necessary. The question becomes, "Are peripheral visual cues from the lower visual field used to monitor online the current lower limb trajectory and/or ground conditions when fixating two steps ahead?" To address this idea, we had participants walk on the multisurface terrain with normal vision and while wearing special glasses that blocked approximately 30° to 40° of the lower visual field (13). As a result, participants were unable to obtain visual information about their lower limbs and about the ground approximately 1 to 1.5 steps in front of them when fixating 2 steps ahead. When the lower visual field was occluded, participants altered their gait pattern such that they walked more slowly and took shorter steps (i.e., a cautious gait strategy). More importantly, participants pitched their head downward toward the ground to a greater extent. This is illustrated in Figure 6, where the head pitch angle for one trial of one young and one older adult is shown. Notice that the downward head pitch is increased in both age groups, indicating that these results are independent of age. These results suggest that individuals require visual information from the lower visual field when walking across multisurface terrain. That a compensatory head movement is required when the lower visual field is occluded indicates that the missing peripheral visual information is normally used online to adjust limb trajectory and/or foot placement on the unstable ground.
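
A rough geometric sketch helps to show why pitching the head downward compensates for a head-fixed occlusion of the lower visual field: rotating the head rotates the occlusion boundary toward the body, bringing nearer ground back into view. The eye height and boundary angles below are assumed values for illustration, not measurements from the cited study.

import numpy as np

def nearest_visible_ground(eye_height, boundary_below_ahead_deg, head_pitch_deg=0.0):
    # Distance (m) to the nearest visible ground point when a head-fixed occluder
    # blocks everything more than boundary_below_ahead_deg below the head's
    # straight-ahead direction. Pitching the head down (positive head_pitch_deg)
    # rotates that boundary downward in space, so nearer ground becomes visible.
    angle = np.radians(boundary_below_ahead_deg + head_pitch_deg)
    return eye_height / np.tan(angle)

# Assumed values: 1.6-m eye height, occlusion boundary 40 deg below straight ahead.
print(nearest_visible_ground(1.6, 40.0))                     # ~1.9 m of ground ahead is hidden
print(nearest_visible_ground(1.6, 40.0, head_pitch_deg=15))  # ~1.1 m with a 15-deg downward head pitch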

Figure 6

The ability to navigate in a cluttered environment and avoid objects necessitates the use of the peripheral visual field. It is difficult to walk through a crowded shopping mall fixating the names of various stores while trying to find a particular item, all the while avoiding the many people wandering about. Consequently, pedestrians in, for example, a mall may be monitored in the peripheral visual field. Jovancevic et al. (10) reported that 60% of pedestrians who had been fixated while not on a collision course with the participant were fixated again once they switched to a path on which a collision was imminent, a change presumably detected via peripheral visual cues. Similarly, investigation of gaze fixations while individuals walked through an intricate pattern (or grid) of pylons to reach an end goal revealed that fixations on the pylons at which individuals changed direction occurred only approximately 10% of the time in the early rows of the grid and 50% of the time in the later rows (20). These results suggest that the location of the turn is acquired in the peripheral visual field, particularly early in the walking trial, because fixations on the actual object to be avoided were rare. Thus, active monitoring of visual stimuli in the periphery can be used to guide future fixations and alter the locomotor pattern to accommodate the goal of the current task.

SUMMARY AND IMPLICATIONS

To conclude, vision provides details regarding heading direction, upcoming hazards, and limb position relative to the environment that allow us to safely navigate cluttered environments. Much of this information is obtained from the optic flow created through movement. This visual exteroceptive and exproprioceptive information is used online both for feedforward planning and for feedback corrections, where the latter often serve to update the visual information previously acquired. As evident from this review, vision from the peripheral visual field plays a large role in this online control of locomotion. Visual information from the peripheral visual field is important for monitoring changes in ground terrain and for adjusting foot placement and lower limb trajectory when stepping over obstacles. Indeed, peripheral vision, from the lower visual field in particular, may provide the predominant source of visual exproprioceptive information for limb state estimation and the resulting error signal driving online feedback corrections.

From a clinical point of view, the importance of the lower visual field during locomotion may pose a problem for individuals who wear multifocal glasses. Many activities of daily living require vision from the lower visual field, including negotiating curbs and other obstacles, changes in ground terrain, and stairs. Multifocal glasses impair depth perception and contrast sensitivity when one looks through the lower portion of the lens and, as a result, are likely to interfere with these tasks (11). Recently, Johnson et al. (9) demonstrated that older adults who wore multifocal glasses had increased vertical toe clearance variability when stepping to a new height, compared with those who wore single-vision glasses, and were therefore more likely to trip. It is not surprising, then, that older adults who wear multifocal glasses are twice as likely to fall as those who do not wear these glasses (11). Thus, individuals at high risk for falling should use caution when wearing multifocal glasses in complex and challenging environments.

Although the model and supporting data described in this review have shown how and when vision contributes to online corrections during locomotion, several major questions remain to be addressed. In particular, where and how does visuomotor processing occur in the brain? Work using single-cell recordings in walking animals may provide a much-needed understanding of the intricacies of online visual guidance of locomotion.

Acknowledgments

The author thanks Dr Trevor Drew for comments regarding an earlier version of this manuscript. Support was provided by the Canadian Institutes of Health Research. This review is dedicated to the memory of Dr Aftab Patla. Dr Patla was a pioneer in the study of visual control of locomotion. His innovative studies have influenced researchers from many different disciplines, and his passion for research will be carried on by the many students he mentored over the years.

References

1. Buckley, J.G., M.J. MacLellan, M.W. Tucker, A.J. Scally, and S.J. Bennett. Visual guidance of landing behaviour when stepping down to a new level. Exp. Brain Res. 184:223-232, 2008.
2. Desmurget, M., and S. Grafton. Forward modeling allows feedback control for fast reaching movements. Trends Cogn. Sci. 4:423-431, 2000.
3. Fajen, B.R., and W.H. Warren. Behavioral dynamics of steering, obstacle avoidance, and route selection. J. Exp. Psychol. Hum. Percept. Perform. 29:343-362, 2003.
4. Findlay, J.M., and I.D. Gilchrist. Active Vision: The Psychology of Looking and Seeing, Oxford: Oxford University Press, 2003.
5. Geruschat, D.R., K.A. Turano, and J.W. Stahl. Traditional measures of mobility performance and retinitis pigmentosa. Optom. Vis. Sci. 75:525-537, 1998.
6. Gibson, J.J. Visually controlled locomotion and visual orientation in animals. Br. J. Psychol. 49:182-194, 1958.
7. Goodale, M.A., and G.K. Humphrey. The objects of action and perception. Cognition. 67:181-207, 1998.
8. Hollands, M.A., and D.E. Marple-Horvat. Visually guided stepping under conditions of step cycle-related denial of visual information. Exp. Brain Res. 109:343-356, 1996.
9. Johnson, L., J.G. Buckley, A.J. Scally, and D.B. Elliott. Multifocal spectacles increase variability in toe clearance and risk of tripping in the elderly. Invest. Ophthalmol. Vis. Sci. 48:1466-1471, 2007.
10. Jovancevic, J., B. Sullivan, and M. Hayhoe. Control of attention and gaze in complex environments. J. Vis. 6:1431-1450, 2006.
11. Lord, S.R., J. Dayhew, and A. Howland. Multifocal glasses impair edge-contrast sensitivity and depth perception and increase the risk of falls in older people. J. Am. Geriatr. Soc. 50:1760-1766, 2002.
12. Marigold, D.S., and A.E. Patla. Gaze fixation patterns for negotiating complex ground terrain. Neuroscience. 144:302-313, 2007.
13. Marigold, D.S., and A.E. Patla. Visual information from the lower visual field is important for walking across multi-surface terrain. Exp. Brain Res. March 6, 2008 [Epub ahead of print]. DOI 10.1007/s00221-008-1335-7.
14. Marigold, D.S., V. Weerdesteyn, A.E. Patla, and J. Duysens. Keep looking ahead? Re-direction of visual fixation does not always occur during an unpredictable obstacle avoidance task. Exp. Brain Res. 176:32-42, 2007.
15. Mohagheghi, A.A., R. Moraes, and A.E. Patla. The effects of distant and on-line visual information on the control of approach phase and step over an obstacle during locomotion. Exp. Brain Res. 155:459-468, 2004.
16. Patla, A.E. Understanding the roles of vision in the control of human locomotion. Gait Posture. 5:54-69, 1997.
17. Patla, A.E. How is human gait controlled by vision? Ecol. Psychol. 10:287-302, 1998.
18. Patla, A.E., and M. Greig. Any way you look at it, successful obstacle negotiation needs visually guided on-line foot placement regulation during the approach phase. Neurosci. Lett. 397:110-114, 2006.
19. Patla, A.E., S.D. Prentice, C. Robinson, and J. Neufeld. Visual control of locomotion: strategies for changing direction and for going over obstacles. J. Exp. Psychol. Hum. Percept. Perform. 17:603-634, 1991.
20. Patla, A.E., S. Tomescu, M. Greig, and A. Novak. Gaze fixation patterns during goal-directed locomotion while navigating around obstacles and a new route-selection model. In: Eye Movements: A Window on Mind and Brain, R.P.G. van Gompel, M.H. Fischer, W.S. Murray, and R.L. Hill (Eds), Oxford: Elsevier Ltd, 2007.
21. Patla, A.E., and J.N. Vickers. Where and when do we look as we approach and step over an obstacle in the travel path? Neuroreport. 8:3661-3665, 1997.
22. Patla, A.E., and J.N. Vickers. How far ahead do we look when required to step on specific locations in the travel path during locomotion? Exp. Brain Res. 148:133-138, 2003.
23. Reynolds, R.F., and B.L. Day. Visual guidance of the human foot during a step. J. Physiol. 569:677-684, 2005.
24. Rhea, C.K., and S. Rietdyk. Visual exteroceptive information provided during obstacle crossing did not modify the lower limb trajectory. Neurosci. Lett. 418:60-65, 2007.
25. Rietdyk, S., and C.K. Rhea. Control of adaptive locomotion: effect of visual obstruction and visual cues in the environment. Exp. Brain Res. 169:272-278, 2006.
26. Turano, K.A., A.T. Broman, K. Bandeen-Roche, B. Munoz, G.S. Rubin, S.K. West, and SEE Project Team. Association of visual field loss and mobility performance in older adults: Salisbury Eye Evaluation Study. Optom. Vis. Sci. 81:298-307, 2004.
27. Turano, K.A., D.R. Geruschat, F.H. Baker, J.W. Stahl, and M.D. Shapiro. Direction of gaze while walking a simple route: persons with normal vision and persons with retinitis pigmentosa. Optom. Vis. Sci. 78:667-675, 2001.
28. Turano, K.A., D. Yu, L. Hao, and J.C. Hicks. Optic-flow and egocentric-direction strategies in walking: central vs peripheral visual field. Vision Res. 45:3117-3132, 2005.
29. Warren, W.H., B.A. Kay, W.D. Zosh, A.P. Duchon, and S. Sahuc. Optic flow is used to control human walking. Nat. Neurosci. 4:213-216, 2001.
30. Warren, W.H., D.S. Young, and D.N. Lee. Visual control of step length during running over irregular terrain. J. Exp. Psychol. Hum. Percept. Perform. 12:259-266, 1986.
Keywords:

visual exproprioception; visual exteroception; online control; obstacle avoidance; lower visual field

©2008 The American College of Sports Medicine