Article | March 2012
The role of discrepant retinal motion during walking in the realignment of egocentric space
Journal of Vision March 2012, Vol.12, 4. doi:https://doi.org/10.1167/12.3.4
Tracey A. Herlihey, Simon K. Rushton; The role of discrepant retinal motion during walking in the realignment of egocentric space. Journal of Vision 2012;12(3):4. https://doi.org/10.1167/12.3.4.

Abstract

Visually guided action relies on accurate perception of egocentric direction. Unfortunately, perceived direction easily becomes misaligned. How is this problem overcome? One theory (R. Held & S. J. Freedman, 1963) is that during self-movement the observer uses the relationship between anticipated and experienced sensory feedback as a source of information to maintain alignment. However, data supporting this theory are equivocal, and recent evidence appears contradictory. We reexamined the issue. We injected an error into perceived visual direction and then assessed realignment after a period of walking toward a target. We manipulated the sensory information available (presence of retinal motion, Experiment 1; presence of peripheral motion, Experiment 2) and found that as the amount of retinal motion was reduced (Experiments 1 and 2), realignment of perceived visual direction decreased. When we then (Experiment 3) removed the discrepancy between anticipated and experienced retinal motion, no realignment was observed. Our results provide evidence that a discrepancy between anticipated and experienced sensory feedback is an important source of information for the alignment of egocentric space, with retinal motion having a particular role in driving a realignment of perceived visual direction.

Introduction
To head a football (“soccer” ball), you need to know the direction of the ball relative to your head. To grab for a set of keys resting on your desk, you need to know the direction of the keys relative to your body. To test these assertions for yourself, you can try to perform these actions while wearing glasses containing paired prisms. The prisms change perceived visual direction by rotating the optic array. The change in perceived direction leads to a predictable error in the execution of the action. For example, if the prisms shift perceived direction of the keys 15° rightward, you will aim approximately 15° to the right. 
Problematically, perceived visual direction is prone to drift. For example, it has been reported that in a dark room, sustained eye rotation of 22° to the side for 60 s leads to about a 2° error in perceived direction (Paap & Ebenholtz, 1976). A comparable drift is found with sustained head rotation (Howard & Anstis, 1974). Given the centrality of accurate information about visual direction to the successful visual guidance of action, this raises the question of how the brain maintains a calibrated map of egocentric space. 
In the 1960s, building on von Holst's (1954) “reafference principle,” Held and Freedman (1963) put forward the suggestion that the brain could use the relationship between the anticipated percept and the experienced percept (reafference—the sensory consequences of the self-generated action) to keep the perceptuo-motor systems aligned. 1 The best-known example of this comes from walking. Imagine walking straight forward with your gaze direction fixed; this will produce a radial pattern of retinal motion (Calvert, 1950; Gibson, 1958; see Mollon, 1997, for the contribution of Grindley). The centre of the pattern of motion should coincide with the straight-ahead of the body. If the observer's map of egocentric space is misaligned, the centre of the motion pattern will not be where he or she anticipated it would be. Similarly, if there is an error in the proprioceptive or motor maps, then the observer may move in an unintended direction and so not experience the pattern of motion he or she anticipated. Therefore, a discrepancy between the anticipated and experienced pattern of retinal motion indicates a “misalignment” or an error in the mapping from perception to action. The sign and magnitude of the discrepancy are indicative of the error in the mapping, and it is suggested that the brain can make use of this directly in the realignment process (Held & Freedman, 1963). 
The idea of using the relationship between anticipated and experienced sensory feedback in maintaining alignment is compelling, but is it correct? If we examine the classic example of walking, does exposure to discrepant retinal motion drive a realignment of perceived visual direction? Surprisingly, the answer is not clear. Although there is a considerable historical body of data on this topic, empirical data call two components of the hypothesis into question. Below, we highlight these challenges and add a third observation of our own. 
The first challenge is to the role of retinal motion in realignment. After conducting an extensive series of prism adaptation studies documenting the changes in perceived direction that result from walking, Redding and Wallace (1985) reexamined their assumption of a primary role for retinal motion in realignment. Their experimental studies were motivated by the following logic: If retinal motion drives adaptation, then as the “salience” of retinal motion increases, the magnitude of the realignment should also increase. They varied salience through manipulation of participants' walking speed. They found that the magnitude of realignment (both change in perceived visual direction, which encompasses the eye–head system, and change in perceived proprioceptive direction, which encompasses the hand–head system) did not increase as a function of walking speed. It was, therefore, concluded that realignment is not driven by retinal motion. 2  
A second challenge concerns where realignment takes place, that is, whether sensory discrepancy leads to realignment of perceived visual direction or whether adaptation occurs elsewhere in the perceptuo-motor system. In a recent pair of studies, Bruggeman, Zosh, and Warren (2007) and Bruggeman and Warren (2010) used “Virtual Reality” technology (a position-tracked head-mounted display that allows an observer to move through a virtual, computer-generated environment) to inject an error into the mapping from perception to action: The observer's walking direction within the virtual environment was displaced 10° to the left or to the right of the observer's walking direction in the real world. Head orientation provided their measure of perceived visual direction. Using this measure, they found no evidence for the realignment of perceived visual direction. Instead, they found a change in visuo-locomotor mappings specific to the exposure task: The walking exposure task produced a change only in walking direction post-exposure; it did not produce a change in other tasks that differed from the exposure task, such as throwing. Thus, although Bruggeman et al. did find some adaptation, they did not find evidence for the realignment of perceived visual direction. 
Lastly, we consider the evidence for the necessity of a discrepancy. Held reported (e.g., Held & Bossom, 1961; Held & Mikaelian, 1964) that realignment occurs when observers walk through an environment (active movement) but not when pushed through the same environment in a wheelchair (passive movement). One problem with this comparison is that the active and passive conditions employed are not matched for retinal motion. In a wheelchair, the observer experiences an unfamiliar pattern of retinal motion because the position of the eye and change in eye position (bounce and sway) are not equivalent to those experienced by a walking observer. A second problem is that the comparison is indirect. Passively moving an observer removes the possibility of comparing anticipated and experienced motion—the pattern of motion cannot be anticipated. A more direct approach would be to hold all other factors (path, etc.) constant and compare a condition in which anticipated and experienced motion are discrepant to a condition in which they are the same. 
In sum, the fundamental question of whether discrepant retinal motion plays a role in the realignment of egocentric reference frames thus remains open. We report three experiments that aim to address the three challenges described above. The logic of Redding and Wallace (1985)—that the magnitude of realignment should be dependent on the “salience” of the retinal motion—underpins the first two experiments. Experiments 1 and 2 test the conclusions of both Redding and Wallace (1985) and Bruggeman and Warren (2010). The third experiment examines the role of a discrepancy between anticipated and experienced retinal motion. 
In Experiment 1, rather than manipulating the salience of retinal motion by asking observers to walk faster/slower, we manipulate salience through a temporal manipulation: Retinal motion was either continuously, intermittently, or not available while walking. In Experiment 2, we change the salience of retinal motion through a spatial manipulation by decreasing the observers' field of view. In the third experiment, we examine the role of the discrepancy between anticipated and experienced retinal motion. We created a condition in which the experienced retinal motion is comparable to that experienced in the first two experiments but in which there is no discrepancy between the experienced and anticipated retinal motion. Unlike previous work, here we manipulate the difference between anticipated and experienced retinal motion, not the presence of anticipated retinal motion. 
To anticipate the results, they are in line with Held and Freedman's (1963) hypothesis. We found that as the amount of retinal motion decreased, the magnitude of realignment also decreased. When there was no discrepancy (Experiment 3), we found no evidence for realignment of perceived egocentric direction. 
Experiment 1: Temporal manipulation
Participants wore prisms that injected a 9° error into seen (or visual) direction and walked towards a target. While walking, they were exposed to full (“Motion” condition), intermittent (“StopGo”), or no retinal motion (“NoMotion”). In the Motion condition, participants walked normally with eyes open; retinal motion was continuously available. In the StopGo condition, participants walked with their eyes open but were required to make a definite stop, bringing both feet together, after every step; retinal motion was intermittently available. In the NoMotion condition, participants had to make a definite stop after each step and were only allowed to open their eyes when they were stationary; retinal motion was not available. Following the logic of Redding and Wallace (1985), if retinal motion has a particular role in realignment, then we should expect to find the smallest change in perceived visual direction when retinal motion is not available (NoMotion) and the largest change when it is available (Motion). 
Participants
A total of thirty-two participants (three males) took part in Experiment 1. Two (female) participants were unable to complete all three experimental conditions due to a change in weather conditions and so were removed from the data analysis. All had normal or corrected-to-normal vision. Those with corrected vision wore contact lenses. All participants were right-handed, were studying at Cardiff University, and were paid for their participation. The Cardiff University School of Psychology Ethics Committee approved the study (Experiments 1–3), and all participants gave informed consent. Each participant took part in three within-subject conditions (manipulation of retinal motion availability). 
Experiment setup
To inject an error in perceived direction, we used custom-designed, high-quality prism glasses (see Supplementary materials for an example of the view through the prism glasses) that rotated the scene by approximately 9°. Half of the participants were exposed to base-right, leftward-displacing prisms, and the other half were exposed to base-left, rightward-displacing prisms; assignment was random. 
A variety of visual environments have been used in realignment experiments. In many experiments, observers have walked down a corridor (e.g., Redding & Wallace, 1985) or along a marked path (e.g., Morton & Bastian, 2004). A potential problem with such environments is that path edges provide very salient cues (e.g., splay angle, Beall & Loomis, 1996) to lateral position error. We thought that such salient cues would likely encourage the observers to concentrate on minimising lateral error and that this would inhibit realignment. Therefore, we chose not to use such a corridor or path. Our experiments were conducted in an outdoor environment under full daylight conditions. 
Four perpendicular building walls, as well as bicycles, plant pots, and benches surrounding an open area, offered a typical modern outdoor environment. The walking area was a rectangle of 17 × 5 m. We deliberately chose to use an enclosed rectangular environment to aid comparison with the studies of Warren et al. (Warren, Kay, Zosh, Suchon, & Sahuc, 2001; Bruggeman et al., 2007; Bruggeman & Warren, 2010). One feature of their environment that we were unable to replicate was the presence of vertical posts that provide dense motion parallax information. Because Warren et al.'s environments were computer generated, observers could pass straight through the virtual posts. In the real world, it was not possible to use posts to create dense parallax because the posts would have been obstacles requiring changes in trajectory. 
We think that it is important to flag some features that are peculiar to enclosed rectangular environments. In an enclosed environment, the observer has an additional set of allocentric cues: orientation, and change in orientation, from the perceived slant of the walls (e.g., Beusmans, 1998); position relative to the sides of the environment (see the literature on place and grid cells, e.g., O'Keefe & Dostrovsky, 1971; Taube, Muller, & Ranck, 1990). Therefore, the environment used here (similar to that used by Bruggeman and Warren) is likely to promote more rapid, or more complete, realignment than would be obtained in more typical open or irregular environments. 
Five targets were used; three were located at one end of the area and two at the other. Each target consisted of a vertical metal pole with luminous tape attached at eye level. Participants were instructed to maintain fixation on the target and walk toward it, stopping about 1 m short. 
Measuring realignment
We took two measures of perceived straight-ahead. Straight-ahead is used in measurements of the alignment of egocentric space because it is easy for all observers to identify (compare with asking observers to judge a direction of 15° from straight-ahead). Based on Redding and Wallace's (1997) extensive empirical and theoretical work on the alignment of egocentric frames (see their book for a summary), we measured visual straight-ahead and proprioceptive straight-ahead. For both measures, participants stood 2 m away from a wall. A ruler was attached to the wall to enable the experimenter to record participants' indications of perceived straight-ahead; the numbers on the ruler were small enough to prevent participants from recognising them. The order in which participants were required to complete the alignment measures was randomised. 
Our primary measure of interest was perceived visual straight-ahead. To measure visual straight-ahead, the researcher moved a pointer horizontally along the wall in front of the observer; the observers' task was to verbally indicate when the pointer was straight-ahead of them. The starting direction of the pointer was varied between measurements. The position of the participant was also changed between measurements to prevent them from basing their estimates of straight-ahead on a remembered position on the wall. 
The second measure, proprioceptive straight-ahead, captures the perceived relative positions of parts of the body, the standard being the position of the hand relative to the head. To measure proprioceptive straight-ahead, participants were required to stand parallel to a wall, then, with their eyes closed, they were asked to guide their unseen right arm (all participants were right-handed) to a position that felt to be pointing straight-ahead. When they were confident of their estimate, they were required to turn on a laser pointer held in their pointing hand. Since the eyes are closed, errors in the localisation of straight-ahead are assumed to be a result of a realignment of perceived proprioceptive direction (hand–head position). 
The tasks were completed without prisms. Differences in perceived straight-ahead from pre- to post-prism exposure (aftereffects) were calculated to give a measure of realignment. 
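The pre-to-post aftereffect calculation described above can be sketched in a few lines. This is a minimal illustration, not the authors' code; the function name and the exact sign convention (which prism direction makes a raw positive change adaptive) are our own assumptions — the paper states only that changes in the adaptive direction were assigned positive values.

```python
def adaptive_shift(pre_deg, post_deg, prism):
    """Pre-to-post change in perceived straight-ahead (degrees),
    sign-normalised so that a change in the adaptive
    (error-correcting) direction is positive.

    The mapping of prism direction to sign below is an assumed
    convention for illustration only.
    """
    raw = post_deg - pre_deg              # aftereffect in degrees
    sign = 1.0 if prism == "left" else -1.0
    return raw * sign
```

Normalising the sign in this way is what allows the leftward- and rightward-prism groups to be pooled in the analyses that follow.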
Procedure
First, baseline measures of straight-ahead were established. Next, the participant performed the first walking condition (either Motion, StopGo, or NoMotion—the order of conditions was counterbalanced across participants). Prior to walking, the participant was positioned near one of the five posts. With eyes closed, the participant put on a pair of (prism) glasses. The participant was then rotated 1.5 times so that he or she was facing in the general direction of the posts located at the other end of the rectangular area, at a distance of 17 m (this procedure was followed to stop participants making an automatic compensation for the prisms by a change in head or eye posture). The participant would then open both eyes, turn and face a post chosen by the experimenter, and head toward it (stopping and closing their eyes as appropriate for the condition). 
The participant was told to stop at approximately arm's length; this counted as one walking trajectory. He or she then turned and walked toward the next target as specified by the experimenter. This procedure was repeated to give a total of six walking trajectories for each condition. 
On completion of the first walking condition, the participant, with his or her eyes closed, would be guided to the measurement area (approximately 10 m from the walking area) and a further set of straight-ahead measures was taken. The participant then spent 3 min de-adapting by walking around the experimental area while bouncing and catching a tennis ball. This task was used to accelerate de-adaptation by providing very salient perceptuo-motor feedback and was based on pilot testing to determine how best to quickly de-adapt participants. Following this, the participant performed the second walking condition, again followed by de-adaptation, and then the third walking condition. The order of conditions was counterbalanced across observers. 
Results
Data for left and right prisms were combined for analysis. Mean change in perceived visual and proprioceptive direction from pre- to post-exposure is shown in Figure 1. A change in the correct (adaptive) direction is assigned a positive value. 
Figure 1
 
Mean visual shift (VS) and proprioceptive shift (PS) displayed as a function of the availability of retinal motion. Error bars = ±1 SE.
Change in visual straight-ahead (visual shift, VS) was found to decrease as the availability of retinal motion decreased (see Figure 1) from Motion to NoMotion (Motion vs. NoMotion: t(29) = 2.453, p = 0.020, two-tailed). Interestingly, the proprioceptive shift (PS) measure shows that when retinal motion is not available, the brain still picks up an error and adapts, but the site changes (significant interaction between retinal motion and measure of egocentric direction [F(2, 58) = 5.798, p = 0.005]). 
To support these initial findings, in Experiment 2, we sought convergent evidence through the use of a spatial manipulation. 
Experiment 2: Spatial manipulation
The second experiment is based on the same logic as the first. The difference is that in the second experiment we manipulated the salience of retinal motion by restricting the observers' field of view (FoV; the primary restriction being the vertical extent of the field of view). We examine the consequent changes in the magnitude and location of realignment. Three viewing conditions were used. The first condition is directly comparable to the Motion condition of Experiment 1. In the second condition, goggles reduced the FoV to 75° horizontal and 28° vertical (from an initial FoV of 83° by 65°). 
Because a pair of shutter glasses was available, and because it was preferable for the second experiment to take the same form (three within-subject conditions) as the first, we took the opportunity to include a third condition. In the third condition, the observers had the same field of view as in the FoV condition, but electronic shutters restricted the view temporally to 400-ms windows every 800 ms. We thought this condition might reveal evidence of a combination of spatial and temporal effects. Unfortunately, inspection of the data showed a floor effect for the FoV manipulation. Therefore, we have removed those data from the Results section but include them in the Supplementary materials (Supplementary Figure S1). 
Participants
In Experiment 2, we used a total of thirty participants (8 males). As in Experiment 1, all participants were right-handed students from Cardiff University with normal, or corrected-to-normal (by contact lenses), vision. Participants received payment for their participation. The conditions were completed within subjects; prismatic rotation (either left- or rightward) was manipulated between subjects. 
Experiment setup
The same experimental setup from Experiment 1 was used in Experiment 2. 
Procedure
The procedure for Experiment 2 was the same as that for Experiment 1. In Experiment 2, participants walked naturally (without pausing or closing their eyes) in all three conditions. 
Results
The results are shown in Figure 2. In common with the first experiment, when the availability of retinal motion was decreased, the magnitude of the visual shift (VS) also decreased (Motion vs. FoV: t(29) = 2.335, p = 0.027, two-tailed). 
Figure 2
 
Mean adaptive shift for visual (VS) and proprioceptive (PS) realignment as a function of exposure to retinal motion. Error bars = ±1 SE.
Again, the proprioceptive measure showed the opposite pattern of results (i.e., there was a significant interaction [F(1, 29) = 7.263, p = 0.012]). The magnitude of visual and proprioceptive realignment in this experiment is comparable to that in the Motion condition of the first experiment (there were no significant differences in the magnitude of VS [p = 0.595] and PS [p = 0.277] obtained in the Motion condition of the two experiments). 
Interim summary
If we accept the logic of Redding and Wallace (1985), then the results of Experiments 1 and 2 are in line with the hypothesis that discrepant retinal motion drives realignment of perceived visual direction; when exposure to retinal motion is restricted, the magnitude of realignment of visual direction is reduced. Interestingly, in the absence of retinal motion, errors in perceived direction are detected and a realignment process occurs: Proprioceptive direction changes. 
Experiment 3: Does an error signal drive realignment?
In the final experiment, we looked directly at the role of the discrepancy between anticipated and experienced retinal motion. When an error is injected (through use of prisms or Virtual Reality) in the mapping from perception to action, this produces a discrepancy between anticipated and experienced retinal motion, but it also leads to the observers taking physically curved paths and experiencing an offset retinal motion field. It is thus possible that the latter two consequences were responsible for the changes that we found in Experiments 1 and 2. For example, it has been shown by Scott, Lohnes, Horak, and Earhart (2011) that a period of walking on a rotating treadmill can dramatically influence perceived straight-ahead (they found that proprioceptive straight-ahead shifted to the right for anti-clockwise rotation and to the left for clockwise rotation), and a study by Wu, He, and Ooi (2005) demonstrated that exposure to offset radial motion can alter perceived visual straight-ahead. In the third experiment, we sought to retain the curving trajectory and offset motion field but remove the discrepancy between anticipated and experienced retinal motion. Unlike previous investigators, we did not remove the discrepancy by removing the ability to anticipate retinal motion; rather, we removed the difference between anticipated and experienced retinal motion. We did this by removing the prism glasses and instructing observers to walk toward a moving target (which produces a curved trajectory; see the Experiment setup section). The result was a walked path and experienced retinal motion comparable to those of Experiments 1 and 2. However, in this experiment, there should be no difference between experienced and anticipated retinal motion. Therefore, if realignment is driven by a discrepancy, then we should not find any realignment. We used the same three salience conditions as in Experiment 1 (Motion, StopGo, and NoMotion). 
Participants
Twenty participants (2 males) took part. All participants were right-handed students from Cardiff University with normal vision or vision corrected to normal by contact lenses. Participants took part in return for course credit. As in Experiment 1, walking conditions were manipulated within subjects, and walking direction (i.e., a leftward or rightward curve, resembling that taken while wearing leftward or rightward displacing prisms, respectively) was manipulated between participants, with 10 participants taking leftward curving trajectories and 10 taking rightward curving trajectories toward the target. 
Experiment setup
The same outdoor environment used in Experiments 1 and 2 was used for Experiment 3, but the setup of the posts differed. Eight vertical posts were arranged side by side at diagonally opposite corners of the experimental area. Each post had a light (consisting of a 5-cm vertical strip of red LEDs), attached at eye level. The target was the light that was lit. By lighting each light in succession, the target was moved from left to right or vice versa. The observer's task was to walk toward the currently lit light (see Figure 3). 
Figure 3
 
Experiment setup to produce a rightward curving trajectory; the lights were moved to the opposite corners to produce a leftward curving trajectory. The initial start position is shown in white. The trial would commence when the first light (the leftmost grey circle) was switched on—the observer's task was to simply walk toward the light that was on. After a certain distance, the next light in the sequence was switched on causing the observer to adjust their locomotor path accordingly. Each light was switched on once the observer had passed a particular point, in turn causing a curved trajectory that resembled that taken when walking with a misperception of direction (illustrated as a dashed black line). Spacing and timing of the lights was based on analysis of video records of the trajectories taken when wearing prisms. Lights consisted of a vertical strip (5 cm) of five red LEDs attached to a post at eye level.
Procedure
As in Experiments 1 and 2, the experiment began by taking pre-walking (exposure) measures of perceived straight-ahead. The participant would then be positioned at one end of the walking area (see the starting position highlighted in Figure 3) and instructed to walk toward the lit light. Initially, the leftmost or rightmost light was switched on depending on which trajectory we were attempting to replicate. For rightward curving trajectories, the lights were set up as shown in Figure 3 and the leftmost light was lit first; for leftward curving trajectories, the lights were set up in the other two corners of the rectangular environment and the rightmost light was lit first. After the observer had walked a certain distance, the next light in the sequence was switched on, causing the observer to adjust their locomotor path accordingly. This was repeated until he or she reached the final post at the opposite side of the walking area. The result was a curved trajectory that resembled those in Experiments 1 and 2. 
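The distance-triggered light sequence can be sketched as follows. This is our illustrative reconstruction, not the experimenters' control logic, and the trigger distances are hypothetical: the paper states only that spacing and timing were derived from video records of prism-walking trajectories.

```python
def lit_index(distance_walked, trigger_distances):
    """Index of the light that should currently be lit.

    trigger_distances: ascending distances (m) along the walk at which
    each successive light is switched on. The values used in tests and
    examples here are hypothetical, not those from the experiment.
    """
    idx = 0
    for i, trigger in enumerate(trigger_distances):
        if distance_walked >= trigger:
            idx = i  # the most recently passed trigger determines the lit light
    return idx
```

Switching the target sideways at successive distances in this way pulls the observer onto a smoothly curving path, mimicking the trajectory produced by a prism-induced misperception of direction.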
As in Experiments 1 and 2, after 6 walking trajectories, participants were guided (with eyes closed) to the measurement area and a further set of straight-ahead measures was taken. This was followed by a de-adaptation period of 3 min. The procedure was repeated for each of the three walking conditions (Motion, StopGo, and NoMotion). 
Results
The results are shown in Figure 4. Unlike the previous experiments, no evidence of adaptation was found. 
Figure 4
 
Mean level of adaptive shift. Visual shift (VS) and proprioceptive shift (PS) are shown across all three conditions. Error bars = ±1 SE.
None of the changes in perceived direction were significantly different from zero, and the magnitude of adaptation differed significantly from that obtained in Experiment 1 (a significant between-subjects effect of exposure type, prism vs. lights [F(1, 48) = 11.222, p = 0.002]). 
Discussion
We believe that the three experiments provide a compelling demonstration of the role of retinal motion in the alignment of visual space: as the amount of retinal motion decreased, the degree of visual realignment also decreased. Interestingly, the results also illustrate that other forms of information about misalignment are available because (proprioceptive) realignment was still obtained in the absence of retinal motion. The final experiment shows that a discrepancy between anticipated and experienced sensory feedback drives realignment. We discuss these points in more detail below and relate our findings to previous work. 
Realignment as a function of the amount of retinal motion
We noted in the Introduction section that the role of retinal motion in realignment was directly challenged by the work of Redding and Wallace (1985) and, second, that the work of Bruggeman et al. (2007) challenges the belief that walking leads to a realignment in seen (perceived visual) direction. A consistent and robust effect over two separate experiments leads us to different conclusions. So how do we reconcile our findings with the previous work? 
Redding and Wallace's (1985) conclusion that retinal motion is not responsible for realignment was based on a lack of difference: They found the same magnitude of realignment at three different walking speeds. Our suspicion is that they encountered a ceiling effect: Psychophysical data show that observers are sensitive to small differences between the radial retinal motion fields associated with forward locomotion even at slow retinal speeds (Crowell & Banks, 1993) and that sensitivity is independent of speed over a wide range of speeds; thus, increasing walking speed should have little effect. Our manipulations, in contrast, reduced the information available from retinal motion. Both the temporal (Experiment 1) and spatial (Experiment 2) manipulations led to corresponding changes in realignment. We think that this provides strong evidence for the role of retinal motion in the realignment of egocentric direction. 
Why did we find a different result to Bruggeman et al. (2007) and Bruggeman and Warren (2010)? There are a number of potentially important differences between our studies. One is the field of view. Bruggeman et al. used a Head-Mounted Display (HMD) with an 80° diagonal field of view; the diagonal field of view in our reduced field of view (“FoV”) condition in Experiment 2 was almost identical. In our reduced field of view condition, we found a reduction in the visual shift. Therefore, it is possible that, if Bruggeman et al. replicated their experiment with an HMD with a larger field of view, they too might find a visual shift. We suggest this only as one possible explanation. 
Another possibility is that Bruggeman et al.'s experiment encouraged “world learning” (e.g., Bedford, 1998). In their experiment, motion parallax from the virtual posts provides a vivid cue that observers are not walking straight toward the target. The observer could quickly discover that by making a crabwise movement (Warren et al., 2001) he or she can reduce the motion parallax and so walk a straight course to the target. This would be the equivalent of “side-pointing” (e.g., Redding & Wallace, 1993) in the prism reaching literature. Such task-specific “world learning” would inhibit the normal perceptuo-motor realignment process (Benson, Anguera, & Seidler, 2011). Again, we suggest this only as a possibility. 
A further study merits mention. Morton and Bastian (2004) examined the transfer of adaptation between pointing and walking. They found that adaptation from walking generalised to reaching. When they measured visual shift, they found that 4 out of 5 participants showed shifts in the adaptive direction. These two findings are compatible with ours and incompatible with those of Bruggeman et al. However, the visual shift they reported was small and, in a group analysis, did not reach the 5% statistical significance level, so some caution should be exercised in interpreting these results. 
The StopGo finding that realignment occurs in the absence of continuous retinal motion prompts the question, what information is driving the realignment? Candidate cues would include perspective change and positional cues (see Methods section). We draw attention to research by Beusmans (1998), who found that perspective changes provide information about locomotion direction (such information would be readily available in environments such as ours that contain walls), and research by Hahn, Andersen, and Saidpour (2003), who found that observers could utilise the changes within snapshots of a scene to infer change of viewpoint. 
The necessity of a discrepancy
The last challenge that we listed in the Introduction section is the necessity of a discrepancy to drive realignment. In our third experiment, we removed the discrepancy between anticipated and experienced retinal motion in an attempt to rule out other factors that could have driven a change in perceived visual direction (i.e., walking on a curved path). Comparison of the results of Experiment 3 with the results from the first two experiments allows us to rule out the possibility that realignment in the first two experiments occurred simply because observers walked on a curved path. We believe that our manipulation is immune to the criticisms raised against the active/passive comparison used in previous studies. 
A potential criticism of our manipulation is that the scene changes (the target moves relative to the scene). However, it is not obvious why this might disrupt the realignment process; the primary alternative cues that might explain the results of the first two experiments (physically curving trajectory and offset flow field) were present in this experiment. 
Visual–vestibular discrepancies?
We have described the discrepancy between anticipated and experienced retinal motion. There is, however, another cue that might have a role: vestibular information. If the observer perceives that he or she is walking a straight course (holding the target straight-ahead) but is actually taking a curved trajectory, vestibular cues may signal a discrepancy, that the observer is walking a curved path. We believe that we can exclude this possibility in the reported experiments: The angular acceleration experienced by the observer was exceedingly small and likely below threshold (see Benson, Hutt, & Brown, 1989). 
Hierarchically linked adaptation sites
We found an interaction between proprioceptive and visual adaptation; proprioceptive adaptation occurred when visual adaptation did not. This is interesting because it suggests a hierarchy among adaptation sites. One account of these results would be that when a retinal motion error signal is available the brain attributes the error to an error in perceived visual direction (or eye–head alignment) and adapts accordingly. In the absence of a retinal motion error signal, the brain assumes that the error lies elsewhere, either in the perception of the alignment of body parts (proprioceptive realignment) or in visuo-locomotor mappings (Bruggeman et al., 2007). 
However, the realignment process may be more complex. Hay and Pick (1966) showed that when measured in intervals of 12 h, adaptation could be seen to change from one site to another. Comparison of our findings with other work (Held & Bossom, 1961; Redding, Clarke, & Wallace, 1985; Redding & Wallace, 1985) that reported changes in perceived visual direction over intermediate periods of time suggests that something similar may also occur across shorter time intervals. It would be interesting to explore how adaptation evolves and how the brain settles on a stable or optimal allocation of changes, or realignment, across sites. 
Prisms vs. head-mounted displays
We should anticipate a criticism of these experiments. Is it better to use prisms or HMDs to study the visual guidance of walking and realignment? Traditionally, prisms have been used (e.g., Held & Freedman, 1963; Redding et al., 1985), but some recent work has used HMDs (e.g., Bruggeman et al., 2007; Bruggeman & Warren, 2010), and our results appear to be conflicting. 3  
Let us start with prisms. Warren et al. (2001, p. 214) have suggested that prisms may distort the retinal motion field and hence produce spurious results. Fortuitously, there are data available that speak to this issue (see also Supplementary materials; Figure 2). Odom, Ghude, and Humble (2006) examined the precision of observers' judgements of heading direction from optic flow while they wore base-left prisms, base-right prisms, or no prisms at all. The precision of heading judgements was similar across all three conditions. In a separate series of experiments on the judgement of circular heading with non-canonical flow fields, Kim, Fajen, and Turvey (2000) found that heading perception remained accurate even under dramatic “fishbowl” distortions. 
Another criticism of prisms is that observers who shake their heads while wearing prisms will experience a shearing of the scene. During our walking experiments, observers are instructed to maintain eye-level gaze on the target, so head rotation is minimal. We therefore think this criticism is misplaced. 
Let us now consider HMDs. We recognise (e.g., Rushton & Wann, 1999) that HMDs are very useful devices for research into visual perception. However, HMDs have limitations. First, due to the complexity of their optical design, HMDs suffer from optical distortion. Some distortions, such as pincushioning, can be calibrated out in software (though they typically are not), while others cannot. Second, HMDs have limited spatial and temporal resolution, typically of the order of 1280 × 1024 at 60 Hz, and limited contrast. The temporal resolution imposes delays between movements of the head and the update of the images (see Di Luca, 2010, for a method of quantifying this delay). Tracking technology and processing impose further delays. As a consequence, total delays reach at least into the tens of milliseconds and sometimes into the hundreds. Third, the field of view of HMDs is typically limited; for example, in Bruggeman et al. (2007), the FoV was 80° diagonally. Fourth, HMDs are usually, but not always, used to view a virtual environment. Virtual environments are of restricted ecological realism (e.g., see Bruggeman et al., 2007), richness, and complexity. In addition, we note that further cautions about the use of HMDs in the study of perceptuo-motor realignment have been raised by others in the field (see the final two paragraphs of Redding & Wallace, 2006, for a summary). 
After weighing these considerations and reviewing our experimental aims, we decided to use high-quality, custom-designed prisms in our experiments. In other situations, HMDs might be more appropriate. We also think it is important to note that the differences in outcome associated with the use of HMDs and prisms can be overstated, and there is often good agreement between comparable studies. For example, when considering the role of retinal motion and egocentric direction in walking, studies with prisms (e.g., Rushton, Harris, Lloyd, & Wann, 1998), CAVEs (Guterman, Allison, & Rushton, 2007), and HMDs (Saunders & Durgin, 2011) demonstrate a similar reliance on egocentric direction as the primary cue in the guidance of locomotion. 
Conclusions
The problem of adaptation, or realignment of spatial frames, is fundamental to an understanding of visually guided action. Across three experiments, we demonstrated the important role of discrepant retinal motion in the realignment of perceived egocentric direction during walking. When discrepant retinal motion is present, it drives a shift in perceived visual direction. Interestingly, when retinal motion is absent but other cues indicating a discrepancy are present, errors are still detected, and the realignment occurs elsewhere within the perceptuo-motor system. 
Supplementary Materials
Supplementary PDF - Supplementary PDF 
Acknowledgements
We would like to thank Jon Kennedy, Susanne Ferber, and Petroc Sumner for providing valuable comments on an earlier version of the manuscript and the reviewers and editor for useful feedback during the review process. T.A.H. was supported by an EPSRC Ph.D. studentship. 
Commercial relationships: none. 
Corresponding author: Simon K. Rushton. 
Email: RushtonSK@Cardiff.ac.uk. 
Address: School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff, CF10 3AT Wales, UK. 
Footnotes
1  There is little consensus regarding the most appropriate terminology (“alignment,” “calibration,” “adaptation,” etc.) to use. We have adopted the terminology of Redding & Wallace (1997; see their book for a justification of the choice of terms).
2  Redding and Wallace did not identify alternative causes of realignment, but they suggested both head posture (when an observer dons prisms, to maintain fixation on an object with the eyes in their primary position, the observer may turn his or her head on the shoulders) and auditory information (noting a discrepancy between the seen and heard direction of an object or person) as possible candidates.
3  See Supplementary materials for further details.
References
Beall, A. C., & Loomis, J. M. (1996). Visual control of steering without course information. Perception, 25, 481–494.
Bedford, F. (1998). Keeping perception accurate. Trends in Cognitive Sciences, 3, 4–11.
Benson, A. J., Hutt, E. C., & Brown, S. F. (1989). Thresholds for the perception of whole body angular movement about a vertical axis. Aviation, Space, and Environmental Medicine, 60, 205–213.
Benson, B., Anguera, J., & Seidler, R. (2011). A spatial explicit strategy reduces error but interferes with sensorimotor adaptation. Journal of Neurophysiology, 105, 2843–2851.
Beusmans, J. M. (1998). Perceived object shape affects the perceived direction of self-movement. Perception, 27, 1079–1085.
Bruggeman, H., & Warren, W. H. (2010). The direction of walking—but not throwing or kicking—is adapted by optic flow. Psychological Science, 21, 1006–1013.
Bruggeman, H., Zosh, W., & Warren, W. H. (2007). Optic flow drives human visuo-locomotor adaptation. Current Biology, 17, 2035–2040.
Calvert, E. S. (1950). Visual aids for landing in bad visibility, with a particular reference to the transition from instrumental to visual flight. Transactions of the Illuminating Engineering Society, 15, 183–219.
Crowell, J. A., & Banks, M. S. (1993). Perceiving heading with different retinal regions and types of optic flow. Perception & Psychophysics, 53, 325–337.
Di Luca, M. (2010). New method to measure end-to-end delay of virtual reality. Presence, 19, 569–584.
Gibson, J. (1958). Visually controlled locomotion and visual orientation in animals. British Journal of Psychology, 49, 182–194.
Guterman, P. S., Allison, R. S., & Rushton, S. K. (2007). The visual control of walking: Do we go with the (optic) flow? [Abstract]. Journal of Vision, 7(9):1017, 1017a, http://www.journalofvision.org/content/7/9/1017, doi:10.1167/7.9.1017.
Hahn, S., Andersen, G. J., & Saidpour, A. (2003). Static scene analysis for the perception of heading. Psychological Science, 14, 543–548.
Hay, J. C., & Pick, H. L. (1966). Visual and proprioceptive adaptation to optical displacement of the visual stimulus. Journal of Experimental Psychology, 72, 419–444.
Held, R., & Bossom, J. (1961). Neonatal deprivation and adult rearrangement: Complementary techniques for analyzing plastic sensory-motor coordination. Journal of Comparative and Physiological Psychology, 54, 33–37.
Held, R., & Freedman, S. J. (1963). Plasticity in human sensorimotor control. Science, 142, 455–462.
Held, R., & Mikaelian, H. (1964). Motor sensory feedback versus need in adaptation to rearrangement. Perceptual and Motor Skills, 18, 685–688.
Howard, I. P., & Anstis, T. (1974). Muscular and joint-receptor components in postural persistence. Journal of Experimental Psychology, 103, 167–170.
Kim, N., Fajen, B., & Turvey, M. (2000). Perceiving circular heading in noncanonical flow fields. Journal of Experimental Psychology: Human Perception and Performance, 26, 31–56.
Mollon, J. (1997). “…On the basis of velocity cues alone”: Some conceptual themes. Quarterly Journal of Experimental Psychology, 50, 859–878.
Morton, S. M., & Bastian, A. J. (2004). Prism adaptation during walking generalizes to reaching and requires the cerebellum. Journal of Neurophysiology, 92, 2497–2509.
Odom, J. V., Ghude, P., & Humble, H. (2006). Effect of prism orientation on heading direction in optic flow. Journal of Modern Optics, 53, 1363–1369.
O'Keefe, J., & Dostrovsky, J. (1971). The hippocampus as a spatial map: Preliminary evidence from unit activity in the freely-moving rat. Brain Research, 34, 171–175.
Paap, K. R., & Ebenholtz, S. M. (1976). Perceptual consequences of potentiation in extraocular muscles: An alternative explanation for adaptation to wedge prisms. Journal of Experimental Psychology: Human Perception and Performance, 2, 457–468.
Redding, G. M., Clarke, S. E., & Wallace, B. (1985). Attention and prism adaptation. Cognitive Psychology, 17, 1–25.
Redding, G. M., & Wallace, B. (1985). Cognitive interference in prism adaptation. Perception & Psychophysics, 37, 225–230.
Redding, G. M., & Wallace, B. (1993). Adaptive coordination and alignment of eye and hand. Journal of Motor Behavior, 25, 75–88.
Redding, G. M., & Wallace, B. (1997). Adaptive spatial alignment. Mahwah, NJ: Erlbaum.
Redding, G. M., & Wallace, B. (2006). Generalization of prism adaptation. Journal of Experimental Psychology: Human Perception and Performance, 32, 1006–1022.
Rushton, S., & Wann, J. (1999). Weighted combination of size and disparity: A computational model for timing a ball catch. Nature Neuroscience, 2, 186–190.
Rushton, S. K., Harris, J. M., Lloyd, M. R., & Wann, J. P. (1998). Guidance of locomotion on foot uses perceived target location rather than optic flow. Current Biology, 8, 1191–1194.
Saunders, J. A., & Durgin, F. H. (2011). Adaptation to conflicting visual and physical heading directions during walking. Journal of Vision, 11(3):15, 1–10, http://www.journalofvision.org/content/11/3/15, doi:10.1167/11.3.15.
Scott, J. T., Lohnes, C. A., Horak, F. B., & Earhart, G. M. (2011). Podokinetic stimulation causes shifts in perception of straight ahead. Experimental Brain Research, 208, 313–321.
Taube, J. S., Muller, R. U., & Ranck, J. B. (1990). Head-direction cells recorded from the postsubiculum in freely moving rats: I. Description and quantitative analysis. Journal of Neuroscience, 10, 420–435.
von Holst, E. (1954). Relations between the central nervous system and the peripheral organs. The British Journal of Animal Behaviour, 2, 89–94.
Warren, W. H., Kay, B. A., Zosh, W. D., Duchon, A. P., & Sahuc, S. (2001). Optic flow is used to control human walking. Nature Neuroscience, 4, 213–216.
Wu, J., He, Z. J., & Ooi, T. L. (2005). Visually perceived eye level and horizontal midline of the body trunk influenced by optic flow. Perception, 34, 1045–1060.
Figure 1
 
Mean visual shift (VS) and proprioceptive shift (PS) displayed as a function of the availability of retinal motion. Error bars = ±1 SE.
Figure 2
 
Mean adaptive shift for visual (VS) and proprioceptive (PS) realignment as a function of exposure to retinal motion. Error bars = ±1 SE.
Figure 3
 
Experiment setup to produce a rightward curving trajectory; the lights were moved to the opposite corners to produce a leftward curving trajectory. The initial start position is shown in white. The trial would commence when the first light (the leftmost grey circle) was switched on—the observer's task was to simply walk toward the light that was on. After a certain distance, the next light in the sequence was switched on causing the observer to adjust their locomotor path accordingly. Each light was switched on once the observer had passed a particular point, in turn causing a curved trajectory that resembled that taken when walking with a misperception of direction (illustrated as a dashed black line). Spacing and timing of the lights was based on analysis of video records of the trajectories taken when wearing prisms. Lights consisted of a vertical strip (5 cm) of five red LEDs attached to a post at eye level.