Article  |   September 2012
The relative contributions of radial and laminar optic flow to the perception of linear self-motion
Journal of Vision, September 2012, Vol. 12, 7. doi: https://doi.org/10.1167/12.10.7
Citation: Laurence R. Harris, Rainer Herpers, Michael Jenkin, Robert S. Allison, Heather Jenkin, Bill Kapralos, David Scherfgen, Sandra Felsner; The relative contributions of radial and laminar optic flow to the perception of linear self-motion. Journal of Vision 2012;12(10):7. https://doi.org/10.1167/12.10.7.
Abstract

When illusory self-motion is induced in a stationary observer by optic flow, the perceived distance traveled is generally overestimated relative to the distance of a remembered target (Redlick, Harris, & Jenkin, 2001): subjects feel they have gone further than the simulated distance and indicate that they have arrived at a target's previously seen location too early. In this article we assess how the radial and laminar components of translational optic flow contribute to the perceived distance traveled. Subjects monocularly viewed a target presented in a virtual hallway wallpapered with stripes that periodically changed color to prevent tracking. The target was then extinguished and the visible area of the hallway shrank to an oval region 40° (h) × 24° (v). Subjects either continued to look centrally or shifted their gaze eccentrically, thus varying the relative amounts of radial and laminar flow visible. They were then presented with visual motion compatible with moving down the hallway toward the target and pressed a button when they perceived that they had reached the target's remembered position. Data were modeled by the output of a leaky spatial integrator (Lappe, Jenkin, & Harris, 2007). The sensory gain varied systematically with viewing eccentricity, while the leak constant was independent of viewing eccentricity. Results were modeled as the linear sum of separate mechanisms sensitive to radial and laminar optic flow and are compatible with independent channels for processing these two flow components that add linearly to produce large but predictable errors in perceived distance traveled.

Introduction
Humans and other animals can use optic flow cues to judge the distance they have traveled, a skill known as visual odometry. Systematic errors are found in visual odometry that depend on the details of the motion profile (Lappe, Jenkin, & Harris, 2007; Redlick, Harris, & Jenkin, 2001; Srinivasan, Zhang, & Bidwell, 1997). When subjects are asked to judge when they have reached a previously shown target position in visual simulations of linear self-motion at low constant acceleration or at constant velocity, they indicate that they have arrived after traveling less than the target distance actually simulated (Redlick et al., 2001). However, at higher accelerations (0.1–1 m·s⁻²) subjects are reasonably accurate at judging their motion using visual cues alone. Calculating the distance traveled from optic flow is one part of the process known as path integration, in which the course of an extended movement is estimated by integrating short pieces of the movement to yield the total path (Maurer & Seguinot, 1995; Mittelstaedt & Mittelstaedt, 1973). Extracting self-motion information from optic flow requires transforming visual motion (coded in angular terms) into motion in three-dimensional space, which involves complex geometry and a need for scaling (Frenz & Lappe, 2005; Koenderink, 1990).
Furthermore, the pattern of optic flow generated by forward translation varies across the visual field, and extracting self-motion information from the different patterns requires different computational mechanisms. In a static environment, the flow associated with forward motion can be approximated as laminar in the peripheral visual field and radial in the central field (Andersen & Braunstein, 1985; Stoffregen, 1985). There is physiological evidence for structures sensitive to these different computational requirements: neurons in the monkey cerebral cortex appear to perform such a parsing and are differentially sensitive to radial (Duffy & Wurtz, 1991; Graziano, Andersen, & Snowden, 1994; Saito et al., 1986) and laminar (Albright, 1989) flow. Behavioral evidence also suggests that the radial and laminar components are processed separately. For example, sideways linear vection induced by laminar flow has a longer latency (∼19 s) than forward vection induced by radial flow (∼4 s) (Telford & Frost, 1993), although, despite this longer latency, laminar flow seems to be especially significant in balance control (Andersen & Dyre, 1989). Laminar and radial flows also contribute differently to the ability to detect the heading of motion (Crowell & Banks, 1993, 1996; Warren & Kurtz, 1992).
In order to assess the relative contributions of the radial and laminar components of optic flow in visual odometry, we presented subjects with a visually simulated corridor viewed at different eccentricities relative to the simulated direction of motion. For each viewing condition, subjects used the available optic flow information to estimate when they had reached the position of a previously presented target within this corridor. The visible portion of the corridor was restricted, so that the relative amounts of radial and laminar flow depended on the eccentricity of viewing. When subjects looked straight ahead the flow was almost entirely radial; as they looked more eccentrically, the amount of radial flow fell off and the amount of laminar flow increased.
The use of different measurement techniques to assess the perception of self-motion has caused some confusion in the literature as to whether subjects overestimate (Redlick et al., 2001) or underestimate (Frenz & Lappe, 2005) their distance traveled during vection. This confusion has recently been resolved by modeling the perceived travel distance as the output of a leaky spatial integrator (Lappe et al., 2007). The leaky integrator model predicts opposite errors depending on whether subjects are asked to accumulate a distance during the simulation (simulating motion first and then asking how far they felt themselves to have moved) or to count down a distance (simulating motion through a predetermined distance). By taking the subject's task into account, the leaky spatial integrator model correctly predicts the seemingly conflicting results. This model has been applied to the data collected in the present study.
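To make the direction of the predicted errors concrete, the following is a sketch of the two task types under a leaky integrator; this is our paraphrase of the Lappe et al. (2007) model, and the accumulate-condition equation below does not appear elsewhere in this article. When subjects accumulate distance, the estimate $p$ of distance traveled grows with sensory gain $k$ but leaks at rate $\alpha$ per unit of movement $x$:

\[
\frac{dp}{dx} = k - \alpha p \quad\Longrightarrow\quad p(x) = \frac{k}{\alpha}\left(1 - e^{-\alpha x}\right) \le kx,
\]

so the reported distance saturates and travel distance is underestimated. When subjects instead count down the distance to a remembered target, the leak drains the remaining-distance estimate in addition to the discharge produced by the simulated motion (Equation 1 in Data analysis), so the estimate reaches zero before the full target distance has been covered: subjects report arrival too early, that is, they overestimate their self-motion.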
One well-established phenomenon in virtual environments is that distances are often perceived as closer than they really are (Willemsen, Gooch, Thompson, & Creem-Regehr, 2008). In fact, even in the real world there is considerable distance compression, especially for further distances (Creem-Regehr, Willemsen, Gooch, & Thompson, 2005). In order to assess the possible contribution of this perceptual distortion to the perception of distance traveled, estimates of perceived distances to the targets were also obtained. 
Methods
Subjects
Subjects (n = 10, two female, mean age = 41) included all the authors and some visiting students in the lab of Rainer Herpers. Experiments were performed in a manner consistent with the Declaration of Helsinki and with ethics approval from York University and the Bonn-Rhein-Sieg University of Applied Sciences.
Equipment
Subjects sat on a stationary exercise bicycle with their heads 120 cm from a triptych of vertical screens, with the two side screens attached at an angle of 120° (Figure 1). Each screen was 138 cm wide and 104 cm high and was mounted with its lower edge 125 cm above the floor. Head movement was tracked using a six-camera OptiTrack system (NaturalPoint, Corvallis, OR). The tracking system provided an angular resolution of better than 1°, a positional resolution of better than 1 cm, and a temporal lag of less than 50 ms. The visual display was updated based on the subject's head position and orientation within the simulated environment.
 
Figure 1 (movie)

The experimental setup. Subjects sat on a stationary exercise bike in front of a large triptych of screens. In Experiment 1 (A) they were shown a corridor that then moved visually towards them, and they pressed a button when they reached the position of a previously shown target. In Experiment 2 (B) they judged the distance of a static target as a multiple of the distance to a standard.
Visual simulation
The visual display simulated the appearance of being in and looking down a corridor 2 m wide, 2.5 m high, and 250 m long, centered on the central display screen and aligned with the bicycle. The corridor was wallpapered with vertical, 0.5-m-wide stripes of various colors; five times a second, 20% of the stripes changed color to prevent tracking of individual stripes. A striped frame was drawn around the corridor at a simulated distance of 2 m from the subject. This frame was fixed relative to the subject and remained visible throughout the experiment. The target was a simulated wooden door with cross beams, with the same width and height as the corridor.
Procedure
Experiment 1: Self-motion perception
Subjects sat on a stationary exercise bicycle wearing the head tracking system with their left eye occluded by an optician's patch. They looked down a simulated stationary corridor and viewed a target simulated at 8, 12, 16, or 20 m beyond the reference frame (which was simulated 2 m in front of the subject). The video display was yoked to the head tracker. Subjects were encouraged to assess the target's distance by moving their head from side to side. After viewing the corridor and target they pressed a button that caused the target to disappear and the visual display to shrink to an oval region that faded gradually to black, starting at 40° (h) × 24° (v) and reaching black by 84° (h) × 50° (v). This ensured a consistent field size for all viewing eccentricities. Simultaneously, a target light came on either straight ahead or displaced 20°, 35°, or 50° to the right. An additional six subjects (three female, mean age = 30) performed the experiment with their right eye patched and eccentric viewing to the left. Subjects moved their heads (with the field of view held constant relative to their gaze direction) until the visible area was centered on the target light. They then pressed the button again, whereupon the target light was extinguished. Subjects maintained fixation in this direction while motion down the corridor was visually simulated at a constant velocity of 1 or 2 m/s. Subjects pressed a button when they felt that the frame (which was fixed relative to them) had reached the previously viewed target's position. The visual field then expanded to the full extent of the display, the subject returned to looking straight ahead, and the target for the next trial was displayed. Each condition was presented twice for a total of 2 repetitions × 2 speeds × 4 eccentricities × 4 target distances = 64 trials per subject, presented in a random order.
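For concreteness, the factorial design can be written out as a short script. This is an illustrative sketch with hypothetical variable names, not the authors' experiment code:

```python
import itertools
import random

REPETITIONS = 2                        # each condition presented twice
SPEEDS = [1.0, 2.0]                    # simulated speed (m/s)
ECCENTRICITIES = [0, 20, 35, 50]       # gaze eccentricity (degrees)
TARGET_DISTANCES = [8, 12, 16, 20]     # target distance beyond the frame (m)

# Full factorial crossing: 2 x 2 x 4 x 4 = 64 trials per subject
trials = [
    {"speed": s, "eccentricity": e, "target_distance": d}
    for _ in range(REPETITIONS)
    for s, e, d in itertools.product(SPEEDS, ECCENTRICITIES, TARGET_DISTANCES)
]
random.shuffle(trials)                 # conditions presented in random order
assert len(trials) == 64
```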
Experiment 2: Distance estimation
Distance estimation trials used the same arrangement as for Experiment 1 except that the subject did not travel down the hallway or look to one side. Rather, when the target appeared, a fixation point was simulated at a fixed distance of 1.5 m from the subject (i.e., 30 cm beyond the screen; see Figure 1b). Subjects estimated verbally how many times further away the target was than the fixation point (e.g., “1.7 times as far”). Subsequently they estimated the distance to the fixation point in meters. All target distance estimates were converted into meters by multiplying each ratio by this estimate of the fixation-point distance. Each of the four experimental target distances was presented 10 times in a randomized order for a total of 10 repetitions × 4 distances = 40 estimates.
Data analysis
The simulated distance traveled that was regarded as passing through a given target distance (Experiment 1) and the distance estimates of each target (Experiment 2) were plotted as functions of the actual target distance. The simulated distance traveled was fitted by the output of Lappe et al.'s (2007) leaky spatial integrator model. This model has been found to reliably predict movement distance (Bergmann et al., 2011) and to be flexible in coping with different instruction sets. During motion, the model assumes that the subject maintains an ongoing estimate of the distance to the target (D). Let x be the distance the subject moves. Then, under the Lappe et al. (2007) leaky integrator model for a subject moving towards a previously presented target, the instantaneous change in D with respect to x is given by

\[
\frac{dD}{dx} = -\alpha D - k \qquad (1)
\]

where k is the sensory gain (k = 1 for an ideal observer) and α represents the leaky integrator constant or leak rate (α = 0 for an ideal observer). Equation 1 solves to¹

\[
D(x) = \left(D_0 + \frac{k}{\alpha}\right)e^{-\alpha x} - \frac{k}{\alpha} \qquad (2)
\]

where D₀ is the actual distance to the target before moving. From this we can solve for the distance traveled (x) at which the subject believed they had reached the target (D = 0) for a given target distance (D₀):

\[
x = \frac{1}{\alpha}\ln\!\left(1 + \frac{\alpha D_0}{k}\right) \qquad (3)
\]
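As a minimal numerical sketch of Equation 3 (the helper name is ours; the parameter values are close to the 0°-eccentricity fits reported in the Results), the travel distance at which a subject is predicted to report arrival can be computed directly:

```python
import math

def stop_distance(d0, k, alpha):
    """Travel distance at which the remaining-distance estimate D reaches
    zero (Equation 3); reduces to d0 / k for a leak-free integrator."""
    if alpha == 0:
        return d0 / k
    return math.log(1.0 + alpha * d0 / k) / alpha

# Parameter values close to the 0-degree-eccentricity fits reported in the Results
for d0 in (8, 12, 16, 20):
    x1 = stop_distance(d0, k=0.80, alpha=0.05)   # 1 m/s
    x2 = stop_distance(d0, k=0.53, alpha=0.05)   # 2 m/s
    print(f"target {d0:2d} m -> arrival reported at {x1:4.1f} m (1 m/s), {x2:4.1f} m (2 m/s)")
```

With these values, for example, the predicted button press for a 20-m target at 1 m/s comes after only about 16.2 m of simulated travel, reproducing the early-arrival reports described in the Results.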
Results
Variation of perceived travel distance with eccentric viewing
We recorded the simulated travel distance necessary for subjects to believe they had reached the position of a previously seen target at 8, 12, 16, or 20 m for two speeds (1 m/s and 2 m/s) and four angles of eccentric viewing (0°, 20°, 35°, and 50°), either to the left or right. We performed a repeated measures analysis of variance (ANOVA) with three within-group factors (speed, eccentricity, and distance) and one between-group factor (left/right). Violations of the sphericity assumption during ANOVA procedures were corrected by adjusting the degrees of freedom according to the Greenhouse-Geisser correction. There was no significant difference between viewing to the left and viewing to the right (F[1, 14] = 0.516, p = 0.485). Subjects' perception of how far they had traveled varied systematically with target distance (F[1.627, 22.776] = 130.255, p < 0.001), speed of simulated movement (F[1, 14] = 78.614, p < 0.001), and eccentric viewing angle (F[1.726, 24.169] = 6.291, p = 0.008). This is illustrated in Figure 2, which plots the mean distance at which subjects judged themselves to have reached each target as a function of the simulated target's distance for each speed and eccentricity of viewing. For the slow speed (1 m/s), observers judged that they had reached the target before they actually did (points below the perfect performance indicated by the diagonal dashed line). For the faster speed (2 m/s), they moved beyond the actual distance of the target for close distances (<15 m; data points above the perfect performance line), but for longer distances (>15 m), similar to their performance at the slower speed, they reported that they reached the target before they actually had.
Figure 2
 
The simulated distance subjects moved through to reach a target is plotted as a function of simulated target distance for the four eccentric viewing angles and two speeds used in this study. Viewing was monocular through the left eye when looking right and through the right eye when looking left. There were no differences attributable to the viewing side, and data from left and right viewing have been pooled. Standard error bars are shown. The dashed line indicates perfect performance. Plotted through the data are the best-fitting leaky spatial integrator curves (see Data analysis in the Methods section).
Each subject's responses for a given velocity and eccentricity of viewing were fit using the leaky spatial integrator model of Lappe et al. (2007) (see Methods) to obtain the sensory gains (k) and integration constants (α). The average r² of these fits was 0.88 ± 0.01. A repeated measures ANOVA was run on these values (2 directions × 2 speeds × 4 eccentricities). There was no effect of left vs. right viewing for either α (F[1, 14] = 1.753, p = 0.207) or k (F[1, 14] = 0.067, p = 0.799), and therefore the data were collapsed across left/right viewing directions. A subsequent repeated measures ANOVA showed that the integration constant (α) did not vary significantly with speed or eccentricity of viewing, and therefore the average value (0.05 ± 0.008) was used for subsequent fits of the model. The average r² of these fits (with α held constant across conditions) was 0.81 ± 0.02. The model's fit is shown plotted through the mean data in Figure 2. For the resulting sensory gains (k) for each subject and each condition, we performed an ANOVA with two within-group factors (speed and eccentricity of viewing). Violations of the sphericity assumption were corrected by adjusting the degrees of freedom according to the Greenhouse-Geisser correction. The sensory gain parameter varied with speed (F[1, 14] = 79.918, p < 0.001) and eccentricity of viewing (F[1.869, 26.171] = 4.045, p = 0.032). Figure 3 shows the pattern of variation of the sensory gain with viewing eccentricity for each speed.
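The two-stage fit described above can be sketched as follows (hypothetical response data; the authors do not name their fitting routine, so scipy.optimize.curve_fit stands in for whatever least-squares method was used):

```python
import numpy as np
from scipy.optimize import curve_fit

def stop_distance(d0, k, alpha):
    # Closed-form travel distance at which D = 0 (Equation 3 in Data analysis)
    return np.log(1.0 + alpha * d0 / k) / alpha

# Hypothetical button-press distances for one subject in one speed/eccentricity condition
d0 = np.array([8.0, 12.0, 16.0, 20.0])      # simulated target distances (m)
x_obs = np.array([8.3, 11.9, 15.0, 17.8])   # distances at which arrival was reported (m)

# Stage 1: fit k and alpha together for each condition
popt, _ = curve_fit(stop_distance, d0, x_obs, p0=[1.0, 0.05],
                    bounds=([0.05, 1e-4], [3.0, 1.0]))
k_free, a_free = popt

# Stage 2: alpha did not vary with speed or eccentricity, so hold it at the
# group average (0.05 in the Results) and refit the sensory gain alone
popt2, _ = curve_fit(lambda d, k: stop_distance(d, k, 0.05), d0, x_obs,
                     p0=[1.0], bounds=(0.05, 3.0))
print(f"free fit: k = {k_free:.2f}, alpha = {a_free:.3f}; constrained: k = {popt2[0]:.2f}")
```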
Figure 3
 
The sensory gain parameters (k) of the best-fit Lappe et al. (2007) model plotted as a function of eccentricity. There were no differences for left and right viewing, and the data have been pooled and mirrored. The sensory gain (k) showed a distinctive M function for both speeds of motion. The integration constant (α) has a value of 0.05. Standard error bars are plotted.
Judging distance-to-target
Subjects' judgments of the distance to the same targets that were used in the simulated motion experiment are plotted in Figure 4 as a function of the simulated target distances. The average sensory gain (perceived over actual distance) was 0.52 ± 0.12. As Figure 4 illustrates, there was considerable intersubject variability. 
Figure 4
 
Results for Experiment 2: judging distances. The horizontal axis indicates the simulated distance of the target, and the vertical axis is the perceived distance (see Data analysis in the Methods section). Plotted through the data for each subject is the best-fit output of the leaky spatial integrator model (see Data analysis in the Methods section). The integration constant has a value of zero resulting in straight line fits.
Comparison of judged distances with distance of travel
Figure 5 plots the sensory gains obtained from the moving-to-target experiment against the sensory gains for the judging-target-distance task for both speeds used in this study for each of our participants. There was only a weak negative correlation (r² = 0.13 and 0.33 for the two speeds, respectively).
Figure 5
 
Comparison of the sensory gains of perceived self-motion (Experiment 1: vertical axis), plotted as a function of the sensory gain of perceived distance (Experiment 2: horizontal axis). Data for 1 m/s are shown by black symbols and data for 2 m/s by red symbols. Correlations were very weak (r² = 0.13 and 0.33 for the two speeds, respectively) and negative.
Discussion
We have shown that when optic flow corresponding to self-motion down a corridor is viewed eccentrically, the perceived distance traveled is affected by the viewing direction for a given pattern of optic flow. For the corridor environment used here, optic flow produced a maximum perception of distance traveled when viewed about 20–30° eccentrically. Measurements of distance compression in the same equipment showed that it is not possible to predict the perceived distance traveled from the subjects' estimate of the target distance alone. 
Comparison with previous studies
The present study was conducted with constant velocity simulations. In general, subjects perceived themselves to have traveled further than the simulation intended and thus pressed the button to indicate that they had reached a given target before they actually had. This is consistent with earlier studies using the same instruction: report when you have reached the location of a previously viewed target (Frenz & Lappe, 2005; Lappe et al., 2007; Redlick et al., 2001). Redlick et al. (2001) reported no change in gain for constant velocity optic flow simulations over the range of 0.4–6.4 m/s. The present results, using a more sophisticated model to describe the perceived distances, show a significant difference between the two speeds tested (1 and 2 m/s) under comparable viewing conditions. 
Redlick et al. (2001) used a simple linear regression model in which sensory gain was assumed to be independent of distance traveled. Assessing distance traveled is part of path integration, in which the course of an extended movement is estimated by integrating short pieces of the movement to yield the total path (Maurer & Seguinot, 1995; Mittelstaedt & Mittelstaedt, 1973). The Lappe et al. (2007) model includes a leaky spatial integrator, which explicitly links sensory gain to distance traveled. Lappe et al. (2007), using a stereoscopic display with appropriate disparity cues, longer distances than were simulated here (up to 64 m), and movement facing the center of expansion, found a sensory gain of 0.98 and an integration constant of 0.008. The sensory gains reported in the present study for movement facing the center of expansion (0° eccentricity) are 0.80 ± 0.087 and 0.53 ± 0.07 for 1 m/s and 2 m/s, respectively, with an α value of 0.05 ± 0.008. A reanalysis of the constant velocity data from Redlick et al. (2001) (Figure 6) shows a good fit to the leaky integrator model with a constant α of 0.05 and a k of 0.9 for 1 m/s and 0.7 for 2 m/s. The most obvious difference between the studies is that the stereoscopic displays of Lappe et al. (2007) were associated with much smaller α values (0.008 compared to 0.05). When visual cues to self-motion are provided stereoscopically with appropriate disparity cues, the integrator seems to be charged more effectively and does not leak so much. This provides a quantitative description of the improvement in self-motion perception caused by adding stereoscopic information that has previously been reported (Butler, Campos, Bülthoff, & Smith, 2011; Palmisano, 2002; Zikovitz, Jenkin, & Harris, 2001) and for which corresponding neural processes have been described (Lappe & Grigo, 1999).
Figure 6
 
This figure shows a reanalysis of the data reported in Redlick et al. (2001). (A) The best-fit output of the Lappe et al. (2007) leaky spatial integrator model plotted through the data of Redlick et al. (2001; see Figure 2). (B) The k (sensory gain: filled black circles) and α (integration constant: filled red circles) values plotted as a function of velocity. Also shown are the looking-straight-ahead values from the present experiment (k: open circles; α: dashed red line). Standard error bars are shown; α errors are too small to show (typical value 0.02).
Few studies have looked at the effect of eccentric viewing on vection. Palmisano and colleagues (Kim & Palmisano, 2010; Palmisano & Kim, 2009) also found that eccentric gaze can increase vection strength, but attributed this to changes in eye-tracking patterns.
Processing optic flow components
Obtaining distance traveled from laminar and radial flow fields requires different computational mechanisms. Laminar flow is relatively homogeneous, so averaging over the field, with an allowance for distances, is a valid algorithm. However, such an approach is inappropriate for radial flow since, for example, a circular sampling region centered on the focus of expansion would generate a net flow of zero. More sophisticated processing is required for extracting distance traveled from radial flow. While Crowell and Banks (1993, 1996) found that heading direction was more accurately and sensitively estimated from radial than laminar flow, the flow components' relative contribution to the perception of distance traveled is unknown. Based on the physiological evidence for parsing optic flow into different subsystems (Albright, 1989; Duffy & Wurtz, 1991; Graziano et al., 1994; Saito et al., 1986) and the logical need to apply different algorithms for extracting distance traveled from radial and laminar flow, we modeled the sensory gain of self-motion perception induced by optic flow as being processed by independent radial and laminar channels.
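A toy computation illustrates the averaging problem just described (a sketch only, with hypothetical flow fields, not a model of neural processing): the mean flow vector of a circular sampling region centered on the focus of expansion is zero, while laminar flow survives averaging.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample points in a circular region centered on the focus of expansion (FOE)
pts = rng.uniform(-1.0, 1.0, size=(20000, 2))
pts = pts[np.linalg.norm(pts, axis=1) < 1.0]

radial = pts / np.linalg.norm(pts, axis=1, keepdims=True)  # unit flow away from the FOE
laminar = np.tile([1.0, 0.0], (len(pts), 1))               # uniform parallel flow

print("mean radial flow :", radial.mean(axis=0))   # ~[0, 0]: net flow cancels
print("mean laminar flow:", laminar.mean(axis=0))  # [1, 0]: preserved by averaging
```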
The pattern of optic flow generated by linear motion was first described by J. J. Gibson (1979) in his seminal book The Ecological Approach to Visual Perception. During linear movement the optic flow can be represented as a sphere of flow lines radiating from the point toward which the subject is traveling (the focus of expansion [FOE]) and converging on the point directly opposite it. The amount of radial flow, defined as lines of flow that are not parallel, is maximal at the FOE and falls off sinusoidally with eccentricity, reaching zero at 90° from the heading direction. Laminar flow, defined as parallel lines of flow, is maximal when viewed orthogonal to the direction of travel and also falls off sinusoidally, reaching zero at the FOE. This is illustrated in Figure 7. Channels tuned to these types of flow would therefore be expected to show activity varying with eccentricity from the FOE as shown in this diagram.
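In this framing, the two channel sensitivities can be written as quarter-phase sinusoids of the gaze eccentricity θ from the FOE. This is our notation, not the authors'; the amplitudes $A_r$ and $A_l$ are the free parameters fitted below:

\[
R(\theta) = A_r \cos\theta, \qquad L(\theta) = A_l \sin\theta, \qquad k(\theta) = A_r \cos\theta + A_l \sin\theta, \qquad 0 \le \theta \le 90^\circ.
\]

Because the sum of two quarter-phase sinusoids is itself a sinusoid, $k(\theta) = \sqrt{A_r^2 + A_l^2}\,\cos(\theta - \theta^\ast)$ with $\theta^\ast = \arctan(A_l/A_r)$; for the ratio $A_r = 1.7\,A_l$ reported below this places the peak near $\theta^\ast \approx 30°$, consistent with the maximum perceived travel distance at 20–30° eccentricity.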
Figure 7
 
Modeling the laminar and radial flow components of optic flow. Each component is sinusoidally related to the angle of viewing (horizontal axis). The dashed and solid red lines show the variation in magnitude with eccentricity of the radial and laminar components respectively (here shown with arbitrary magnitudes). The blue line shows the sum of the two components.
We modeled the variation of sensory gain (k; see Methods) as a function of eccentricity (Figure 3) as the linear sum of these two hypothetical channels, with only the amplitudes of the two channels as free variables. The output of the best-fit model is shown in Figure 8 plotted through the data. The ratio of the two channels is roughly constant over this range (radial being 1.7 × laminar), but the absolute values needed to fit the sensory gains decrease with increasing velocity, as shown in Figure 8. Importantly, the integration constants did not vary across conditions, and the only factor required to model the effects of retinal eccentricity was a gain change. The match of the nonlinear eccentricity function with the output of this model, especially the curious dip in perceived travel distance around 20–30°, supports the radial and laminar channels hypothesis. However, in a real-world environment such as the one simulated here, many parameters vary across the field.
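Since the model is linear in the two amplitudes, the fit reduces to ordinary least squares. A sketch follows, with hypothetical gain values standing in for the data of Figure 3:

```python
import numpy as np

theta = np.radians([0.0, 20.0, 35.0, 50.0])   # viewing eccentricities (in radians)
k_obs = np.array([0.80, 0.95, 1.00, 0.90])    # hypothetical sensory gains (cf. Figure 3)

# k(theta) = A_r*cos(theta) + A_l*sin(theta) is linear in the two amplitudes,
# so the best-fit amplitudes follow from ordinary least squares
X = np.column_stack([np.cos(theta), np.sin(theta)])
(A_r, A_l), *_ = np.linalg.lstsq(X, k_obs, rcond=None)
print(f"radial amplitude {A_r:.2f}, laminar amplitude {A_l:.2f}, ratio {A_r/A_l:.2f}")
```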
Figure 8
 
Fitting the output of a linear simulation of the radial and laminar flow components illustrated in Figure 7 to the variation in sensory gain with eccentricity. The only variables were the two amplitudes of the radial and laminar components. The best-fit functions are plotted through the data in polar (A) and linear (B) coordinates. The linear summation models with the peak gains used are shown in (C) for 1 m/s and (D) for 2 m/s. The ratio of the two components was constant (radial = 1.7 × laminar).
Other factors
Our simulation used a real-world scenario (motion down a corridor) and therefore induced variations across the field during simulated forward translation in the speed of motion, local distance, local gradient of depth, and the amount of local motion parallax. Going from the direction of travel outward, depth decreases, angular speed increases, and the range of both depths and speeds decreases. Might these variations offer an alternative explanation for the effects we report? The environment and speed of motion were constant across trials in our experiments, and therefore the optic flow was also constant. The only variation was where on the retina the stimulus fell and which part was actually visible during the movement trial. As the leaky integrator model proposed by Lappe et al. (2007) fitted the data with a fixed integration constant and only a parametric variation in gain, it appears to be a parsimonious description of visual path integration for peripheral as well as central or full-field stimuli. However, part of the eccentricity dependence of the gain may result from some of these secondary factors. Angular speed is unlikely to be the only factor since, in our experiment, gain decreased with increasing linear speed (which also increases angular speed), suggesting that gain should decrease with eccentricity rather than (initially) increase as we found. Further experiments are needed to investigate the effects of varying these parameters on the perceived distance of travel (e.g., Harris et al., 2012), although none is expected to show a variation pattern matching our data.
Compression of distance in virtual reality
An established phenomenon in virtual reality is that distances are often perceived as closer than they really are (Willemsen et al., 2008). Even in the real world there is considerable distance compression, especially for further distances (Creem-Regehr et al., 2005). This may contribute to subjects feeling that they had reached a given target before the simulated distance to it had been traversed. However, Figure 5 shows that, although compression of perceived space was found in our experimental setup, there was no significant correlation between perceived target distance and perceived travel distance. If anything, there was a slight tendency for subjects who had a smaller sensory gain when judging static distances to have a larger sensory gain when determining how far they felt they had moved. The distance accumulated by the path integration mechanism thus appears to be subject to a source of error unrelated to the phenomenon of distance compression.
Conclusion
We have shown that the gain for self-motion estimation depends on eccentricity and can be modeled by Lappe et al.'s (2007) leaky spatial integrator. The variation in gain is compatible with the parsing of optic flow used for self-motion estimation into two systems, responsible for processing laminar and radial flow respectively, that add linearly.
Acknowledgments
These experiments were made possible by the generous support of the Alexander von Humboldt Foundation TransCoop grant on “Self-motion perception in immersive environments.” Laurence Harris, Michael Jenkin, Bill Kapralos, and Rob Allison are funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada. We thank Markus Lappe for his comments on an earlier version of this manuscript and Adria Hoover for her help with the statistical analysis.
Commercial relationships: none. 
Corresponding author: Laurence R. Harris. 
Email: harris@yorku.ca 
Address: Department of Psychology, York University, Toronto, Ontario, Canada. 
References
Albright T. D. (1989). Centrifugal directional bias in the middle temporal visual area (MT) of the macaque. Visual Neuroscience, 2, 177–188.
Andersen G. J. Braunstein M. L. (1985). Induced self-motion in central vision. Journal of Experimental Psychology: Human Perception & Performance, 11, 122–132.
Andersen G. J. Dyre B. P. (1989). Spatial orientation from optic flow in the central visual field. Perception & Psychophysics, 45, 453–458.
Bergmann J. Krauss E. Munch A. Jungmann R. Oberfeld D. Hecht H. (2011). Locomotor and verbal distance judgments in action and vista space. Experimental Brain Research, 210, 13–23.
Butler J. S. Campos J. L. Bülthoff H. H. Smith S. T. (2011). The role of stereo vision in visual-vestibular integration. Seeing and Perceiving, 24, 453–470.
Creem-Regehr S. H. Willemsen P. Gooch A. A. Thompson W. B. (2005). The influence of restricted viewing conditions on egocentric distance perception: Implications for real and virtual indoor environments. Perception, 34, 191–204.
Crowell J. A. Banks M. S. (1993). Perceiving heading with different retinal regions and types of optic flow. Perception & Psychophysics, 53, 325–337.
Crowell J. A. Banks M. S. (1996). Ideal observer for heading judgments. Vision Research, 36, 471–490.
Duffy C. J. Wurtz R. H. (1991). Sensitivity of MST neurons to optic flow stimuli. I. A continuum of response selectivity to large-field stimuli. Journal of Neurophysiology, 65, 1329–1345.
Frenz H. Lappe M. (2005). Absolute travel distance from optic flow. Vision Research, 45, 1679–1692.
Gibson J. J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.
Graziano M. S. Andersen R. A. Snowden R. J. (1994). Tuning of MST neurons to spiral motions. Journal of Neuroscience, 14, 54–67.
Harris L. R. Herpers R. Jenkin M. Allison R. S. Jenkin H. Kapralos B. (2012). Optic flow and self-motion perception: The contribution of different parts of the field [Abstract]. Society for Neuroscience Abstracts, 672.14.
Kim J. Palmisano S. (2010). Eccentric gaze dynamics enhance vection in depth. Journal of Vision, 10(12):7, 1–11, http://www.journalofvision.org/content/10/12/7, doi:10.1167/10.12.7.
Koenderink J. J. (1990). Some theoretical aspects of optic flow. In Warren R. Wertheim A. H. (Eds.), Perception and control of self-motion (pp. 53–68). Hillsdale, NJ: Lawrence Erlbaum.
Lappe M. Grigo A. (1999). How stereovision interacts with optic flow perception: Neural mechanisms. Neural Networks, 12, 1325–1329.
Lappe M. Jenkin M. Harris L. R. (2007). Travel distance estimation from visual motion by leaky path integration. Experimental Brain Research, 180, 35–48.
Maurer R. Seguinot V. (1995). What is modelling for? A critical review of the models of path integration. Journal of Theoretical Biology, 175, 457–475.
Mittelstaedt H. Mittelstaedt M. L. (1973). Mechanismen der Orientierung ohne richtende Außenreize [Mechanisms of orientation without orienting external stimuli]. Fortschritte der Zoologie, 21, 46–58.
Palmisano S. (2002). Consistent stereoscopic information increases the perceived speed of vection in depth. Perception, 31, 463–480.
Palmisano S. Kim J. (2009). Effects of gaze on vection from jittering, oscillating, and purely radial optic flow. Attention, Perception, & Psychophysics, 71, 1842–1853.
Redlick F. P. Harris L. R. Jenkin M. R. (2001). Humans can use optic flow to estimate distance of travel. Vision Research, 41, 213–219.
Saito H. Yukie M. Tanaka K. Hikosaka K. Fukada Y. Iwai E. (1986). Integration of direction signals of image motion in the superior temporal sulcus of the macaque monkey. Journal of Neuroscience, 6, 145–157.
Srinivasan M. V. Zhang S. Bidwell N. (1997). Visually mediated odometry in honeybees. Journal of Experimental Biology, 200, 2513–2522.
Stoffregen T. A. (1985). Flow structure versus retinal location in the optical control of stance. Journal of Experimental Psychology: Human Perception & Performance, 11, 554–565.
Telford L. Frost B. J. (1993). Factors affecting the onset and magnitude of linear vection. Perception & Psychophysics, 53, 682–692.
Warren W. H. Kurtz K. J. (1992). The role of central and peripheral vision in perceiving the direction of self-motion. Perception & Psychophysics, 51, 443–454.
Willemsen P. Gooch A. A. Thompson W. B. Creem-Regehr S. H. (2008). Effects of stereo viewing conditions on distance perception in virtual environments. Presence, 17, 91–101.
Zikovitz D. C. Jenkin M. R. Harris L. R. (2001). Comparison of stereoscopic and non-stereoscopic optic flow displays. Journal of Vision, 1(3):317, http://www.journalofvision.org/content/1/3/317, doi:10.1167/1.3.317.
Footnotes
¹ In the original Lappe et al. (2007) paper there was a typographical error in the appendix in which this expansion was described. This has been corrected here.