Open Access
Article | July 2017
Using optic flow in the far peripheral field
Author Affiliations
  • Meaghan McManus
    Centre for Vision Research, York University, Toronto, ON, Canada
  • Sarah D'Amour
    Centre for Vision Research, York University, Toronto, ON, Canada
  • Laurence R. Harris
    Centre for Vision Research, York University, Toronto, ON, Canada
Journal of Vision July 2017, Vol.17, 3. doi:https://doi.org/10.1167/17.8.3
Abstract

Self-motion information can be used to update spatial memory of location through an estimate of a change in position. Viewing optic flow alone can create illusory self-motion or “vection.” Early studies suggested that peripheral vision is more effective than central vision in evoking vection, but controlling for retinal area and perceived distance suggests that all retinal areas may be equally effective. However, the contributions of the far periphery, beyond 90°, have been largely neglected. Using a large-field Edgeless Graphics Geometry display (EGG, Christie, Canada; field of view ±112°) and systematically blocking central (±20° to ±90°) or peripheral (viewing through tunnels of ±20° to ±40°) parts of the field, we compared the effectiveness of different retinal regions at evoking forwards linear vection. Fifteen participants indicated when they had reached the position of a previously presented target after visually simulated motion down a simulated corridor. The amount of simulated travel needed to match a given target distance was modeled with a leaky spatial integrator to estimate gains (perceived/actual distance) and a spatial decay factor. When optic flow was presented only in the far periphery (beyond 90°), gains were significantly higher than for the same motion presented full field or in only the central field, resulting in accurate performance in the range of speeds associated with normal walking. The increased effectiveness of optic flow in the peripheral field alone compared to full-field motion is discussed in terms of emerging neurophysiological studies that suggest brain areas dedicated to processing information from the far peripheral field.

Introduction
Moving through the world generates optic flow. If large-field visual motion is presented to a stationary observer, it can cause an illusion of self-motion referred to as vection. Vection can be divided into the sensation of self-rotation (circular vection) and that of linear self-motion (linear vection), with most studies concentrating on circular vection. The relative contribution of the central and peripheral parts of the visual field to vection has been controversial. Here, we extend linear self-motion studies to include visual motion stimulation of the very far periphery.
Early studies using circular vection seemed to suggest that peripheral vision was more effective than central vision in evoking self-motion (Brandt, Dichgans, & Koenig, 1973). However, this apparent difference in effectiveness may be related not to the retinal region stimulated but to the brain's assumption that objects in the periphery are generally further away and may be assumed to form the background of a scene, as opposed to objects in the center of the visual field, which are more likely to move relative to that background (Howard & Heckmann, 1989). When perceived depth was controlled using binocular disparity, the eccentricity of moving stimuli (at least within the region of binocular overlap) did not affect the strength of vection (Nakamura, 2008). Together with the fact that controlling for retinal area also abolishes the apparent dominance of the peripheral field in evoking vection (Nakamura & Shimojo, 1998), this suggests that the significant factor is not which part of the retina is stimulated but the observer's assumptions about the likely structure of the scene. All these experiments, however, were based on circular vection, in which a given retinal angular velocity in any part of the field would be expected to evoke the same sensation. This is not the case for linear vection, where angular retinal velocity varies dramatically across the visual field, even when constant-velocity linear motion of the observer is simulated.
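The dependence of retinal angular velocity on eccentricity during forward translation can be made concrete with a short numerical sketch. For pure forward motion at speed v, a stationary point at distance r and eccentricity θ from the heading direction moves across the retina at ω = v·sin(θ)/r. The corridor geometry and numbers below are illustrative assumptions only, not values from any of the studies cited.

```python
import math

def retinal_angular_speed(v, r, theta_deg):
    """Angular speed (deg/s) of a stationary point at distance r (m) and
    eccentricity theta (deg) from the heading, for forward speed v (m/s).
    For pure translation, omega = v * sin(theta) / r (rad/s)."""
    theta = math.radians(theta_deg)
    return math.degrees(v * math.sin(theta) / r)

# Hypothetical corridor: walls 5 m to either side, observer moving at 1 m/s.
# A wall point at eccentricity theta lies at distance r = half_width / sin(theta),
# so retinal speed grows steeply toward the far periphery.
v, half_width = 1.0, 5.0
for theta_deg in (10, 20, 45, 90):
    r = half_width / math.sin(math.radians(theta_deg))
    omega = retinal_angular_speed(v, r, theta_deg)
    print(f"eccentricity {theta_deg:3d} deg: {omega:5.2f} deg/s")
```

With these assumed values, flow at 90° eccentricity is roughly thirty times faster than at 10°, which is why constant-velocity forward motion produces such different retinal stimulation in the central and far-peripheral fields.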
Berthoz, Pavard, and Young (1975) demonstrated the significance of the periphery in generating forwards linear vection but did not systematically vary the region of the field stimulated. Delorme and Martin (1986) suggested that peripheral motion might be more effective at evoking sway: a physiological response to perceived forwards or backwards motion. A systematic study of the effect of central versus peripheral stimulation on linear vection (Tarita-Nistor, González, Spigelman, & Steinbach, 2006) used sideways linear vection. They were only able to present stimuli out to about 40° but concluded that peripheral and central stimulation were associated with similar vection onset latency and duration when area was controlled and a fixation cross was present. Interestingly, they found that a fixation point enhanced the perception of centrally induced linear vection but had no effect on peripherally induced linear vection. Although visual angular velocities do vary quite dramatically between the central and peripheral retinal regions during sideways vection, the variation is considerably more extreme for forwards linear vection. The only recent study to systematically investigate the role of the far periphery in linear vection was by Pretto, Ogier, Bülthoff, and Bresciani (2009). They found that although large-field visual motion did not enhance circular vection, motion beyond the central 60° evoked an overestimation of perceived speed of travel. Although their field extended ±115°, they did not look at the effect of stimulating the extreme visual periphery alone.
Intriguingly, recent neurophysiological recordings have found areas in the limbic cortex that seem specialized for processing the fast retinal motion (in terms of °/s) found in the far periphery (Rockland, 2012; Yu, Chaplin, Davies, Verma, & Rosa, 2012). These findings suggest that fast retinal motion in the far periphery may be processed differently from motion in the rest of the visual field.
Traditionally, vection has been measured using self-reports of its strength or speed, often relative to some standard (Palmisano, Allison, & Pekin, 2008), and of its duration or onset latency (Palmisano, Gillam, & Blackburn, 2000; Palmisano & Chan, 2004). A more quantitative approach is to measure the perceived travel distance evoked by a given stimulus configuration. Lappe and colleagues (Bremmer & Lappe, 1999; Lappe, Stiels, Frenz, & Loomis, 2011) and others (Harris et al., 2012; Redlick, Jenkin, & Harris, 2001) have demonstrated that people can use the sensation of motion provided by optic flow to update their sense of location in the environment, although they make systematic errors that depend on whether participants are told to reproduce a previously given distance (count up) or move to a previously presented target (count down). Their perceived motion in both cases can be well modeled using a leaky spatial integrator (Lappe, Jenkin, & Harris, 2007).
The present experiment measured perceived distance travelled in response to optic flow in the reduced central visual field (central ±20° visible), the intermediate central visual field (central ±40° visible), the close periphery (central ±20° blocked from view), the intermediate periphery (central ±40° blocked from view), or the far periphery (central ±90° blocked from view). Comparisons are made with the response to a display that covered the entire visual field out to ±112° in all directions. We hypothesized that visual motion in the far periphery would lead to a different estimate of distance traveled compared to that evoked by full-field motion, especially if it relies on specialized parts of the brain that specifically monitor peripheral motion. To equate conditions across central and peripheral estimates of perceived distance traveled, we used a fixation point, following Tarita-Nistor et al. (2006). Although optic flow can produce strong feelings of self-motion, we did not measure vection strength in this study; instead, we measured the perceived travel distance induced by optic flow.
Methods
Participants
The experiment measured the responses of 15 participants (mean age 24.1 ± 2.9 years; seven males, eight females). All were students at York University. The experiment was approved by the Ethics Board at York University and was run in accordance with the Declaration of Helsinki (1964).
Apparatus
All stimuli used in this project were projected onto a large-field Edgeless Graphics Geometry display (EGG, Christie, Ontario; Figure 1). The EGG is a large, curved screen which has a field of view that extends ±112° horizontally, and +89° and −83° vertically. The screen has an 18-megapixel resolution, with an optical resolution of 6 arcmin/OLP. The EGG is also equipped with the Christie Twist™ attachment that improves the visual output from multiple projectors by adjusting warping and performing edge-blending. 
Figure 1. The Christie large-field Edgeless Graphics Geometry display (EGG). Participants sat in a chair located in the middle of the curved screen.
Participants sat approximately 107 cm from the front of the screen and viewed the display binocularly. In front of the participants was a controller consisting of a knob that could be twisted around its vertical axis, tilted in any direction, or pulled up and down. The knob had two buttons, one on each side. These buttons were mapped to the up and down arrow keys of the keyboard using 3Dconnection™ software.
Visual stimuli
The experiment was programmed in Vizard (version 5.0) using Python. Participants viewed a simulated hallway with red brick walls and a gray tiled floor. The simulated width of the hallway was 10 m, with 30 floor tiles across the hallway in each row of tiles. The walls were 300 m high, such that the viewer could not see the top of the walls (see Figure 2). The walls and the floor extended 100 m in front of the viewer. At the start of each new trial the participant's position was reset to the same point in the virtual hallway. The central or peripheral parts of the hallway could be blocked off by placing either a virtual black circular disk or a black tunnel at some simulated distance in the hallway. In the center of the screen was a light gray fixation point (1.3°) that appeared when optic flow started and was present throughout the stimulus presentation.
Figure 2. Screen captures of the hallway. (a) Full field of view (FF), (b) central field of view ±20° (CF20), (c) central field of view ±40° (CF40), (d) peripheral field of view from ±20° out (PF20), (e) peripheral field of view from ±40° out (PF40), and (f) peripheral field of view from ±90° out (PF90). The target hedra is visible in the first three illustrations. In the experiment the target and the field restrictions were not present at the same time. Movies demonstrating each of the visual conditions are provided in the supplementary materials.
The participant's task was to note the distance to a target presented in the hallway and then press a button when they felt they had travelled through that distance. The target was a red 3D shape, referred to by Vizard as a “hedra,” that was 1 m³ in size (see Figure 2). The distance to the target was defined as being from the front of the participant (the location of the camera) to the center of the target shape. The participant was instructed to press the response button when they were “inside” the target shape. The projection was not in stereo and was not actively linked to the position of the participant's head. Therefore, all distance cues came from visual perspective.
Visual conditions
There were six visual conditions, which are illustrated in Figure 2: a full-field condition (FF, Figure 2a), two central-field viewing conditions (CF, Figure 2b and c), and three peripheral-field viewing conditions (PF, Figure 2d through f). In the full-field condition the whole screen was visible. The CF conditions consisted of a small tunnel in which everything but ±20° of the central visual field was occluded (Figure 2b), and a large tunnel condition in which everything but ±40° of the central visual field was occluded (Figure 2c). The three PF conditions comprised a small circle condition in which ±20° of the central visual field was occluded (Figure 2d), a large circle condition (±40°, Figure 2e), and a giant circle condition (±90°, Figure 2f).
Procedure
At the beginning of every trial, participants saw the full-field view of the hallway. The 3D hedra shape was projected at some distance (see below) down the hallway. Participants were instructed to pay attention to how far the target shape was from them and, when ready, to click the left button on the EGG controller. Immediately upon the click, the target shape disappeared and one of the visual conditions was selected and presented on the screen. The fixation point appeared, and the simulated hallway then moved towards the participant at a constant velocity (see below), creating a pattern of optic flow consistent with forwards linear motion. When participants felt that they had reached the location of the previously visible target (that is, that their head was inside the hedra), they clicked the right button on the controller. Once this button was clicked, the next trial started with the participant repositioned at their original position in the hallway, with the target at a new distance from them, and the entire screen visible.
Different speeds at which the participant moved down the corridor were used, along with different distances through which they were asked to move. The distances were 3, 5, 7, 10, and 12 m, travelled at 0.5, 1, or 1.5 m/s, under each of the six visual conditions shown in Figure 2. The trial order was randomized for each participant. Each combination was presented to each participant twice, resulting in 5 × 3 × 6 × 2 = 180 trials per participant.
Practice trials
Before the experiment, participants completed 10 practice trials that demonstrated all the visual conditions, distances, and speeds to be used. During the practice trials, when the participant clicked the start button, they experienced optic flow compatible with forwards self-motion and were able to see the target shape move towards them because, unlike in the actual experiment, it did not disappear. Once they were inside the shape, they hit the stop button. During the practice trials the experimenter stood behind the participant to monitor their behavior and provide feedback. For the first two trials the experimenter told participants exactly when to hit the stop button. For the remainder of the trials the participants determined when they should stop and feedback was given: if they stopped at an appropriate location, they were told “good”; otherwise they were reminded of the instructions. During the PF90 visual condition participants were asked whether they were able to sense the motion in the very far periphery. These practice trials, with the target continuously visible, also served to confirm calibration of the system. One person was not able to complete the experiment as they could not sense motion in the far periphery; they were stopped during the practice trials and were not included as a participant in the experiment.
Data analysis
Data were the simulated travel distance necessary for participants to believe that they had reached the position of the previously seen target—that is, the amount of optic flow necessary to simulate travel through a given perceived distance (determined by the position of the target). A longer travel distance is associated with less effective optic flow. The statistical analysis comprised repeated-measures analyses of variance (ANOVAs). Mauchly's test of sphericity was used, and violations of the sphericity assumption were corrected using the Greenhouse-Geisser correction. Alpha was set at p < 0.05, and post hoc multiple comparisons were made using Bonferroni corrections.
Results
Variation in required travel distance
The simulated travel distance necessary for participants to believe that they had reached the position of a previously seen target at 3, 5, 7, 10, and 12 m, for three speeds (0.5, 1, and 1.5 m/s) and six visual conditions (FF, CF20, CF40, PF20, PF40, and PF90), was recorded. A repeated-measures ANOVA revealed significant main effects of visual condition, F(2.53, 35.48) = 24.93, p < 0.001, \(\eta _p^2\) = 0.640; speed, F(1.17, 16.40) = 12.03, p = 0.002, \(\eta _p^2\) = 0.462; and distance, F(1.33, 18.56) = 121.08, p < 0.001, \(\eta _p^2\) = 0.897: travel distance depended on visual condition, speed, and distance. Significant interactions were revealed between visual condition and distance, F(6.72, 94.05) = 4.53, p < 0.001, \(\eta _p^2\) = 0.245, and between speed and distance, F(8, 112) = 6.05, p < 0.001, \(\eta _p^2\) = 0.302. Average distances and the results of the pairwise comparisons across visual conditions and speeds are shown in Figures 3 and 4. Pairwise comparisons showed that for all target distances, the travel distances needed under PF90 differed from those needed under all the other visual conditions (Figure 3).
Figure 3. The average distance participants needed to travel to reach each target distance for each visual condition listed in the legend beside the bar chart. Error bars are ±1 SE. For each distance, PF90 differed from all of the other visual conditions (p < 0.001), indicated by the asterisks. Data from all speeds have been pooled.
Figure 4. For each speed (0.5, 1.0, and 1.5 m/s), the average distance traveled for a given target distance is plotted. Error bars are ±1 SE. Significant differences between speeds for each distance are shown by horizontal bars (p < 0.05). All visual conditions have been pooled.
Modeling
When simulating motion with optic flow, the distance a person needs to travel in order to feel they have reached a remembered location can be modeled as the output of a leaky spatial integrator (Lappe et al., 2007). The model has been found to be flexible in coping with different instruction sets (e.g., Bergmann et al., 2011). The model predicts the distance at which the participant will stop when heading to a target distance based on a gain factor (k) and a decay factor. Gain refers to the ratio between the perceived distance traveled (i.e., the target distance) and the actual distance travelled. A low gain (less than one) indicates that the person had to travel further than the target distance in order to believe they had travelled through that distance. A high gain (greater than one) indicates that the participant required less optic flow in order to believe they had travelled through the distance. The spatial decay factor corresponds to the progressive leakage of perceived distance travelled as distance increases.
During motion, the model assumes that the participant maintains an ongoing estimate of the distance to the target (D). If x is the distance the participant actually moves, then the instantaneous change in D with respect to x is given by  
\begin{equation}\tag{1}{{dD\left( x \right)} \over {dx}} = - \left( {\alpha D + k} \right)\end{equation}
where k is the sensory gain (k = 1 for an ideal observer) and α represents the leaky integrator decay constant (α = 0 for an ideal observer). Equation 1 solves to
\begin{equation}\tag{2}D\left( x \right) = \left( {{D_0} + {k \over \alpha }} \right)\exp \left( { - \alpha x} \right) - {k \over \alpha }\end{equation}
where D0 is the actual distance to the target before moving. From this formula we can solve for the distance traveled (x) at which the subject believes they have reached the target (D = 0) for a given target distance (D0):  
\begin{equation}\tag{3}x\left( {{D_0}} \right) = {1 \over \alpha }\ln \left( {1 + {{\alpha {D_0}} \over k}} \right)\end{equation}
 
The gain factor for each speed and visual condition was found by fitting the function to the data for the five different target distances tested. The decay value was initially allowed to vary freely and was found not to vary with condition (t tests, all ps > 0.05). We therefore reanalyzed the data using the average decay value of 0.077, leaving the gain as the only free parameter. The output of the best-fit model is plotted through the data for each visual condition and speed in Figure 5.
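As an illustration of this fitting procedure (a sketch, not the authors' actual analysis code), Equation 3 can be evaluated directly and the gain recovered by a simple least-squares search with the decay fixed at the pooled value of 0.077. The synthetic data, grid resolution, and function names below are assumptions introduced for the example.

```python
import math

def travel_distance(d0, k, alpha):
    """Equation 3: distance x at which a leaky integrator with gain k and
    decay alpha reports arriving at a target initially d0 away."""
    if alpha == 0:
        return d0 / k  # ideal-integrator limit (no leak)
    return math.log(1 + alpha * d0 / k) / alpha

def fit_gain(targets, responses, alpha=0.077, k_grid=None):
    """Grid-search least-squares fit of the gain k, with the decay fixed
    at the pooled value reported in the paper; grid bounds and step are
    arbitrary choices for this sketch."""
    if k_grid is None:
        k_grid = [0.05 * i for i in range(1, 101)]  # k from 0.05 to 5.0
    def sse(k):
        return sum((travel_distance(d, k, alpha) - x) ** 2
                   for d, x in zip(targets, responses))
    return min(k_grid, key=sse)

# Synthetic example: generate responses from a known gain, then recover it.
targets = [3, 5, 7, 10, 12]   # target distances used in the study (m)
true_k = 0.8
responses = [travel_distance(d, true_k, 0.077) for d in targets]
print(fit_gain(targets, responses))  # recovers a gain of 0.8
```

A gain below one makes `travel_distance` exceed the target distance (overshoot), while the leak term pulls responses back toward the target at long distances, which is the qualitative pattern the fitted curves in Figure 5 capture.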
Figure 5. The data were modeled using the Lappe et al. (2007) leaky spatial integrator model. Fits of the model to the data are shown for each visual condition and speed. The dashed line corresponds to accurate performance.
The gain of this fitted model is plotted as a function of speed for each visual condition tested in Figure 6. When vision was available in the central field (including the full-field condition), gains were consistently below unity; that is, there was a tendency to overshoot the target (travel further than a given target distance). However, when motion was only present in the peripheral field, performance was accurate for simulated speeds around 1 m/s. Targets were undershot (participants felt they had moved further than they really had) at slower speeds. In all cases the peripheral-motion-only conditions were associated with higher gains than conditions in which central vision was available.
Figure 6. The gains for each visual condition plotted as a function of simulated linear velocity. Data from the furthest peripheral condition (PF90) are plotted with a green line. Error bars are ±1 SE. Also shown are a gray dashed line corresponding to accurate performance and a blue shaded region corresponding to the range of normal walking speeds (1.2 ± 0.15 m/s; from Sekiya & Nagasaki, 1998).
Discussion
This is the first systematic study of how people use optic flow compatible with forward motion for spatial updating when it is presented to different parts of the visual field extending out to the far periphery, including when visual motion is available only beyond 90°. Our results show that far peripheral motion is used differently from optic flow in other parts of the visual field. In general, participants significantly overshot the targets (Figure 3), indicating that visual motion alone was generally associated with a low gain. However, when motion was only present in the far periphery, the gains were closer to, or even exceeded, unity (Figure 6). Speed of motion was also an important factor: as speed increased, gain decreased, particularly for PF90. Optimum conditions (the most accurate perception of distance travelled) occurred with a speed of approximately 1 m/s visible only in the peripheral field (Figure 6).
The effect of retinal area
In the 1970s it was claimed that peripheral vision was more effective than the central field in evoking vection (Berthoz et al., 1975; Brandt et al., 1973). This fitted with emerging studies suggesting that cells in the vestibular system had visual input that integrated over large areas of the retina (Henn, Young, & Finley, 1974; Waespe & Henn, 1978). Nevertheless, later studies suggested that the effectiveness per unit area of retina was the same (Nakamura & Shimojo, 1998) and even that the effect of central versus peripheral visual movement could be nulled if the retinal areas were equated (Nakamura, 2001). It remains the case that there is much more peripheral retina than central retina and that the whole retina takes part in processing self-motion: the area within the monocular segment (out to around 30°) is only about one tenth of the retinal area from 30°–100° (Spector, 1990). The area of view in our experiments was not kept constant across the different viewing conditions, as this was not possible in the further peripheral conditions. However, if the area of the visual field were a factor, we should see a gradual change in gain as the area of view was reduced relative to the full-field condition; a change in gain should be observed not only in PF90 but also, to some degree, in the PF40 and CF40 conditions. This was not observed (Figures 3 and 6), which suggests that the differences in behavior observed in the PF90 condition are the result of differential processing of motion in the far periphery. The relative speed judgments of Pretto et al. (2009) are compatible with this conclusion, although they only went as far as blocking out the central 60° and made speed judgments relative to a standard of 5 m/s, much faster than natural walking speeds of about 1.2 m/s (Sekiya & Nagasaki, 1998).
Pretto et al. (2009) found that peripheral visual motion was consistently overestimated compared to full-field viewing, which is broadly in line with our observation of higher gains in perceived travel distance evoked by far peripheral visual motion. However, whereas they found differences in the perception of all peripheral motion, our results suggest that people may use motion information differently in the far periphery specifically. The higher gains in the far periphery also resemble findings by Harris et al. (2012), who compared radial and laminar flow; radial motion corresponds to the pattern of optic flow seen in the central visual field during forwards motion, while laminar motion would be seen in the periphery. Consistent with the present findings, they reported that gains were lowest for radial motion and higher for laminar motion.
PF90 is also the only condition in which the stimulus fell only on an area of retina that is never associated with binocular depth cues. All of the other conditions involved at least parts of the retina that would normally have stereoscopic depth cues. Our display did not provide stereoscopic cues, and therefore disparity cues were in conflict with monocular cues to depth in the central region of the field. This difference might have contributed to the greater gains we observed in the PF90 condition. Indeed, the lack of stereo cues may have contributed to the relatively low gains we observed in most of our visual conditions as stereo cues have been found to enhance vection (Palmisano, 1996), although not necessarily estimates of distance travelled (Frenz, Lappe, Kolesnik, & Bührmann, 2007). The PF90 condition also provided relatively little structure during the motion (the floor was barely visible, for example) which may also have contributed to the effect. 
When moving forward in a real-world environment the velocity at each point on the retina depends on the speed of self-motion, the eyes' orientation relative to the direction of heading, and the instantaneous distance of each object in the image. 
The results found for PF90 may be due to a miscalculation of linear velocity: an estimate that would normally be derived from processing the whole field is here forced to rely on low-resolution data from the far periphery alone. If a participant interprets the corridor as further from them (and thus wider), this would increase the calculated speed of motion; if they believed the hallway to be narrower, the optic flow would be interpreted as produced by less linear motion. Thus we postulate that a possible reason for the increased gain in the PF90 condition could be a misinterpretation of the distance to the wall. This hypothesis is in line with recent findings by Ott, Pohl, Halfmann, Hardiess, and Mallot (2016), who found that in order to keep perceived motion constant, participants had to decrease their motion as a hallway narrowed and increase their motion as it widened. However, at the beginning of every trial participants saw a full-field view of the hallway, meaning that participants in all conditions should have had the same mental representation of the hallway and the same understanding of its size. A misinterpretation of depth during the optic flow phase would suggest that participants do not use stored depth information about an environment when determining distance traveled.
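The depth-misinterpretation argument can be illustrated numerically: for a fixed retinal angular speed, the linear speed consistent with that flow scales with the assumed distance to the surface producing it. The geometry (a wall point at 90° eccentricity) and all numeric values below are illustrative assumptions, not the authors' model.

```python
import math

def implied_linear_speed(omega_deg_s, assumed_r):
    """Linear speed (m/s) consistent with a retinal angular speed omega
    (deg/s) for a surface point at 90 deg eccentricity assumed to lie at
    distance assumed_r (m): v = omega * r (with omega in rad/s)."""
    return math.radians(omega_deg_s) * assumed_r

# Flow actually produced by moving at 1 m/s past a wall 5 m away:
omega = math.degrees(1.0 / 5.0)  # about 11.5 deg/s at 90 deg eccentricity
print(implied_linear_speed(omega, 5.0))   # correct wall distance: 1.0 m/s
print(implied_linear_speed(omega, 10.0))  # wall assumed twice as far: 2.0 m/s
print(implied_linear_speed(omega, 2.5))   # wall assumed half as far: 0.5 m/s
```

Under these assumptions, assuming the corridor wall to be further away doubles the self-motion speed implied by the same peripheral flow, which would raise the gain exactly as postulated for PF90.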
Effect of speed
The speed of travel affected the simulated travel distance required for a participant to reach a given target distance (Figure 4). Participants tended to overshoot the target distance more as speed increased, corresponding to a decrease in gain (Figure 6). This finding is in line with previous work (Harris et al., 2012; Redlick et al., 2001), which also found that slower speeds led to higher gains, but it conflicts with findings by Lappe et al. (2007), who found no effect of speed beyond about 1 m/s. Figure 6 shows that the most accurate estimates of distance travelled were obtained when vision was available only in the very far periphery at speeds around 1 m/s. This corresponds closely to the speed of normal human walking, around 1.2 m/s (Sekiya & Nagasaki, 1998), and suggests an important role for the periphery in guiding navigation, particularly while walking with obscured or degraded central-field information: walking through a crowd, walking while looking at a cellphone, or walking down a long hallway, for example. However, gains dropped to less than 0.5 during full-field stimulation. Why did accurate performance not persist when both the central and peripheral fields were stimulated? 
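The leaky spatial integrator used to derive these gains can be sketched in a simplified form. In one common reading of the Lappe et al. (2007) model, the perceived distance-to-go D decays with physical travel x as dD/dx = −k − αD (gain k, spatial leak α), so a participant starting with a perceived target distance D0 stops after x = ln(1 + αD0/k)/α of simulated travel. The code below is a minimal sketch under these assumptions, not the authors' fitting procedure:

```python
import math

def travel_to_target(d0, gain, alpha):
    """Simulated travel (m) needed to null a perceived distance-to-go d0.

    Assumes the leaky spatial integrator dD/dx = -gain - alpha * D,
    whose solution reaches D = 0 at x = ln(1 + alpha*d0/gain) / alpha.
    With no leak (alpha = 0) this reduces to x = d0 / gain.
    """
    if alpha == 0:
        return d0 / gain
    return math.log(1 + alpha * d0 / gain) / alpha
```

With a gain of 0.5 and no leak, a 6 m target requires 12 m of simulated travel, the kind of overshoot seen here under full-field stimulation; a gain near 1, as in PF90 at walking speeds, yields near-accurate stopping.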
Effect of region of field stimulated: Center versus periphery
Experiments that have looked at the visual consequences of forwards linear self-motion have generally been limited by their screen size and thus unable to investigate the far periphery. Many experiments probe the periphery only as far as ±30° (Palmisano et al., 2000; Palmisano & Chan, 2004; Redlick et al., 2001; Turano, Yu, Hao, & Hicks, 2005) and some to ±45° (Bremmer & Lappe, 1999; Seya, Shinoda, & Nakaura, 2015), whereas the human visual field actually extends to around ±100° (Spector, 1990). When visual flow is presented only peripherally, perceived speed is systematically overestimated compared to a full-field stimulus (Pretto et al., 2009). In the current study, motion visible only in the far periphery was likewise associated with larger gains than a full-field stimulus. We suggest that this may reflect a reciprocal inhibitory connection between the vestibular and visual systems (Brandt, Bartenstein, Janek, & Dieterich, 1998) that may have perceptual consequences (Hogendoorn, Verstraten, MacDougall, & Alais, 2017). Certainly the availability of peripheral vision contributes to a sense of immersion and presence in a virtual scene (Lin, Duh, Parker, Abi-Rached, & Furness, 2002), and the visual fields of vestibular nucleus cells extend over the entire hemifield (Henn et al., 1974). We therefore propose that peripheral retinal flow may be treated as vestibular in nature and may be actively inhibited and overridden by optic flow in the center of the field. This would explain why gains were lower in both the full-field (central and peripheral motion) and central-only conditions. 
Neurological underpinnings
Palmisano, Allison, Schira, and Barry (2015) provide an overview of brain imaging studies on visual self-motion perception. Debate around whether or not there is vestibular suppression or coactivation while processing multisensory cues to self-motion is still ongoing (Brandt et al., 1998; Nishiike et al., 2002). Recent findings suggest the nature of the interaction might be dependent on the motion profile: constant velocity (as used in these studies) might lead to a suppression of vestibular areas whereas acceleration might lead to coactivation (Palmisano, Allison, Kim, & Bonato, 2011). 
The processing of visual motion may also depend on the location of that motion in the visual field. Recent neurological findings suggest that area prostriata is specifically implicated in processing visual input from the far periphery (Rockland, 2012; Yu et al., 2012). Prostriata is an area of visual cortex considered part of the retrosplenial limbic cortex, which is implicated in memory and navigation (Vann, Aggleton, & Maguire, 2009). Most neurons in prostriata have receptive fields centered more than 50° from the fovea and show a preference for faster speeds (Rockland, 2012). During linear motion, retinal velocities are highest in the peripheral field. This area might therefore be involved in processing the high angular velocities found in the peripheral field during forwards linear motion, and might account for the different effects found when only the periphery is stimulated. 
Conclusions
Participants were able to use the visual motion provided to stop at preinstructed distances. Their systematic errors revealed that the fastest self-motion was evoked when the stimulus fell only in the far periphery. This may be related to an inhibitory effect of optic flow in the central visual field on the peripheral field, or may reflect the stimulation of a specialized brain area dedicated to processing motion in the far periphery. If only far peripheral information is available, it may be sensible to overestimate the movement and thus err on the side of caution. 
Acknowledgments
LRH was supported by Discovery Grant 46271-2015 from the Natural Sciences and Engineering Research Council of Canada. MM was partially supported by OGS and by an NSERC CREATE on the Brain in Action. The Christie Edgeless Graphics display was made possible by funding from the Canadian Foundation for Innovation (project #30859). 
Commercial relationships: none. 
Corresponding author: Meaghan McManus. 
Address: Department of Psychology, York University, Toronto, ON, Canada. 
References
Bergmann, J., Krauss, E., Munch, A., Jungmann, R., Oberfeld, D., & Hecht, H. (2011). Locomotor and verbal distance judgments in action and vista space. Experimental Brain Research, 210 (1), 13–23.
Berthoz, A., Pavard, B., & Young, L. R. (1975). Perception of linear horizontal self-motion induced by peripheral vision (linearvection): Basic characteristics and visual-vestibular interactions. Experimental Brain Research, 23, 471–489.
Brandt, T., Bartenstein, P., Janek, A., & Dieterich, M. (1998). Reciprocal inhibitory visual-vestibular interaction. Visual motion stimulation deactivates the parieto-insular vestibular cortex. Brain: A Journal of Neurology, 121 (9), 1749–58.
Brandt, T., Dichgans, J. M., & Koenig, E. (1973). Differential effects of central versus peripheral vision on egocentric and exocentric motion perception. Experimental Brain Research, 16, 476–491.
Bremmer, F., & Lappe, M. (1999). The use of optical velocities for distance discrimination and reproduction during visually simulated self motion. Experimental Brain Research, 127, 33–42.
Delorme, A., & Martin, C. (1986). Roles of retinal periphery and depth periphery in linear vection and visual control of standing in humans. Canadian Journal of Psychology/Revue Canadienne de Psychologie, 40 (2), 176–187, doi.org/10.1037/h0080091.
Frenz, H., Lappe, M., Kolesnik, M., & Bührmann, T. (2007) Estimation of travel distance from visual motion in virtual environments. ACM Transactions on Applied Perception, 4, 3.
Harris, L. R., Herpers, R., Jenkin, M., Allison, R. S., Jenkin, H., Kapralos, B.,… Felsner, S. (2012). The relative contributions of radial and laminar optic flow to the perception of linear self-motion. Journal of Vision, 12 (10): 7, 1–10, doi:10.1167/12.10.7. [PubMed] [Article]
Henn, V., Young, L. R., & Finley, C. (1974). Vestibular nucleus units in alert monkeys are also influenced by moving visual fields. Brain Research, 71, 144–149.
Hogendoorn, H., Verstraten, F. A. J., MacDougall, H., & Alais, D. (2017). Vestibular signals of self-motion modulate global motion perception. Vision Research, 130, 22–30, doi.org/10.1016/j.visres.2016.11.002.
Howard, I. P., & Heckmann, T. (1989). Circular vection as a function of the relative sizes, distances, and positions of two competing visual displays. Perception, 18 (5), 657–665, doi.org/10.1068/p180657.
Lappe, M., Jenkin, M., & Harris, L. R. (2007). Travel distance estimation from visual motion by leaky path integration. Experimental Brain Research, 180 (1), 35–48, doi.org/10.1007/s00221-006-0835-6.
Lappe, M., Stiels, M., Frenz, H., & Loomis, J. M. (2011). Keeping track of the distance from home by leaky integration along veering paths. Experimental Brain Research, 212 (1), 81–89, doi.org/10.1007/s00221-011-2696-x.
Lin, J. J.-W., Duh, H. B. L., Parker, D. E., Abi-Rached, H., & Furness, T. A. (2002). Effects of field of view on presence, enjoyment, memory, and simulator sickness in a virtual environment. In Proceedings IEEE Virtual Reality 2002 (pp. 164–171). IEEE Computer Society, doi.org/10.1109/VR.2002.996519.
Nakamura, S. (2001). The perception of self-motion induced by central and peripheral visual stimuli moving in opposite directions. Japanese Psychological Research, 43 (3), 113–120, doi.org/10.1111/1468-5884.00167.
Nakamura, S. (2008). Effects of stimulus eccentricity on vection reevaluated with a binocularly defined depth. Japanese Psychological Research, 50 (2), 77–86, doi.org/10.1111/j.1468-5884.2008.00363.x.
Nakamura, S., & Shimojo, S. (1998). Stimulus size and eccentricity in visually induced perception of horizontally translational self-motion. Perceptual and Motor Skills, 87 (2), 659–663, doi.org/10.2466/pms.1998.87.2.659.
Nishiike, S., Nakagawa, S., Nakagawa, A., Uno, A., Tonoike, M., Takeda, N., & Kubo, T. (2002). Magnetic cortical responses evoked by visual linear forward acceleration. NeuroReport, 13 (14), 1805–1808, doi.org/10.1097/00001756-200210070-00023.
Ott, F., Pohl, L., Halfmann, M., Hardiess, G., & Mallot, H. A. (2016). The perception of ego-motion change in environments with varying depth: Interaction of stereo and optic flow. Journal of Vision, 16 (9): 4, 1–15, doi:10.1167/16.9.4. [PubMed] [Article]
Palmisano, S. (1996) Perceiving self-motion in depth: the role of stereoscopic motion and changing-size cues. Perception & Psychophysics, 58, 1168–1176.
Palmisano, S., Allison, R. S., Kim, J., & Bonato, F. (2011). Simulated viewpoint jitter shakes sensory conflict accounts of vection. Multisensory Research, 24, 173–200, doi.org/10.1163/187847511X570817.
Palmisano, S., Allison, R. S., & Pekin, F. (2008). Accelerating self-motion displays produce more compelling vection in depth. Perception, 37 (1), 22–33.
Palmisano, S., Allison, R. S., Schira, M. M., & Barry, R. J. (2015). Future challenges for vection research: Definitions, functional significance, measures, and neural bases. Frontiers in Psychology, 6, 1–15, doi.org/10.3389/fpsyg.2015.00193.
Palmisano, S., & Chan, A. Y. (2004). Jitter and size effects on vection are immune to experimental instructions and demands. Perception, 33 (8), 987–1000.
Palmisano, S., Gillam, B., & Blackburn, S. (2000). Global-perspective jitter improves vection in central vision. Perception, 29, 57–67.
Pretto, P., Ogier, M., Bülthoff, H. H., & Bresciani, J.-P. (2009). Influence of the size of the field of view on motion perception. Computers & Graphics, 33 (2), 139–146, doi.org/10.1016/j.cag.2009.01.003.
Redlick, F. P., Jenkin, M., & Harris, L. R. (2001). Humans can use optic flow to estimate distance of travel. Vision Research, 41 (2), 213–219.
Rockland, K. S. (2012). Visual system: Prostriata—A visual area off the beaten path. Current Biology, 22 (14), 571–573, doi.org/10.1016/j.cub.2012.05.030.
Sekiya, N., & Nagasaki, H. (1998). Reproducibility of the walking patterns of normal young adults: Test-retest reliability of the walk ratio (step-length/step-rate). Gait & Posture, 7, 225–227.
Seya, Y., Shinoda, H., & Nakaura, Y. (2015). Up-down asymmetry in vertical vection. Vision Research, 117, 16–24, doi.org/10.1016/j.visres.2015.10.013.
Spector, R. H. (1990). Visual fields. In Walker, H. K. Hall, W. D. & Hurst J. W. (Eds.) Clinical methods: The history, physical and laboratory examination. Boston, MA: Butterworths.
Tarita-Nistor, L., González, E. G., Spigelman, A. J., & Steinbach, M. J. (2006). Linear vection as a function of stimulus eccentricity, visual angle, and fixation. Journal of Vestibular Research: Equilibrium & Orientation, 16 (6), 265–272.
Turano, K. A., Yu, D., Hao, L., & Hicks, J. C. (2005). Optic-flow and egocentric-direction strategies in walking: Central versus peripheral visual field. Vision Research, 45 (25–26), 3117–3132, doi.org/10.1016/j.visres.2005.06.017.
Vann, S. D., Aggleton, J. P., & Maguire, E. A. (2009). What does the retrosplenial cortex do? Nature Reviews Neuroscience, 10 (11), 792–802, doi.org/10.1038/nrn2733.
Waespe, W., & Henn, V. (1978). Conflicting visual-vestibular stimulation and vestibular nucleus activity in alert monkeys. Experimental Brain Research, 33, 203–211.
Yu, H. H., Chaplin, T. A., Davies, A. J., Verma, R., & Rosa, M. G. P. (2012). A specialized area in limbic cortex for fast analysis of peripheral vision. Current Biology, 22 (14), 1351–1357, doi.org/10.1016/j.cub.2012.05.029.
Footnotes
1  In the original Lappe et al. (2007) paper there was a typographical error in the appendix in which this expansion was described. This has been corrected here.
Figure 1
 
The Christie large-field Edgeless Graphics Geometry display (EGG). Participants sat in a chair located in the middle of the curved screen.
Figure 2
 
Screen captures of the hallway. (a) full field of view (FF), (b) central field of view ±20° (CF20), (c) central field of view ±40° (CF40), (d) peripheral field of view from ±20° out (PF20), (e) peripheral field of view from ±40° out (PF40), and (f) peripheral field of view from ±90° out (PF90). The target hedra is visible in the first three illustrations. In the experiment the target and the field restrictions were not present at the same time. Movies demonstrating each of the visual conditions are provided in the supplementary materials.
Figure 3
 
The average distance participants needed to travel to reach each target distance for each visual condition listed in the legend on the side of the bar chart. Error bars are ± 1 SE. For each distance, PF90 differed from all of the other visual conditions (p < 0.001) indicated by the asterisks. Data from all speeds have been pooled.
Figure 4
 
For each speed, 0.5 m/s, 1.0 m/s, and 1.5 m/s, the average distance traveled for a given target distance is plotted. Error bars are ±1 SE. Significant differences between speeds for each distance are shown by horizontal bars (p < 0.05). All visual conditions have been pooled.
Figure 5
 
The data were modeled using the Lappe et al. (2007) leaky spatial integrator model. Fits of the model to the data are shown for each visual condition and speed. The dashed line corresponds to accurate performance.
Figure 6
 
The gains for each visual condition plotted as a function of simulated linear velocity. Data from the furthest peripheral condition (PF 90) are plotted with a green line. Errors bars are ±1 SE. Also shown is a gray dashed line corresponding to accurate performance, and a blue shaded region corresponding to the range of speeds of normal walking (1.2 m/s ± 0.15 m/s, from Sekiya & Nagasaki, 1998).