Open Access
Article  |   April 2018
No special treatment of independent object motion for heading perception
Author Affiliations
  • Li Li
    NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, PRC
    Department of Psychology, The University of Hong Kong, Hong Kong SAR
    [email protected]
  • Long Ni
    Department of Psychology, The University of Hong Kong, Hong Kong SAR
  • Markus Lappe
    Institute for Psychology and Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster, Germany
  • Diederick C. Niehorster
    Department of Psychology, The University of Hong Kong, Hong Kong SAR
  • Qi Sun
    Department of Psychology, The University of Hong Kong, Hong Kong SAR
Journal of Vision April 2018, Vol.18, 19. doi:https://doi.org/10.1167/18.4.19
Abstract

How do we judge the direction of self-motion (i.e., heading) in the presence of independent object motion? Previous studies that examined this question confounded the effects of a moving object's speed and its position on heading judgments, and did not examine whether the visual system uses salient nonmotion visual cues (such as color contrast and binocular disparity) to segment a moving object from global optic flow prior to heading estimation. The current study addressed these issues with both behavioral testing and computational modeling. Our results show that the visual system does not treat independent object motion separately for the perception of heading during self-motion. This is surprising because we all can segment a moving object from global optic flow and perceive its scene-relative motion independent of self-motion. Our findings support the claim that the perception of self-motion with independent object motion and the perception of object motion during self-motion are performed by different neural mechanisms.

Introduction
When we move in the world, the projected image of the world in our eyes transforms and generates a global pattern of motion termed optic flow (Gibson, 1950). When we travel on a straight path, the optic flow pattern is radial and originates from a fixed point, the focus of expansion (FOE), which indicates the direction of our self-motion (i.e., heading). Humans can use the FOE in optic flow to perceive heading accurately to within 1° to 2° (Van den Berg, 1992; Warren, Morris, & Kalish, 1988).
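For readers who prefer a concrete illustration, the short sketch below (our own toy example, not code from the study; all parameter values are arbitrary) simulates pure observer translation through a random dot cloud and plots the resulting image velocities. The flow is radial and vanishes at the projection of the translation direction, i.e., the FOE coincides with the heading.

```matlab
% Minimal illustration (not code from the study): radial optic flow under pure translation.
% A scene point P = (X, Y, Z) projects to the image point (x, y) = f*(X/Z, Y/Z).
% With observer translation T = (Tx, Ty, Tz) the scene moves at dP/dt = -T, so the
% image velocity is u = (-f*Tx + x*Tz)/Z and v = (-f*Ty + y*Tz)/Z, which vanishes at
% (x, y) = f*(Tx/Tz, Ty/Tz): the FOE marks the heading.
f   = 1;                          % focal length (arbitrary units)
T   = [0.1; 0; 1];                % heading about 5.7 deg to the right of straight ahead
N   = 200;                        % number of scene points
Z   = 0.5 + 1.5*rand(1, N);       % depths between 0.5 and 2 (arbitrary units)
X   = (2*rand(1, N) - 1) .* Z;    % spread the points across the image
Y   = (2*rand(1, N) - 1) .* Z;
x   = f*X./Z;   y = f*Y./Z;       % image positions
u   = (-f*T(1) + x*T(3))./Z;      % horizontal image velocities
v   = (-f*T(2) + y*T(3))./Z;      % vertical image velocities
foe = f*[T(1) T(2)]/T(3);         % focus of expansion = projected heading direction
quiver(x, y, u, v); hold on
plot(foe(1), foe(2), 'ro', 'MarkerFaceColor', 'r');
title('Radial flow; the red dot marks the FOE (heading)');
```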
We frequently move through an environment where other objects move as well, e.g., walking pedestrians or oncoming cars in the street. A moving object adds an extra local motion component to global optic flow generated during self-motion and disturbs its coherent pattern, potentially making heading perception more difficult. An effective way to perceive heading accurately in this case is to identify and segment the moving object from the global flow field and then base heading estimation on motion information in the background optic flow. Several studies have investigated whether the human visual system is capable of doing this and found that although observers could judge heading accurately when a moving object did not occlude the FOE in the background optic flow, small but significant biases in heading judgments arose when the object occluded or was in close proximity to the background FOE (Layton & Fajen, 2016a; Layton & Fajen, 2016b; Royden & Hildreth, 1996; Warren & Saunders, 1995). This finding suggests that the visual system does not always segment a moving object from the global flow field prior to heading estimation. 
The direction of the observed heading bias however differs in these studies. While some (Layton & Fajen, 2016a; Layton & Fajen, 2016b; Warren & Saunders, 1995) found a heading bias in the opposite direction to object lateral motion, others (Royden & Hildreth, 1996) reported a heading bias in the same direction as object lateral motion. The discrepancy in the reported direction of heading bias is attributed to differences in the presented object motion. Specifically, while an object moved laterally and simultaneously approached the observer in the former group of studies, it moved laterally but kept at a fixed distance from the observer in the latter study. 
Computational models
A number of computational models have attempted to explain the neural computation underlying heading perception in the presence of independent object motion. Although capable of qualitatively reproducing some psychophysical data, most models lack a biological basis (e.g., Hanada, 2005; Pauwels & Van Hulle, 2004; Raudies & Neumann, 2013; Saunders & Niehorster, 2010). Below we discuss heading models that are implemented within a neurophysiological framework and can replicate certain psychophysical data.
Warren and Saunders (1995) proposed a template-matching model to simulate the bias in heading judgments induced by a moving object in their study. The model is composed of two layers of units, representing neurons in primate visual areas MT and MST. Each unit in the MST layer is a heading template corresponding to a specific optic flow pattern. Heading direction is computed by pooling over all velocity vectors in the flow field detected by the units in the MT layer and then finding the unit in the MST layer that best matches the input flow field (see also Perrone & Stone, 1994). The Warren and Saunders model can successfully explain the observed heading bias in the opposite direction to the lateral motion of an approaching object (Layton & Fajen, 2016a; Layton & Fajen, 2016b; Warren & Saunders, 1995) but cannot explain the observed heading bias in the same direction as the lateral motion of a nonapproaching object (Royden & Hildreth, 1996).
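The following is a minimal algorithmic sketch in the spirit of such template matching (our illustration, not the published model; the function name and parameter choices are ours): the input flow is compared against a bank of radial templates, one per candidate heading, and the best-matching template is read out as the heading estimate.

```matlab
% Minimal template-matching sketch (illustrative; not the published model).
% Each candidate heading defines a radial flow template centred on its FOE; the
% match score is the summed cosine similarity between the input flow vectors and
% the template directions, and the winning template gives the heading estimate.
function headingHat = templateHeading(x, y, flowU, flowV, candidatesDeg)
    f = 1;                                        % focal length (arbitrary units)
    scores = zeros(size(candidatesDeg));
    for k = 1:numel(candidatesDeg)
        foeX = f * tand(candidatesDeg(k));        % FOE of the k-th heading template
        tU = x - foeX;  tV = y;                   % radial template directions at each dot
        tNorm = hypot(tU, tV) + eps;
        iNorm = hypot(flowU, flowV) + eps;
        scores(k) = sum((flowU.*tU + flowV.*tV) ./ (tNorm .* iNorm));
    end
    [~, best] = max(scores);                      % winner-take-all read-out
    headingHat = candidatesDeg(best);
end
```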
To explain the different heading biases induced by different types of object motion, Royden (2002) proposed a differential motion model, based on the properties of neurons in primate visual area MT, that subtracts velocity vectors in adjacent neighborhoods throughout the flow field to obtain difference velocity vectors. The model then computes the heading direction by pooling all difference velocity vectors in the flow field. Although this model works well in explaining the different heading biases induced by different types of object motion, neurophysiological research has cast doubt on whether differential motion neurons in MT feed into the dorsal part of MST (MSTd) where heading-sensitive neurons are located (Berezovskii & Born, 2000), thus challenging the neurophysiological basis of this model.
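A toy sketch of the differential-motion idea is given below (our illustration, not Royden's implementation). Because nearby image points share almost the same rotational flow, the difference of their velocities cancels rotation and points along a line through the FOE; the FOE, and hence heading, can then be recovered as the point closest to all such lines.

```matlab
% Toy sketch of the differential-motion idea (not Royden's 2002 implementation).
% For nearby image points, the rotational flow component is nearly identical, so
% the difference of their velocities points along a line through the FOE. The FOE
% (heading) is recovered as the point closest to all such lines.
function foeHat = differentialMotionFOE(x, y, u, v)
    x = x(:); y = y(:); u = u(:); v = v(:);
    n = numel(x);
    A = zeros(n, 2);  b = zeros(n, 1);
    for i = 1:n
        d2 = (x - x(i)).^2 + (y - y(i)).^2;
        d2(i) = inf;
        [~, j] = min(d2);                         % nearest neighbour of dot i in the image
        dU = u(i) - u(j);  dV = v(i) - v(j);      % difference vector (rotation cancels)
        nrm = hypot(dU, dV);
        if nrm < 1e-9, continue; end
        nx = -dV / nrm;  ny = dU / nrm;           % unit normal to the difference vector
        A(i, :) = [nx ny];                        % the FOE lies on the line through (x_i, y_i)
        b(i)    = nx*x(i) + ny*y(i);              %   along (dU, dV): n . foe = n . p_i
    end
    foeHat = A \ b;                               % least-squares intersection of the lines
end
```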
Layton, Mingolla, and Browning (2012) proposed another neurophysiologically plausible model. Specifically, their model shows that heading bias can be produced by competitive interactions between the flow template units representing neurons in the MSTd area. When an approaching object moves laterally to occlude the FOE in the background optic flow, the activities in the MSTd units merge and produce a peak activity (i.e., the heading estimate) in the opposite direction to the approaching object's lateral motion. In contrast, when a nonapproaching object moves laterally to occlude the FOE in the background optic flow, the activities in the MSTd units split into a bimodal distribution and produce a peak activity in the same direction as the nonapproaching object's lateral motion. This model thus also successfully reproduces different heading biases induced by different types of object motion reported by previous psychophysical studies, but only when the object moves to obscure the FOE in the background optic flow. Layton and Fajen (2016c) later added recurrent interactions to the model's MSTd units to simulate the temporal dynamics of heading perception in the presence of a moving object. 
Current study
Although previous studies consistently found that a moving object composed of random dots biased heading estimation when it moved to obscure the FOE in the background optic flow, the object actually obscured the background FOE only during a small part of each trial (Layton & Fajen, 2016a; Layton & Fajen, 2016b; Royden & Hildreth, 1996; Warren & Saunders, 1995). The fact that neither an opaque blank object nor an object composed of static dots influenced heading perception even when it moved and completely occluded the background FOE (Royden & Hildreth, 1996; Warren & Saunders, 1995) suggests that heading biases may not result from the obscuring of the background FOE. This possibility, however, has not been examined in previous studies.
In the presence of a moving object, an efficient way for observers to perceive heading accurately is to identify and segment the moving object from the global flow field and then base heading estimation on the motion information in the background optic flow. Although previous psychophysical studies show that the visual system does not use relative motion or dynamic occlusion cues in the display to segment independent object motion prior to heading perception (Layton & Fajen, 2016a; Layton & Fajen, 2016b; Royden & Hildreth, 1996; Warren & Saunders, 1995), it is possible that the visual system can still use salient nonmotion visual cues (such as color contrast and binocular disparity) to segment a moving object from the global flow field. This possibility, again, has not been examined. 
In the current study, we aimed to address the above standing issues. First, we examined whether the occlusion of the FOE in the background optic flow is a prerequisite for a moving object to elicit biases in heading judgments (Experiment 1). Second, we examined whether the visual system uses salient nonmotion visual cues, such as color contrast (Experiment 2) or binocular disparity (Experiment 3) to segment a moving object from the global flow field prior to heading estimation. Given the experimental data, we then performed computational modeling to examine whether and how the observed heading biases can be explained by a visual system that interprets flow fields in terms of self-motion in a rigid world. 
Experiment 1: Is the occlusion of the background FOE a prerequisite for a moving object to induce a heading bias?
This experiment addressed the first standing issue, i.e., whether a moving object must obscure the FOE in the background optic flow to bias heading perception. Previous studies that supported this claim used an object composed of random dots that moved at a constant speed across the background optic flow. The object's position relative to the background FOE thus changed continuously throughout a trial (Layton & Fajen, 2016a; Layton & Fajen, 2016b; Royden & Hildreth, 1996; Warren & Saunders, 1995). It is possible that the observed heading bias arose not because the moving object occluded the background FOE but because velocity vectors in optic flow become smaller in magnitude the closer they are to the FOE. When the object moved closer to the background FOE, its motion could become more dominant relative to the surrounding optic flow and thus bias the perceived location of the FOE (i.e., the perceived heading). If this is true, the same heading bias may arise when a moving object's speed is large enough compared with the background optic flow, even when the object is far from the background FOE. In this experiment, we tested this hypothesis by presenting different types of object motion within an opaque window that remained at a constant position on the screen throughout a trial, thereby separating the effects of a moving object's speed and its position relative to the background FOE on heading judgments.
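The geometry behind this reasoning can be made explicit with a small-angle approximation of the standard flow equations (our addition): for a background dot at angular eccentricity \(\theta\) from the FOE and depth \(Z\), during forward translation at speed \(T\),
\[
\dot{\theta} \;\approx\; \frac{T}{Z}\,\theta ,
\]
so flow speed falls off roughly linearly toward the FOE and vanishes at the FOE itself. An object moving at a fixed image speed therefore dominates the local flow more strongly the closer it is to the background FOE, or, equivalently, whenever its speed is large relative to the local flow speed.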
Participants
Eleven students (10 naïve as to the purpose of the study and one author; six males and five females) between the ages of 21 and 32 (average 26) at The University of Hong Kong participated in this experiment. All had normal or corrected-to-normal vision and provided informed consent. The study was approved by the Human Research Ethics Committee for Non-Clinical Faculties at The University of Hong Kong. We determined the sample size in each experiment based on the sample sizes used in previous research.
Visual stimuli and apparatus
The display (46° × 46°) simulated an observer translating at 1 m/s through a 3D cloud (depth range 0.5–2 m) consisting of 50 randomly positioned nonexpanding white dots (diameter 0.5°; luminance 122 cd/m2) in the presence of an opaque window (10° × 10°) within which nine randomly positioned nonexpanding white dots were shown. The projected dot density on the image screen was 0.02 dots/deg2 for the 3D cloud and 0.09 dots/deg2 for the opaque window. The background color was black (see Figure 1). The direction of the simulated observer translation (i.e., the FOE in the background optic flow) varied from −8° (left) to 8° (right) with respect to the middle of the screen (0°). The center of the opaque window was on the horizontal midline of the screen with a constant position offset of 5° or 10° from the FOE in the background optic flow. At the 5° offset, the (invisible) edge of the opaque window occluded the background FOE, whereas at the 10° offset, the background FOE remained visible throughout the trial.
Figure 1. Schematic illustrations of the visual stimuli in Experiment 1: (a) lateral motion condition, in which the dots within the opaque window moved laterally to the right on the display screen; (b) motion-in-depth condition, in which the dots moved laterally to the right while simultaneously moving toward the observer; (c) random motion condition, in which each dot moved in a random direction on the display screen; and (d) no motion condition, in which no dots were shown inside the opaque window. The direction of the simulated observer translation (i.e., the FOE of the background flow field) is indicated by the white circle (absent in the experiment) in the center, and the position offset of the opaque window is 5° to the right of the background FOE. The boundaries of the opaque window (absent in the experiment) are shown for illustration purposes only.
We tested four types of object motion conditions: (a) in the lateral motion condition, on each trial, the nine dots within the opaque window moved laterally (left or right) on the display screen at 4.6°/s for the 5° position offset and at 9.2°/s for the 10° offset to match the average background optic flow speed at the window's location (Figure 1a, Supplementary Movie S1); (b) in the motion-in-depth condition, the dots within the opaque window were placed within the same depth range as the 3D cloud (0.5–2.0 m). They moved laterally as in Condition 1 but simultaneously approached the observer at 1 m/s (Figure 1b, Supplementary Movie S2). Together with the 1 m/s forward motion of the observer, the total approaching speed of the dots within the opaque window was 2 m/s in the display; (c) in the random motion condition, each dot within the opaque window moved in a random direction on the display screen with its speed matched to that in Condition 1 (Figure 1c, Supplementary Movie S3); and (d) in the no motion condition, all dots within the opaque window were removed; thus, the window contained no motion signal (Figure 1d, Supplementary Movie S4). For all display conditions, on each trial, the opaque window remained at a constant position on the screen. Dots that moved outside the viewing frustum of the 3D cloud or outside the opaque window disappeared and were repositioned at a random position within it; thus, the number of visible dots in the 3D cloud and the opaque window, and their depths, remained constant throughout the trial.
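As a back-of-the-envelope check (our own computation, assuming the cloud dots are uniformly distributed in depth and using the small-angle approximation above), the matched lateral speeds correspond to the mean background flow speed at the two window locations:

```matlab
% Back-of-the-envelope check (our assumptions: dots uniform in depth, small angles):
% mean background flow speed at eccentricity theta is roughly theta * T * mean(1/Z).
T = 1;                                               % observer speed (m/s)
meanInvZ = integral(@(z) 1./z, 0.5, 2) / (2 - 0.5);  % mean inverse depth, ~0.92 1/m
for offsetDeg = [5 10]
    speedRad = offsetDeg*(pi/180) * T * meanInvZ;    % angular flow speed in rad/s
    fprintf('offset %2d deg: ~%.1f deg/s\n', offsetDeg, speedRad*180/pi);
end
% prints ~4.6 deg/s and ~9.2 deg/s, close to the speeds used for the window dots
```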
The displays were partly programmed in MATLAB (MathWorks, Natick, MA) using the Psychophysics Toolbox 3 and partly in C++ with OpenGL and were rendered using a Dell Studio XPS Desktop 435T/9000 with an NVIDIA GeForce GTX 560Ti graphics card running Windows 7. The displays (46° × 46°) were rear-projected on a large screen at 60 Hz with an Epson EB-G5750WU projector (native resolution: 1,920 × 1,200 pixels). Participants viewed the display monocularly with their head stabilized by a chin rest at the viewing distance of 56.5 cm. 
Procedure
On each trial, the first frame of the display appeared for 1 s and was followed by 1 s of motion. At the end of the motion, a white horizontal line appeared at the center of an otherwise blank display, and participants used the mouse to move a vertical bar, which appeared at a random position within 20° of the middle of the screen, along the horizontal line to indicate their perceived heading direction. As in previous studies (Layton & Fajen, 2016a; Layton & Fajen, 2016b; Royden & Hildreth, 1996; Warren & Saunders, 1995), participants were free to fixate where they wished during the trial. Because any eye movement would be accompanied by extraretinal oculomotor signals in this case, eye movements should have little effect on heading perception (see Li & Warren, 2000).
The experiment was composed of two blocks of randomly intermixed trials. One block contained two lateral motion conditions (left and right lateral motion, respectively) and the random motion and the no motion conditions with 240 trials (4 object motion conditions × 4 position offsets × 15 trials), and the other block contained two motion-in-depth conditions (with left and right lateral motion, respectively) with 120 trials (2 object motion conditions × 4 position offsets × 15 trials). The testing order of the two blocks was counterbalanced between participants. Participants received 20 practice trials with the display that contained only the background optic flow with no other object before the experiment commenced. No feedback was provided on any trial. The experiment took approximately 40 min to complete. 
Results
The angle between participants' perceived heading and the FOE in the background optic flow, defined as heading error, was measured. Given participants' symmetrical heading performance in the left and right object lateral motion conditions, we collapsed the heading judgment data across these two conditions for both the lateral motion and the motion-in-depth conditions. The solid lines in Figure 2a plot the mean heading error averaged across 11 participants as a function of position offset for the lateral motion and the motion-in-depth object motion conditions in Experiment 1. Positive heading errors indicate a heading bias in the same direction as object lateral motion, and negative heading errors indicate a heading bias in the opposite direction to object lateral motion. A 2 (object motion condition) × 4 (position offset) repeated-measures ANOVA revealed that the two object motion conditions induced a bias in heading judgments in opposite directions, F(1, 10) = 39, p ≪ 0.001, η2 = 0.8. While heading errors were in the same direction as object lateral motion for the lateral motion condition, Mean ± SE: 0.99° ± 0.21°, t(10) = 4.85, p = 0.001, Cohen's d = 1.46, heading errors were in the opposite direction to object lateral motion for the motion-in-depth condition, −0.87° ± 0.15°, t(10) = −5.95, p ≪ 0.001, Cohen's d = 1.8. Position offset had no effect on the induced heading bias, F(3, 30) = 0.22, p = 0.88, η2 = 0.02. 
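The collapsing across left and right object motion directions can be illustrated as follows (a sketch with made-up numbers, not the authors' analysis code): errors on leftward-motion trials are sign-flipped so that positive values always denote a bias in the direction of the object's lateral motion.

```matlab
% Illustrative sketch of the collapsing step (made-up numbers, not the authors' code).
perceivedDeg = [ 1.2  -0.4   0.9  -1.1 ];   % hypothetical per-trial heading reports (deg, + = right)
trueFOEDeg   = [ 0.5  -2.0   3.0  -1.5 ];   % simulated heading on those trials (deg)
objectDir    = [ 1     1    -1    -1   ];   % +1 = rightward, -1 = leftward object lateral motion
headingError = (perceivedDeg - trueFOEDeg) .* objectDir;   % + = bias with the object motion
meanError    = mean(headingError);
seError      = std(headingError) / sqrt(numel(headingError));
fprintf('collapsed heading error: %.2f +/- %.2f deg\n', meanError, seError);
```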
Figure 2. Experiment 1 data. (a) Mean heading errors averaged across participants as a function of position offset for the lateral motion (red lines) and the motion-in-depth conditions (green lines) in Experiment 1 (solid lines) and the control experiment (dashed lines). Positive heading errors indicate a heading bias in the same direction as object lateral motion and negative heading errors in the opposite direction to object lateral motion. (b) Mean heading errors for the random motion (blue line) and the no motion conditions (purple line) in Experiment 1. Positive heading errors indicate a heading bias to the right of the FOE in the background optic flow, and negative heading errors indicate a heading bias to the left of the background FOE. Error bars are SEs across 11 participants in Experiment 1 and across five participants in the control experiment.
The solid lines of Figure 2b plot the mean heading error averaged across 11 participants for the random motion and the no-motion conditions in Experiment 1. Positive heading errors indicate a heading bias to the right of the background FOE, and negative heading errors indicate a heading bias to the left of the background FOE. Two separate one-way repeated-measures ANOVAs revealed that while the perceived heading was biased toward the offset position for the random-motion condition in which the dots inside the opaque window moved in random directions, F(3, 30) = 7.55, p = 0.001, η2 = 0.43, no such effect was found for the no-motion condition in which no motion signals were shown within the opaque window, F(3, 30) = 2.41, p = 0.16, η2 = 0.46. 
Discussion
The results of this experiment are consistent with previous findings in showing that an approaching, laterally moving object composed of random dots biases heading perception in the opposite direction to its lateral motion (Layton & Fajen, 2016a; Layton & Fajen, 2016b; Warren & Saunders, 1995) and that a nonapproaching laterally moving object biases heading perception in the same direction as its lateral motion (Royden & Hildreth, 1996). The magnitude of the observed heading bias (about 1°) is also comparable to previous findings. It is not surprising that heading perception is unaffected by the presence of an opaque blank object even when it occludes the background FOE (Royden & Hildreth, 1996; Warren & Saunders, 1995). Interestingly, when an object composed of random dots provided random motion signals, the perceived heading was biased toward its location. This has not been reported before.
An important finding of this experiment is that a moving object does not have to occlude or be in close proximity to the FOE in the background optic flow to induce biases in heading judgments. This conflicts with the findings of previous studies (Layton & Fajen, 2016a; Layton & Fajen, 2016b; Royden & Hildreth, 1996; Warren & Saunders, 1995). One explanation is that previous studies confounded the effects of a moving object's speed and its position relative to the background FOE on heading perception by keeping the object's lateral motion speed constant as it moved to occlude the background FOE. We thus do not know whether the observed heading bias was induced by the occlusion of the background FOE (an object position effect) or by the object's motion speed becoming dominant relative to the background optic flow speed as the object moved close to the background FOE (an object motion effect). In this experiment, when we kept an opaque window at a fixed position away from the background FOE (such as the 10° offset position) and matched the speed of the lateral object motion displayed within the window to that of the background optic flow at its location, the same amount of heading bias arose as when we kept the window in a position that occluded the background FOE (such as the 5° offset position) and lowered the speed of the lateral object motion within the window to match the flow speed near the FOE. We hence separated the effects of object position and object motion on heading perception and found that the obscuring of the FOE in the background optic flow is not a prerequisite for a moving object to bias heading perception.
Effect of object lateral motion speed
To further confirm the above findings, we ran a control experiment with five participants (four males and one female, average age 24) in which the object lateral motion speed was set to 7°/s for both the 5° and 10° position offsets (the lateral motion speed was 4.6°/s for the 5° position offset and 9.2°/s for the 10° position offset in Experiment 1). We found that compared with the results of Experiment 1, the magnitude of the induced heading bias, averaged across the lateral motion and the motion-in-depth conditions, was significantly smaller at the 10° position offset, 0.23° ± 0.28° versus 0.78° ± 0.11° in Experiment 1, t(17) = −2.33, p = 0.035, Cohen's d = 1.16; but did not change much at the 5° offset, 1.05° ± 0.19° versus 1.07° ± 0.19° in Experiment 1, t(17) = −0.07, p = 0.95, Cohen's d = 0.04, possibly due to a ceiling effect at this position offset (see Figure 2a). These results provide further support for our conclusion that it is the speed of a moving object rather than the occlusion of the FOE in the background optic flow that determines the magnitude of the induced heading bias. A moving object away from the background FOE can induce a heading bias when its lateral motion speed is comparable to the background optic flow speed at its location.
Note that in Experiment 1, the two motion-in-depth conditions were run in a separate block from the two lateral motion conditions. To make sure that the different directions of heading biases observed in the motion-in-depth and lateral motion conditions are not due to the fact that they were run in separate blocks, we intermixed the trials from all object motion conditions and ran them in a random order in the following two experiments. 
Experiment 2: Does color contrast aid the segmentation of a moving object?
This experiment addressed the second standing issue, i.e., whether the visual system uses salient nonmotion visual cues to segment a moving object from the global flow field prior to heading estimation. The segmentation idea has been favored by many computational heading models that first estimate self-motion parameters (e.g., heading/translation and rotation) from the global flow field and then detect the inconsistent flow region presumed to contain a moving object. Once the moving object is detected, it is segmented from the global flow field, and accurate heading estimation is carried out by pooling the motion signals in the remaining flow field (e.g., Adiv, 1985; Pauwels & Van Hulle, 2004; Raudies & Neumann, 2013).
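A schematic sketch of this segment-then-estimate strategy is shown below (our illustration of the general idea, not a reimplementation of any cited model; for simplicity it assumes pure observer translation, so every background vector should point radially away from the FOE): heading is first estimated from all vectors, vectors poorly explained by that estimate are flagged as belonging to an independently moving object, and heading is re-estimated from the remainder.

```matlab
% Schematic segment-then-estimate sketch (illustrative only; assumes pure translation,
% so every background flow vector points radially away from the FOE).
function foeHat = segmentThenEstimate(x, y, u, v)
    x = x(:); y = y(:); u = u(:); v = v(:);
    keep = true(size(x));
    for pass = 1:2                                   % pass 1: all vectors; pass 2: inliers only
        foeHat = fitFOE(x(keep), y(keep), u(keep), v(keep));
        rx = x - foeHat(1);  ry = y - foeHat(2);     % radial direction from the current estimate
        cosAng = (u.*rx + v.*ry) ./ (hypot(u, v).*hypot(rx, ry) + eps);
        keep = cosAng > 0.95;                        % vectors consistent with a rigid world; rest = "object"
    end
end

function foe = fitFOE(x, y, u, v)
    % Least-squares intersection of the lines through each dot along its flow vector.
    nrm = hypot(u, v) + eps;
    nx = -v./nrm;  ny = u./nrm;                      % unit normals to the flow directions
    foe = [nx ny] \ (nx.*x + ny.*y);
end
```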
Psychophysical studies so far have shown that neither relative motion nor dynamic occlusion cues provided by a moving object and the background optic flow enables the visual system to segment the moving object from the global flow field to support accurate heading estimation (Layton & Fajen, 2016a, b; Royden & Hildreth, 1996; Warren & Saunders, 1995). However, other more salient nonmotion visual cues that may facilitate segmentation have not been examined in this context; e.g., a host of studies have shown that objects with different colors pop out from the background scene and capture one's attention automatically (Frey, Honey, & König, 2008; Turatto & Galfano, 2000), and can thus be easily detected and segmented from complex scenes (e.g., Nothdurft, 1993; Saarela & Landy, 2012). In this experiment, we thus examined whether adding color contrast between a moving object and the background optic flow enables the visual system to segment the moving object from the global flow field to support accurate heading perception. 
Participants
Eleven students (10 naïve as to the purpose of the study and one author; five males and six females) between the ages of 19 and 32 (average 23) at The University of Hong Kong participated in this experiment. Three of these participants also participated in Experiment 1. All had normal or corrected-to-normal vision and provided informed consent. The study was approved by the Human Research Ethics Committee for Non-Clinical Faculties at The University of Hong Kong. 
Visual stimuli and procedure
The display was identical to that in Experiment 1 except that the color of the dots within the opaque window was changed to yellow to make them stand out from the 3D cloud composed of white dots. The luminance of the yellow dots within the opaque window and the white dots in the 3D cloud was equated (122 cd/m2). 
Because no heading bias was observed in the no-motion condition in Experiment 1, Experiment 2 did not test this condition but tested the lateral motion, the motion-in-depth, and the random motion conditions. Accordingly, Experiment 2 had one block of 300 randomly intermixed trials: 5 object motion conditions (left & right lateral motion, left & right motion-in-depth, and random motion) × 4 position offsets × 15 trials. It took approximately 30 min to complete. The rest of the testing procedure was the same as in Experiment 1.
Results
Figure 3a plots the mean heading error averaged across 11 participants as a function of position offset for the lateral motion and the motion-in-depth object motion conditions in Experiment 2. Similar to Experiment 1, a 2 (object motion condition) × 4 (position offset) repeated-measures ANOVA revealed that the two object motion conditions induced heading biases in opposite directions, F(1, 10) = 19.5, p = 0.001, η2 = 0.66, and position offset had no effect on the induced heading bias, F(3, 30) = 0.16, p = 0.9, η2 = 0.02.
Figure 3. Experiment 2 data. (a) Mean heading errors averaged across participants as a function of position offset for the lateral motion (red line) and the motion-in-depth conditions (green line). Positive heading errors indicate a heading bias in the same direction as object lateral motion and negative heading errors in the opposite direction to object lateral motion. (b) Mean heading errors for the random motion condition (blue line). Positive heading errors indicate a heading bias to the right of the FOE in the background optic flow, and the negative heading errors indicate a heading bias to the left of the background FOE. Error bars are SEs across 11 participants.
Figure 3b plots the mean heading error averaged across 11 participants for the random motion condition in Experiment 2. Similar to Experiment 1, a one-way repeated-measures ANOVA revealed that the perceived heading was biased toward the offset position for this condition, F(3, 30) = 5.17, p = 0.005, η2 = 0.34.
Discussion
The results of Experiment 2 reinforced the findings of Experiment 1 in that a moving object away from the FOE in the background optic flow could induce the same amount of heading bias as a moving object in close proximity to the background FOE, and that the direction of the heading bias depended on whether the moving object approached the observer or not. Furthermore, the results from Experiments 1 and 2 both showed a heading bias toward the object offset position when the object contained random motion signals.
Effect of the color contrast segmentation cue
To examine whether adding color contrast between a moving object and the background optic flow affected heading judgments in Experiment 2, we compared the magnitude of the observed heading biases between Experiments 1 and 2 by conducting a 2 (Experiment) × 4 (position offset) mixed-design ANOVA on heading errors in each of the three object motion conditions. The direction and magnitude of heading biases were both comparable between the two experiments for all three object motion conditions, F(1, 20) = 0.104, p = 0.75, η2 = 0.005 for the lateral motion condition; F(1, 20) = 0.32, p = 0.58, η2 = 0.016 for the motion-in-depth condition; and F(1, 20) = 0.04, p = 0.84, η2 = 0.002 for the random motion condition, indicating that adding color contrast to a moving object in Experiment 2 did not help the visual system segment it from the global flow field to support accurate heading perception. 
Experiment 3: Does binocular disparity aid the segmentation of a moving object?
Previous psychophysical studies have shown that adding binocular disparity information in the display helps the visual system discount motion noise in the flow field (such as rotational components caused by eye and head movements) and thus improves the accuracy of heading perception (e.g., Lappe, 1996; Van den Berg & Brenner, 1994). It has also been shown that binocular disparity information alone facilitates the segmentation of local motion signals from a global motion scene (e.g., Britten, 1999; Poom & Börjesson, 2005). This experiment thus examined whether adding binocular disparity information in the display facilitates the segmentation of a moving object from the global flow field to enable accurate heading perception.
Participants
Fourteen students (all naïve as to the purpose of the study; five males and nine females) between the ages of 19 and 27 (average 22) at New York University Shanghai participated in this experiment. None had participated in Experiment 1 or 2. All had normal or corrected-to-normal vision and provided informed consent. The study was approved by the Institutional Review Board at New York University Shanghai.
Visual stimuli and apparatus
This experiment had two viewing conditions. In the stereo viewing condition, a stereo display was presented on an LCD monitor with a smaller screen size than that of the rear-projected screen used in Experiments 1 and 2. Accordingly, the display size and the related experimental parameters were scaled down. Specifically, the stereo display (33.2° × 33.2°, focal distance and viewing distance 56.5 cm) simulated an observer translating at 0.8 m/s through a 3D cloud (depth range 0.4–1 m) consisting of 50 randomly positioned nonexpanding white dots (diameter: 0.15°) in the presence of an opaque window (7° × 7°) within which nine randomly positioned nonexpanding white dots were shown. The direction of the simulated observer translation (i.e., the FOE in the background optic flow) varied from −5° (left) to 5° (right) with respect to the middle of the screen (0°). The center of the opaque window was on the horizontal midline of the screen with a constant position offset of 3.5° or 7° from the FOE in the background optic flow.
As in Experiment 2, we tested three object motion conditions in Experiment 3: (a) in the lateral motion condition, on each trial, the nine dots within the opaque window were placed on a plane at a distance of 2.1 m. They moved laterally (left or right) at 3.2°/s for the 3.5° position offset and at 6.4°/s for the 7° position offset to match the average background optic flow speed at the window's location; (b) in the motion-in-depth condition, the dots within the opaque window were placed within the depth range of 1.8–2.4 m. They moved laterally as in Condition 1 but simultaneously approached the observer at 0.8 m/s. Together with the 0.8 m/s forward motion of the observer, the total approaching speed of the dots within the opaque window was 1.6 m/s in the display; and (c) in the random motion condition, each dot within the opaque window moved in a random direction on a plane at a distance of 2.1 m with its speed matched to that in Condition 1. Note that for all three display conditions, the dots within the opaque window were placed well beyond the depth range of the 3D cloud (0.4–1 m); the binocular disparity cue in the stereo display could thus readily segment the moving object from the background optic flow generated by the 3D cloud.
As a control condition, we also ran a nonstereo display condition in which the display was rendered in nonstereo mode (i.e., the same cyclopean view was presented to both eyes through the shutter glasses) to remove the binocular disparity cue in the display. 
The display was programmed in MATLAB using the Psychophysics Toolbox 3 and was rendered using a Dell Studio XPS Desktop 435T/9000 with a Leadtek Quadro K2000 GDDR5 graphics card running Windows 7. The display (33.2° × 33.2°) was presented on an ASUS VG278H 27-in. LCD monitor (resolution: 1,920 × 1,080 pixels) at 120 Hz (60 Hz per eye). Participants viewed the display binocularly through a pair of nVidia shutter glasses with their head stabilized by a chinrest. 
Procedure
The testing procedure was identical to that of Experiment 2 except that on each trial, the first frame of the display appeared for 1.5 s to allow participants to have sufficient time to fuse the two views of the stereo display. In addition, this experiment contained two blocks of randomly intermixed trials for the two viewing conditions. The testing order of the two blocks was counterbalanced between participants. As in Experiment 2, each block consisted of 300 trials: 5 object motion conditions (left & right lateral motion, left & right motion-in-depth, and random motion) × 4 position offsets × 15 trials. The experiment took approximately 1 hr to complete. 
Results and discussion
The solid lines in Figure 4a plot the mean heading error averaged across 14 participants as a function of position offset for the lateral motion and the motion-in-depth object motion conditions with stereo displays. Again, a 2 (object motion condition) × 4 (position offset) repeated-measures ANOVA revealed that the two object motion conditions induced heading biases in opposite directions, F(1, 13) = 34.30, p ≪ 0.001, η2 = 0.73, and position offset had no effect on the induced heading bias, F(3, 39) = 0.85, p = 0.476, η2 = 0.06. 
Figure 4. Experiment 3 data. (a) Mean heading errors averaged across participants as a function of position offset for the lateral motion (red lines) and the motion-in-depth conditions (green lines) with stereo (solid lines) and nonstereo (dashed lines) displays. Positive heading errors indicate a heading bias in the same direction as object lateral motion and negative heading errors in the opposite direction to object lateral motion. (b) Mean heading errors for the random motion condition (blue lines). Positive heading errors indicate a heading bias to the right of the FOE in the background optic flow, and the negative heading errors indicate a heading bias to the left of the background FOE. Error bars are SEs across 14 participants.
The solid line of Figure 4b plots the mean heading error averaged across 14 participants for the random motion condition with stereo displays. Consistent with the data for this condition in Experiments 1 and 2, there was a trend for the perceived heading to be biased toward the offset position. A one-way repeated-measures ANOVA, however, did not reveal a significant effect of position offset on heading judgments, F(3, 39) = 1.44, p = 0.25, η2 = 0.10.
Consistent with the results from Experiments 1 and 2, the results from Experiment 3 with stereo displays also showed that while heading errors were in the same direction as object lateral motion for the lateral motion condition, Mean ± SE: 0.84° ± 0.13°, t(13) = 6.52, p ≪ 0.001, Cohen's d = 1.81, heading errors were in the opposite direction to object lateral motion for the motion-in-depth condition, −0.51° ± 0.13°, t(13) = −3.93, p = 0.002, Cohen's d = 1.09. This suggests that the visual system does not use binocular disparity information to segment a moving object from the global flow field to support accurate heading estimation. 
The data from the nonstereo display condition provide further evidence for the above claim. The dashed lines in Figure 4 plot the mean heading error averaged across 14 participants for the three object motion conditions with nonstereo displays. No significant difference in heading errors was found between stereo and nonstereo displays for the lateral motion and random motion conditions, F(1, 13) = 0.96, p = 0.35, η2 = 0.07 and F(1, 13) = 3.56, p = 0.086, η2 = 0.22, respectively. A significant difference in heading errors was found between stereo and nonstereo displays for the motion-in-depth condition, F(1, 13) = 8.34, p = 0.013, η2 = 0.39. Nevertheless, Newman-Keuls tests revealed that this main effect was due to a larger heading error found with stereo than nonstereo displays at only one (3.5°) out of four tested position offsets (p = 0.049; Figure 4a). 
Together the results of this experiment support the conclusion that binocular disparity information is not used by the visual system to segment a moving object from the global flow field to enable accurate heading perception. 
Computational modeling
The findings of Experiments 1 through 3 put three major constraints on computational models attempting to simulate the neural computation underlying heading perception in the presence of independent object motion: (a) a viable model needs to be able to explain the finding that a moving object away from the FOE in the background optic flow can still induce a similar amount of bias in heading judgments as a moving object in close proximity to the background FOE; (b) a viable model needs to consider the finding that a moving object can bias heading judgments despite salient visual cues (such as relative motion, dynamic occlusion, color contrast, and binocular disparity) that make it stand out from the global flow field, and (c) a viable model needs to be able to simulate the change in the direction of heading bias caused by approaching versus nonapproaching object motion as observed in Experiments 1 through 3 and previous studies (e.g., Royden & Hildreth, 1996). In addition to these three major constraints, a viable model should be neurophysiologically plausible. 
The above constraints challenge many existing models for heading perception in the presence of a moving object; e.g., the first constraint challenges the models that emphasize the close proximity of a moving object to the background FOE to induce heading biases (Layton et al., 2012; Layton & Fajen, 2016c; Royden, 2002; Warren & Saunders, 1995); the second constraint challenges the models that use visual cues to segment a moving object from the global flow field prior to heading estimation (e.g., Adiv, 1985; Pauwels & Van Hulle, 2004; Raudies & Neumann, 2013); and the third constraint challenges the models that cannot explain the change in the direction of heading bias caused by approaching versus nonapproaching object motion (e.g., Warren & Saunders, 1995). 
Because the pattern of data in our experiments resembles the reported biases in heading judgments for optic flow illusions (Duffy & Wurtz, 1993), which a long-standing, neurophysiologically plausible heading model, the population heading map model, has successfully accounted for (Lappe & Rauschecker, 1995a), we presented our visual stimuli to this model. The purpose of this model simulation was to examine whether and how the observed heading biases in our experiments can be explained by a visual system that interprets flow fields in terms of self-motion in a rigid world. The model uses a least-squares optimization approach that pools velocity signals over the entire optic flow field for heading estimation without segmenting any moving object. The least-squares approach was first proposed by Bruss and Horn (1983) and further developed by Heeger and Jepson (1990, 1992), who provided a parallelizable algorithm for the optimization. Lappe and Rauschecker (1993) developed the population heading map model by implementing this algorithm within a neurophysiological framework.
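In outline (our summary of the cited least-squares approach, using standard notation), the image velocity \(\mathbf{v}_i\) of the \(i\)-th dot at image position \(\mathbf{x}_i = (x, y)\) is modeled as arising from self-motion through a rigid world,
\[
\mathbf{v}_i = \frac{1}{Z_i}\,A(\mathbf{x}_i)\,\mathbf{T} + B(\mathbf{x}_i)\,\boldsymbol{\Omega},
\qquad
A(\mathbf{x}) = \begin{pmatrix} -f & 0 & x \\ 0 & -f & y \end{pmatrix},
\quad
B(\mathbf{x}) = \begin{pmatrix} xy/f & -(f + x^2/f) & y \\ f + y^2/f & -xy/f & -x \end{pmatrix},
\]
with translation \(\mathbf{T}\), rotation \(\boldsymbol{\Omega}\), and unknown inverse depths \(1/Z_i\). Heading is the translation direction that minimizes the residual
\[
R(\mathbf{T}) = \min_{\boldsymbol{\Omega},\,\{1/Z_i\}} \sum_i \Bigl\| \mathbf{v}_i - \tfrac{1}{Z_i}A(\mathbf{x}_i)\mathbf{T} - B(\mathbf{x}_i)\boldsymbol{\Omega} \Bigr\|^2 .
\]
Every measured vector, including those produced by a moving object, enters this sum; nothing is segmented out.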
Model structure
The population heading map model formulates a detailed specification of the physiological properties and connectivity of the neurons in the primate visual areas MT and MST along the visual motion pathway, and proposes a population heading map in area MST consistent with current neurophysiological findings (e.g., Bremmer, Kubischik, Pekel, Hoffmann, & Lappe, 2010; Gu, Fetsch, Adeyemo, DeAngelis, & Angelaki, 2010). Specifically, this model consists of two layers. Layer 1 contains units resembling the neurons in area MT that selectively respond to speed and direction of velocity vectors in the flow field. Subpopulations of these units with identical receptive field locations but different motion preferences are combined into hyper-columns. The distribution of activity in each hyper-column is used to encode a local motion signal at a retinal location. Layer 2 contains units resembling the neurons in area MST that form a population heading map in which subpopulations of units encode the likelihood of a particular heading direction matching the input flow field. This is realized through the weights of the synaptic connections between the two layers. 
The input to a Layer 2 unit is a weighted sum of the outputs of a random subset of hyper-columns in Layer 1 that encode speed and direction of optic flow at a set of retinal locations. The weights are predefined by implementing the least-squares optimization algorithm (Heeger & Jepson, 1990; Heeger & Jepson, 1992) such that the total input to each Layer 2 unit is proportional to the deviation of the input flow field from its preferred flow field. The weights are then chosen such that the match of the input to the preferred flow field of each Layer 2 unit becomes independent of additional rotation and the precise structure of the environment (see the mathematical details in Lappe & Rauschecker, 1993; Lappe & Rauschecker, 1995b). The best-matching heading is then chosen as the most active subpopulation in the heading map. 
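At the algorithmic level, the computation that these weights implement can be sketched as follows (our illustration of the least-squares residual read-out, following Heeger and Jepson, 1992; it is not the model's neural circuitry, and all parameter choices and function names are ours):

```matlab
% Algorithmic-level sketch of the least-squares heading read-out (our illustration,
% following the residual of Heeger & Jepson, 1992; not the model's neural circuitry).
% x, y: image positions of the dots; u, v: their image velocities;
% candidatesDeg: candidate heading directions along the horizontal meridian.
function [headingHat, activity] = populationHeading(x, y, u, v, candidatesDeg)
    f = 1;                                            % focal length (arbitrary units)
    x = x(:); y = y(:);
    n = numel(x);
    flow = reshape([u(:) v(:)]', [], 1);              % stacked as (u1 v1 u2 v2 ...)'
    residual = zeros(size(candidatesDeg));
    for k = 1:numel(candidatesDeg)
        T = [tand(candidatesDeg(k)); 0; 1];           % candidate translation direction
        C = zeros(2*n, n + 3);                        % flow fields consistent with this heading
        for i = 1:n
            A = [-f 0 x(i); 0 -f y(i)];               % translational flow matrix
            B = [ x(i)*y(i)/f, -(f + x(i)^2/f),  y(i);
                  f + y(i)^2/f, -x(i)*y(i)/f,   -x(i)];  % rotational flow matrix
            C(2*i-1:2*i, i)       = A*T;              % unknown inverse depth of dot i
            C(2*i-1:2*i, n+1:n+3) = B;                % unknown rotation shared by all dots
        end
        r = flow - C*(pinv(C)*flow);                  % best rigid-world fit for this heading
        residual(k) = sum(r.^2);
    end
    activity   = max(residual) - residual;            % higher activity = smaller residual
    [~, best]  = max(activity);
    headingHat = candidatesDeg(best);                 % most active candidate wins
end
```

Because the inverse depths and the rotation are free parameters of the fit, vectors contributed by a moving object are absorbed as well as possible rather than excluded, which is the property exploited in the simulations below.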
Note that this model assumes that any input flow field results from observer translation and rotation through a rigid environment. The model parameters in the simulation were taken from previous research and the properties of the visual stimuli used in the current study, as detailed below. There were no free parameters fit to the data. 
Simulation procedure
Because the visual stimuli used in our experiments were mirror-symmetric with respect to the middle of the screen and participants were asked to report their perceived heading along a horizontal line at the center of the screen, the population heading map in Layer 2 contained 30 subpopulations equally spaced between −11.5° (left) and 11.5° (right) along the horizontal meridian. Each subpopulation consisted of 20 MST units, each receiving inputs from 30 randomly selected hyper-columns of MT units in Layer 1. The number of hyper-columns in Layer 1 was matched to the geometry and density of the dots in the 3D cloud and the object in the visual stimuli of Experiments 1 and 2. The subpopulation in the heading map with the highest activity determined the final heading estimate.
The model can estimate heading under three types of assumptions about the observer's eye movements: (a) fixation on an object in the scene during locomotion, (b) fronto-parallel eye rotation, and (c) unrestricted 3D eye movement. Because participants in our experiments were allowed to move their eyes freely throughout a trial, we did not constrain the model to one of these three eye movement assumptions. Instead, for each experimental condition, we performed 100 simulation runs with each of the three eye movement scenarios and then averaged the simulation results across the 300 runs.
Results
As in the experiments, given the symmetrical performance in the left and right object lateral motion conditions, we collapsed the simulated heading errors across these two conditions. Figure 5a plots the mean simulated heading error averaged across 600 simulation runs as a function of position offset for the lateral motion and the motion-in-depth conditions along with the human data from Experiment 1. Consistent with the human data, the simulated heading bias was in the same direction as object lateral motion for the lateral motion condition but was in the opposite direction to object lateral motion for the motion-in-depth condition. The mean of the simulated heading error was within 1° of the mean of the human heading error (1.68° vs. 0.99° for the lateral motion condition and −1.28° vs. −0.87° for the motion-in-depth condition). Figure 5b plots the mean simulated heading error averaged across 300 simulation runs for the random motion condition along with the human data from Experiment 1. Consistent with the human data, the simulated heading bias was toward the offset position, and the mean of the simulated heading error was well within the range of the observed human heading error.
Figure 5. Model simulation data. (a) Mean heading errors as a function of position offset for the lateral motion (red) and the motion-in-depth conditions (green) from the model simulations (solid lines) along with the heading error data (mean ± 1 SE) from Experiment 1 (shaded areas). Positive heading errors indicate a heading bias in the same direction as object lateral motion and negative heading errors in the opposite direction to object lateral motion. Error bars are SEs across 600 simulation runs (some are smaller than the data points). (b) Mean heading errors for the random motion condition (blue line) along with the heading error data (mean ± 1 SE) from Experiment 1 (shaded areas). Positive heading errors indicate a heading bias to the right of the FOE in the background optic flow and the negative heading errors indicate a heading bias to the left of the background FOE. Error bars are SEs across 300 simulation runs.
Discussion
The model simulation results show that the population heading map model provides a coherent explanation for the observed heading biases in the current study by treating a moving object as part of the flow field and pooling the information over the entire flow field for an optimal (in a least-squares sense) estimation of heading while allowing for the possible occurrence of (eye) rotation. For a moving object with pure lateral motion, an optimal analysis of the entire flow field shows that the most consistent interpretation of the object motion is a small amount of rotation, which the model compensates for by adjusting the estimated heading in the same direction as object lateral motion. This is similar to the optic flow illusions in which lateral motion overlaps the entire radial flow field (Duffy & Wurtz, 1993; Lappe & Rauschecker, 1995a). In contrast, when a moving object undergoes lateral motion while simultaneously approaching the observer, the most consistent interpretation of the object motion is no rotation but a translation with an offset angle, which the model compensates for by adjusting the estimated heading in the opposite direction to object lateral motion. Finally, for an object that provides random motion signals, when the model combines the random motion vectors in the object with the background optic flow for an optimal estimation of heading, the result is a heading estimate displaced toward the object location.
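To see why pure lateral object motion is best absorbed as rotation (our gloss, using the rotational flow matrix given above): near the center of the image, a rotation \(\omega_y\) about the vertical axis adds nearly uniform horizontal flow,
\[
u_{\mathrm{rot}} = -\Bigl(f + \frac{x^2}{f}\Bigr)\omega_y \approx -f\,\omega_y,
\qquad
v_{\mathrm{rot}} = -\frac{xy}{f}\,\omega_y \approx 0 \quad \text{for } |x|, |y| \ll f ,
\]
which is exactly what a patch of dots translating sideways at a common image speed adds to the flow. Attributing such a patch to a small rotation is therefore the most rigid-world-consistent account of a nonapproaching, laterally moving object, whereas the expanding pattern of an approaching object is better matched by a translation with a shifted FOE.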
General discussion
The experiments in this paper examined how a moving object affects heading perception and whether the visual system segments object motion from the global flow field when salient nonmotion visual cues are available. By varying dot motion within an opaque window that remained in a constant position on the display screen, we disentangled the effects of object position and object motion on heading judgments that were confounded in previous studies (Layton & Fajen, 2016a; Layton & Fajen, 2016b; Royden & Hildreth, 1996; Warren & Saunders, 1995). We found that a moving object, regardless of its position relative to the background FOE, induced a small but consistent heading bias, and that the heading bias is in the same direction as object lateral motion when an object undergoes pure lateral motion and is in the opposite direction to object lateral motion when an object undergoes lateral motion while simultaneously approaching the observer. When an object contains random motion signals, the heading bias is toward the object location. 
One might think that the proper way for the visual system to deal with a moving object is to detect and segment it from the global flow field. Heading could then be accurately recovered from the remaining consistent flow vectors (e.g., Adiv, 1985; Pauwels & Van Hulle, 2004; Raudies & Neumann, 2013). However, the findings of our experiments show that the visual system does not segment and remove a moving object from the global flow field to support accurate heading perception. On the contrary, the visual system produces a bias in heading estimation that is well explained by pooling information from the entire flow field and computing the best matching heading without any segmentation. Even when additional salient visual nonmotion cues such as color contrast (Experiment 2) and binocular disparity (Experiment 3) are provided to help segmentation, heading biases remain unchanged. 
Using the entire flow field without any segmentation provides an efficient operating mode for the visual system to estimate heading in the presence of a moving object. This is because segmentation is a computationally complicated and time-consuming process while heading can be estimated with brief display durations as short as 40 ms (Bremmer, Churan, & Lappe, 2018). In natural situations, the optic flow field is typically large and contains much redundant information. A moving object typically covers only a small portion of the flow field. Simply pooling all motion information over the entire flow field provides not only a fast but also a reasonably reliable estimate of heading. In fact, the heading biases observed in our experiments (≤2°) as well as in previous studies are small and within the tolerable range of accuracy (2°–4°) for safe control of self-motion (Cutting, Springer, Braren, & Johnson, 1992). 
Computational models for heading perception in the presence of independent object motion
Previous computational models have focused on the proximity of a moving object to the FOE in the background optic flow and introduced weighting schemes to accommodate the relevant psychophysical data (Layton et al., 2012; Layton & Fajen, 2016c; Royden, 2002; Warren & Saunders, 1995). The findings of our experiments show that close proximity to the background FOE is not a prerequisite for a moving object to induce a bias in heading judgments and thus challenge the structure of these models. Our finding that an object containing random motion signals biases heading estimation toward its location might also pose problems for these models. While the ability of these models to simulate the heading bias observed in the random motion condition has not been investigated, the population heading map model provides a simple and coherent explanation for the heading biases observed in the current study as well as for many other psychophysical (e.g., Lappe, 1996; Lappe & Rauschecker, 1995a) and neurophysiological data on heading estimation (e.g., Lappe, Bremmer, Pekel, Thiele, & Hoffmann, 1996). The model simulation results confirm that the visual system gives no special treatment to a moving object for heading perception but rather pools motion signals from the entire flow field and computes the best matching heading without any segmentation. 
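For intuition about how such a pooled estimate might be read out from a population code, the toy fragment below (again reusing the helpers above) treats each candidate heading on the grid as a "heading cell" whose activity falls off with the residual of that heading's best flow fit, and decodes heading as the activity-weighted centroid of the map. This is only an assumed caricature of a population heading map read-out, not the network model of Lappe and Rauschecker (1993) itself; the grid and the gain parameter beta are arbitrary choices.

```python
def population_map_readout(points, flows, grid=np.linspace(-0.3, 0.3, 61), beta=50.0):
    """Toy population read-out (an assumed caricature, not the authors' network):
    one 'cell' per candidate heading, activity decreasing with fit residual,
    heading decoded as the activity-weighted centroid of the map."""
    errs = np.array([residual_for_heading(points, flows, np.array([tx, 0.0, 1.0]))
                     for tx in grid])
    act = np.exp(-beta * (errs - errs.min()) / (errs.max() - errs.min() + 1e-12))
    return float(np.sum(grid * act) / np.sum(act))

print("population map read-out:      heading x =", population_map_readout(pts, flows_obj))
```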
Because the population heading map model essentially computes an optimal set of self-motion parameters (i.e., heading/translation and rotation) for a given flow field, it is conceivable that other heading models that compute a full set of self-motion parameters without prior segmentation can also predict the observed heading biases. In fact, one such model, the Bayesian integration model by Saunders and Niehorster (2010), can reproduce the observed heading biases in the lateral motion and the motion-in-depth conditions. However, unlike the population heading map model, this model is not formulated in neurophysiological terms. Similarly, the differential motion detectors in the model by Royden (2002) are also inconsistent with the reported neurophysiological properties of the primate visual areas MT and MSTd (Berezovskii & Born, 2000). 
The models by Layton and colleagues (2012, 2016c) do not compute self-motion parameters but produce heading biases by shifting peak activities in a heading map when an object moves and occludes the FOE in the background optic flow. It remains unclear whether these models can produce similar peak shifts when a moving object does not occlude the background FOE, as shown in our experiments. Furthermore, unlike the population heading map model, these models are designed specifically to explain heading biases introduced by a moving object. They contain only expansion templates and thus cannot explain other, more general aspects of heading perception such as optic flow illusions (Duffy & Wurtz, 1993) or the ability to estimate heading in the presence of eye rotations (e.g., Li & Warren, 2000; Warren & Hannon, 1988). 
Neural mechanisms for the perception of heading and the perception of scene-relative object motion during self-motion
While the results of the current study clearly show that in the presence of a moving object, heading is perceived from the raw, unsegmented global flow field, this does not mean that object motion cannot be segmented from the flow field. In fact, object motion is clearly visible and distinguishable in our visual stimuli, and much research has shown that the visual system can identify an independently moving object in the flow field (e.g., Niehorster & Li, 2017; Rushton & Warren, 2005; Warren & Rushton, 2009). Yet, this capability is apparently not used in heading estimation. 
We conclude, consistent with other observations (Rushton, Niehorster, Warren, & Li, 2018; Warren, Rushton, & Foulkes, 2012; Yu, Hou, Spillmann, & Gu, 2018), that the perception of self-motion with independent object motion and the perception of object motion during self-motion are performed by different neural mechanisms. Self-motion (specifically heading) estimation has been linked extensively to the primate visual area MSTd (e.g., Britten & van Wezel, 1998; Duffy & Wurtz, 1995; Lappe et al., 1996). Recent neurophysiological studies found that object motion can bias the responses of neurons in area MSTd (Logan & Duffy, 2005), and that microstimulation in area MT, the major input to area MST, also produces heading biases similar to those induced by object motion (Yu et al., 2018). These observations support the view that MSTd contains a population heading map that pools motion signals across the visual field for heading estimation without any segmentation of independent object motion. In contrast, a subset of MT neurons with differential motion properties could provide the segmentation of object motion from the global flow field (Layton & Fajen, 2016d; Royden, Sannicandro, & Webber, 2015), which is then further elaborated either in MSTl (Eifuku & Wurtz, 1999) or in a subset of MSTd neurons that respond to local motion in addition to heading stimuli (Krekelberg, Paolini, Bremmer, Lappe, & Hoffmann, 2001). Accordingly, these neurons could serve the perception of scene-relative object motion during self-motion. Further neurophysiological research is needed to pin down the exact sites and pathways involved in this process. 
Acknowledgments
This study was supported by research grants from the Hong Kong Research Grants Council (HKU 746013H & G-HKU704/14), the Shanghai Science and Technology Committee (17ZR1420100), the NYU-ECNU Joint Research Institute, the German Science Foundation (DFG LA952/7), and the German Academic Exchange Service (DAAD), and by a PhD Fellowship from the Hong Kong Research Grants Council (PF09-03850). LL, NL, DCN, and QS designed, ran, and analyzed the three experiments. ML designed, ran, and analyzed the computational modeling. The paper was written by LL, ML, NL, DCN, and QS. 
Commercial relationships: none. 
Corresponding author: Li Li. 
Address: New York University Shanghai, Shanghai, PRC. 
References
Adiv, G. (1985). Determining three-dimensional motion and structure from optical flow generated by several moving objects. IEEE Transactions on Pattern Analysis and Machine Intelligence, 7 (4), 384–401.
Berezovskii, V. K., & Born, R. T. (2000). Specificity of projections from wide-field and local motion-processing regions within the middle temporal visual area of the owl monkey. Journal of Neuroscience, 20, 1157–1169.
Bremmer, F., Churan, J., & Lappe, M. (2018). Heading representations are compressed during saccades. Nature Communications, 8 (920), 1–13.
Bremmer, F., Kubischik, M., Pekel, M., Hoffmann, K. P., & Lappe, M. (2010). Visual selectivity for heading in monkey area MST. Experimental Brain Research, 200 (1), 51–60.
Britten, K. H. (1999). Motion perception: How are moving images segmented? Current Biology, 9 (19), R728–R730.
Britten, K. H., & van Wezel, R. J. (1998). Electrical microstimulation of cortical area MST biases heading perception in monkeys. Nature Neuroscience, 1 (1), 59–63.
Bruss, A. R., & Horn, B. K. (1983). Passive navigation. Computer Vision, Graphics, and Image Processing, 21 (1), 3–20.
Cutting, J. E., Springer, K., Braren, P. A., & Johnson, S. H. (1992). Wayfinding on foot from information in retinal, not optical, flow. Journal of Experimental Psychology: General, 121 (1), 41–72.
Duffy, C. J., & Wurtz, R. H. (1993). An illusory transformation of optic flow fields. Vision Research, 33 (11), 1481–1490.
Duffy, C. J., & Wurtz, R. H. (1995). Response of monkey MST neurons to optic flow stimuli with shifted centers of motion. Journal of Neuroscience, 15 (7), 5192–5208.
Eifuku, S., & Wurtz, R. H. (1999). Response to motion in extrastriate area MSTl: Disparity sensitivity. Journal of Neurophysiology, 82 (5), 2462–2475.
Frey, H. P., Honey, C., & König, P. (2008). What's color got to do with it? The influence of color on visual attention in different categories. Journal of Vision, 8 (14): 6, 1–17, http://doi.org/10.1167/8.14.6. [PubMed] [Article]
Gibson, J. J. (1950). The perception of the visual world. Boston, MA: Houghton Mifflin.
Gu, Y., Fetsch, C. R., Adeyemo, B., DeAngelis, G. C., & Angelaki, D. E. (2010). Decoding of MSTd population activity accounts for variations in the precision of heading perception. Neuron, 66 (4), 596–609.
Hanada, M. (2005). Computational analyses for illusory transformations in the optic flow field and heading perception in the presence of moving objects. Vision Research, 45 (6), 749–758.
Heeger, D. J., & Jepson, A. (1990). Visual perception of three-dimensional motion. Neural Computation, 2 (2), 129–137.
Heeger, D. J., & Jepson, A. D. (1992). Subspace methods for recovering rigid motion: I. Algorithm and implementation. International Journal of Computer Vision, 7 (2), 95–117.
Krekelberg, B., Paolini, M., Bremmer, F., Lappe, M., & Hoffmann, K. P. (2001). Deconstructing the receptive field: Information coding in macaque area MST. Neurocomputing, 38, 249–254.
Lappe, M. (1996). Functional consequences of an integration of motion and stereopsis in area MT of monkey extrastriate visual cortex. Neural Computation, 8 (7), 1449–1461.
Lappe, M., Bremmer, F., Pekel, M., Thiele, A., & Hoffmann, K. P. (1996). Optic flow processing in monkey STS: A theoretical and experimental approach. Journal of Neuroscience, 16 (19), 6265–6285.
Lappe, M., & Rauschecker, J. P. (1993). A neural network for the processing of optic flow from ego-motion in man and higher mammals. Neural Computation, 5 (3), 374–391.
Lappe, M., & Rauschecker, J. P. (1995a). An illusory transformation in a model of optic flow processing. Vision Research, 35 (11), 1619–1631.
Lappe, M., & Rauschecker, J. P. (1995b). Motion anisotropies and heading detection. Biological Cybernetics, 72 (3), 261–277.
Layton, O. W., & Fajen, B. R. (2016a). The temporal dynamics of heading perception in the presence of moving objects. Journal of Neurophysiology, 115 (1), 286–300.
Layton, O. W., & Fajen, B. R. (2016b). Sources of bias in the perception of heading in the presence of moving objects: Object-based and border-based discrepancies. Journal of Vision, 16 (1): 9, 1–18, http://doi.org/10.1167/16.1.9. [PubMed] [Article]
Layton, O. W., & Fajen, B. R. (2016c). Competitive dynamics in MSTd: A mechanism for robust heading perception based on optic flow. PLoS Computational Biology, 12 (6), e1004942.
Layton, O. W., & Fajen, B. R. (2016d). A neural model of MST and MT explains perceived object motion during self-motion. Journal of Neuroscience, 36 (31), 8093–8102.
Layton, O. W., Mingolla, E., & Browning, N. A. (2012). A motion pooling model of visually guided navigation explains human behavior in the presence of independently moving objects. Journal of Vision, 12 (1): 20, 1–19, http://doi.org/10.1167/12.1.20. [PubMed] [Article]
Li, L., & Warren, W. H. (2000). Perception of heading during rotation: Sufficiency of dense motion parallax and reference objects. Vision Research, 40 (28), 3873–3894.
Logan, D. J., & Duffy, C. J. (2005). Cortical area MSTd combines visual cues to represent 3-D self-movement. Cerebral Cortex, 16 (10), 1494–1507.
Niehorster, D. C., & Li, L. (2017). Accuracy and tuning of flow parsing for visual perception of object motion during self-motion. i-Perception, 8 (3), 1–18.
Nothdurft, H. C. (1993). The role of features in preattentive vision: Comparison of orientation, motion and color cues. Vision Research, 33 (14), 1937–1958.
Pauwels, K., & Van Hulle, M. M. (2004, May–June). Segmenting independently moving objects from egomotion flow fields. In Wörgötter, F. (Chair), Early Cognitive Vision Workshop, Isle of Skye, SCT.
Perrone, J. A., & Stone, L. S. (1994). A model of self-motion estimation within primate extrastriate visual cortex. Vision Research, 34 (21), 2917–2938.
Poom, L., & Börjesson, E. (2005). Colour, polarity, disparity, and texture contributions to motion segregation. Perception, 34 (10), 1193–1203.
Raudies, F., & Neumann, H. (2013). Modeling heading and path perception from optic flow in the case of independently moving objects. Frontiers in Behavioral Neuroscience, 7, 23.
Royden, C. S. (2002). Computing heading in the presence of moving objects: A model that uses motion-opponent operators. Vision Research, 42 (28), 3043–3058.
Royden, C. S., & Hildreth, E. C. (1996). Human heading judgments in the presence of moving objects. Attention, Perception, & Psychophysics, 58 (6), 836–856.
Royden, C. S., Sannicandro, S. E., & Webber, L. M. (2015). Detection of moving objects using motion- and stereo-tuned operators. Journal of Vision, 15 (8): 21, 1–17, http://doi.org/10.1167/15.8.21. [PubMed] [Article]
Rushton, S. K., Niehorster, D. C., Warren, P. A., & Li, L. (2018). The primary role of flow processing in the identification of scene-relative object movement. Journal of Neuroscience, 38 (7), 1737–1743.
Rushton, S. K., & Warren, P. A. (2005). Moving observers, relative retinal motion and the detection of object movement. Current Biology, 15, R542–R543.
Saarela, T. P., & Landy, M. S. (2012). Combination of texture and color cues in visual segmentation. Vision Research, 58, 59–67.
Saunders, J. A., & Niehorster, D. C. (2010). A Bayesian model for estimating observer translation and rotation from optic flow and extra-retinal input. Journal of Vision, 10 (10): 7, 1–22, http://doi.org/10.1167/10.10.7. [PubMed] [Article]
Turatto, M., & Galfano, G. (2000). Color, form and luminance capture attention in visual search. Vision Research, 40 (13), 1639–1643.
Van den Berg, A. V. (1992). Robustness of perception of heading from optic flow. Vision Research, 32, 1285–1296.
Van den Berg, A. V., & Brenner, E. (1994). Humans combine the optic flow with static depth cues for robust perception of heading. Vision Research, 34 (16), 2153–2167.
Warren, W. H., & Hannon, D. J. (1988, November 10). Direction of self-motion is perceived from optical flow. Nature, 336 (6195), 162–163.
Warren, W. H., Morris, M. W., & Kalish, M. (1988). Perception of translational heading from optical flow. Journal of Experimental Psychology: Human Perception and Performance, 14 (4), 646–660.
Warren, P. A., & Rushton, S. K. (2009). Optic flow processing for the assessment of object movement during ego movement. Current Biology, 19 (18), 1555–1560.
Warren, P. A., Rushton, S. K., & Foulkes, A. J. (2012). Does optic flow parsing depend on prior estimation of heading? Journal of Vision, 12 (11): 7, 1–14, http://doi.org/10.1167/12.11.7. [PubMed] [Article]
Warren, W. H., & Saunders, J. A. (1995). Perceiving heading in the presence of moving objects. Perception, 24 (3), 315–331.
Yu, X., Hou, H., Spillmann, L., & Gu, Y. (2018). Causal evidence of motion signals in macaque middle temporal area weighted-pooled for global heading perception. Cerebral Cortex, 28 (2), 612–624.
Figure 1
Schematic illustrations of the visual stimuli in Experiment 1: (a) the lateral motion condition, in which the dots within the opaque window moved laterally to the right on the display screen; (b) the motion-in-depth condition, in which the dots moved laterally to the right while simultaneously moving toward the observer; (c) the random motion condition, in which each dot moved in a random direction on the display screen; and (d) the no motion condition, in which no dots were shown inside the opaque window. The direction of the simulated observer translation (i.e., the FOE of the background flow field) is indicated by the white circle (absent in the experiment) in the center, and the opaque window is offset 5° to the right of the background FOE. The boundaries of the opaque window (absent in the experiment) are shown for illustration purposes only.
Figure 2
Experiment 1 data. (a) Mean heading errors averaged across participants as a function of position offset for the lateral motion (red lines) and motion-in-depth (green lines) conditions in Experiment 1 (solid lines) and the control experiment (dashed lines). Positive heading errors indicate a heading bias in the same direction as object lateral motion, and negative heading errors indicate a bias in the opposite direction to object lateral motion. (b) Mean heading errors for the random motion (blue line) and no motion (purple line) conditions in Experiment 1. Positive heading errors indicate a heading bias to the right of the FOE in the background optic flow, and negative heading errors indicate a bias to the left of the background FOE. Error bars are SEs across 11 participants in Experiment 1 and across five participants in the control experiment.
Figure 3
Experiment 2 data. (a) Mean heading errors averaged across participants as a function of position offset for the lateral motion (red line) and motion-in-depth (green line) conditions. Positive heading errors indicate a heading bias in the same direction as object lateral motion, and negative heading errors indicate a bias in the opposite direction to object lateral motion. (b) Mean heading errors for the random motion condition (blue line). Positive heading errors indicate a heading bias to the right of the FOE in the background optic flow, and negative heading errors indicate a bias to the left of the background FOE. Error bars are SEs across 11 participants.
Figure 4
Experiment 3 data. (a) Mean heading errors averaged across participants as a function of position offset for the lateral motion (red lines) and motion-in-depth (green lines) conditions with stereo (solid lines) and nonstereo (dashed lines) displays. Positive heading errors indicate a heading bias in the same direction as object lateral motion, and negative heading errors indicate a bias in the opposite direction to object lateral motion. (b) Mean heading errors for the random motion condition (blue lines). Positive heading errors indicate a heading bias to the right of the FOE in the background optic flow, and negative heading errors indicate a bias to the left of the background FOE. Error bars are SEs across 14 participants.
Figure 5
Model simulation data. (a) Mean heading errors as a function of position offset for the lateral motion (red) and motion-in-depth (green) conditions from the model simulations (solid lines), along with the heading error data (mean ± 1 SE) from Experiment 1 (shaded areas). Positive heading errors indicate a heading bias in the same direction as object lateral motion, and negative heading errors indicate a bias in the opposite direction to object lateral motion. Error bars are SEs across 600 simulation runs (some are smaller than the data points). (b) Mean heading errors for the random motion condition (blue line), along with the heading error data (mean ± 1 SE) from Experiment 1 (shaded areas). Positive heading errors indicate a heading bias to the right of the FOE in the background optic flow, and negative heading errors indicate a bias to the left of the background FOE. Error bars are SEs across 300 simulation runs.
Supplement 1
Supplement 2
Supplement 3
Supplement 4