**While we know that humans are extremely sensitive to optic flow information about direction of heading, we do not know how they integrate information across the visual field. We adapted the standard cue perturbation paradigm to investigate how young adult observers integrate optic flow information from different regions of the visual field to judge direction of heading. First, subjects judged direction of heading when viewing a three-dimensional field of random dots simulating linear translation through the world. We independently perturbed the flow in one visual field quadrant to indicate a different direction of heading relative to the other three quadrants. We then used subjects' judgments of direction of heading to estimate the relative influence of flow information in each quadrant on perception. Human subjects behaved similarly to the ideal observer in terms of integrating motion information across the visual field with one exception: Subjects overweighted information in the upper half of the visual field. The upper-field bias was robust under several different stimulus conditions, suggesting that it may represent a physiological adaptation to the uneven distribution of task-relevant motion information in our visual world.**

**Figure 1**


- anchor trials, in which the optic flow in all four quadrants of the display depicted a common direction of heading, and
- perturbation trials, in which the optic flow in three of the quadrants depicted one of the four 6° eccentric directions of heading, while the other quadrant (randomly chosen) depicted a direction of heading that was 3° more or less eccentric than that depicted in the other three quadrants (see example in Figure 1C).

- One of the four quadrants contained the direction of heading location. This is referred to as the DOH quadrant. The base direction of heading was 6° off fixation along an imaginary 45° line bisecting the quadrant.
- Independently, one of the four quadrants was selected to contain the perturbation. This is referred to as the perturbation quadrant. The optic flow in the other three (nonperturbation) quadrants depicted the base direction of heading.
- The flow pattern in the perturbation quadrant was manipulated to indicate a direction of heading 3° more or less eccentric than the base direction of heading. With four options for the DOH quadrant, four options for the perturbation quadrant, and two options for the perturbation direction, the experiment consisted of 32 configurations for perturbation trials.
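The 4 × 4 × 2 factorial structure described above can be enumerated directly. This is an illustrative sketch of the design, not the authors' code; the variable names are ours:

```python
from itertools import product

# Sketch of the perturbation-trial design: 4 DOH quadrants x 4
# perturbation quadrants x 2 perturbation directions = 32 configurations.
QUADRANTS = ["UR", "LR", "UL", "LL"]   # upper/lower x right/left
PERTURBATIONS = [+3.0, -3.0]           # deg more/less eccentric than the base DOH

configs = [
    {"doh_quadrant": doh, "perturb_quadrant": pq, "perturbation_deg": delta}
    for doh, pq, delta in product(QUADRANTS, QUADRANTS, PERTURBATIONS)
]
print(len(configs))  # 32 perturbation-trial configurations
```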

*x* and *y* components of user estimates of heading, as described under Data analysis (below).

- 37 wild-card trials, generated by randomly placing DOHs in each of 37 evenly divided areas within the central occluded 10° of the display;
- 24 anchor trials (two each of the 12 possible anchor locations; Figure 1B); and
- 64 perturbation trials (two each of the 32 possible perturbation trial types).

**Figure 2**


(*x*, *y*) space of an imaginary frontoparallel image plane along with the (*x*, *y*) coordinates specifying the direction of heading indicated by the optic flow in each of the four quadrants (UR, LR, UL, and LL). In order to estimate the relative influence of each spatial quadrant on the estimation of DOH, we modeled the *x* and *y* components of subjects' DOH estimates as linear functions of the *x* and *y* components of the DOH depicted by the optic flow in each of the four quadrants:

$$\hat{x} = w_{\mathrm{UR}} x_{\mathrm{UR}} + w_{\mathrm{LR}} x_{\mathrm{LR}} + w_{\mathrm{UL}} x_{\mathrm{UL}} + w_{\mathrm{LL}} x_{\mathrm{LL}} \qquad (1)$$

$$\hat{y} = \lambda_{\mathrm{UR}} y_{\mathrm{UR}} + \lambda_{\mathrm{LR}} y_{\mathrm{LR}} + \lambda_{\mathrm{UL}} y_{\mathrm{UL}} + \lambda_{\mathrm{LL}} y_{\mathrm{LL}} \qquad (2)$$

where *w*_{UR} and *λ*_{UR} represent the strength of influence of the optic flow in the UR quadrant on subjects' estimates of direction of heading in the *x* and *y* directions, respectively, and similarly for the other three quadrants. As noted previously, the relative influence of optic flow should depend on the location of the flow relative to the DOH (functional quadrant); thus, we fit four separate models of the forms given by Equations 1 and 2 to subjects' direction of heading estimates for conditions in which the true direction of heading was in each of the four retinotopic quadrants (e.g., UR, UL). To directly conceptualize the relative influence of each quadrant, we normalized the weights to each quadrant so they would sum to 1. Therefore, the weights reported below represent the relative influence of the flow in each quadrant, independent of multiplicative biases in subjects' responses (e.g., compressing or expanding their estimates radially away from fixation).

*p*(**v⃗**_{m} | *H*; **x⃗**), where **v⃗**_{m} is a composite vector containing the measured velocity vectors for each dot in a display, **x⃗** is a vector containing the positions of each dot, and *H* is a two-dimensional vector representing the direction of heading. We further simplified the model to take as input only the directions of the local velocity vectors (**v⃗**_{m} contains unit-length vectors in the direction of measured flow for each dot) because local speed in the flow pattern contributes little information about direction of heading (relative to direction), both in theory (Crowell & Banks, 1996) and in practice (W. H. Warren, Blackwell, Kurtz, Hatsopoulos, & Kalish, 1991). Furthermore, direction discrimination thresholds are independent of dot speed over a broad range of speeds, including the dot speeds used in our stimuli; thus, the variance of sensory noise on direction can be assumed to be independent of speed (Crowell & Banks, 1996).

**v⃗**_{H} is a vector containing unit normal vectors representing the direction of motion of each dot predicted by the heading *H*. We assume that sensory measurements of dot direction are independent and follow a von Mises distribution centered on the true direction in the image, giving for the likelihood function

$$p(\vec{v}_m \mid H; \vec{x}) = \prod_{i=1}^{N} \frac{\exp(k_i \cos \theta_i)}{2\pi I_0(k_i)}$$

where *θ*_{i} is the angle between the measured velocity vector for dot *i* and the velocity vector predicted by a given direction of heading and *k*_{i} is the precision of the sensory measurement (to a good approximation, the inverse variance of the sensory noise; Equation 7). Here *r*_{i} and *θ*_{i} are the retinal eccentricity and direction of dot *i*, respectively, and *N* is the number of dots in the display. The precision was normalized by *N* to give equal performance with increasing number of dots.
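A minimal ideal-observer sketch along these lines: under pure translation, each dot's image motion points radially away from the focus of expansion, so the log-likelihood of a candidate heading is (up to terms constant in the heading) the sum of *k*·cos *θ*_{i} over dots. The dot layout, the single shared precision value, and the grid search below are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# For pure translation, image motion is radial away from the focus of
# expansion (FOE), so the direction predicted for dot i under heading H
# is the direction from H to the dot's position.
N = 100
pos = rng.uniform(-20.0, 20.0, size=(N, 2))   # dot positions (deg)
true_H = np.array([6.0, 0.0])                 # true FOE
kappa = 8.0                                   # direction precision (assumed)

true_dirs = np.arctan2(pos[:, 1] - true_H[1], pos[:, 0] - true_H[0])
measured = true_dirs + rng.vonmises(0.0, kappa, size=N)  # noisy directions

def log_likelihood(H):
    """Von Mises log-likelihood of measured directions under heading H
    (terms constant in H are dropped)."""
    pred = np.arctan2(pos[:, 1] - H[1], pos[:, 0] - H[0])
    return kappa * np.cos(measured - pred).sum()

# Maximum-likelihood heading by brute-force grid search.
grid = np.linspace(-10.0, 10.0, 81)
scores = np.array([[log_likelihood(np.array([x, y])) for x in grid]
                   for y in grid])
iy, ix = np.unravel_index(np.argmax(scores), scores.shape)
print(grid[ix], grid[iy])  # should land near the true FOE
```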

*y* dimension and least reliable in the *x* dimension. Because the focus of expansion is located along the 45° axis, the uncertainty ellipses are oriented obliquely. The uncertainty ellipses are oriented oppositely in the same-hemifield quadrant (Figure 3D), where information about the focus of expansion is more reliable in the *x* dimension than in the *y* dimension; as a result, the uncertainty ellipses are narrower in the *x* dimension. The information in the same quadrant as the focus of expansion (Figure 3A) is relatively reliable in both the *x* and *y* dimensions, but when information from all four quadrants is combined (Figure 3E), reliability increases further and the uncertainty is less elongated along the 45° axis.

**Figure 3**


*y* dimension. Thus, a perturbation in this quadrant will shift the ideal observer's estimate more strongly in the *y* dimension than in the *x* dimension. Figure 3C indicates that if the diagonal quadrant contained the perturbation, its likelihood function would be shifted along its long axis, and the ideal observer would show a minimal change in its estimate of the DOH, equal in the *x* and *y* directions. This change would be minimal because of the high uncertainty (represented by wide likelihood curves) of information in the diagonal quadrant.
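The reliability-weighted behavior described here can be illustrated with Gaussian likelihoods: combining per-quadrant estimates by inverse-covariance (precision) weighting shows that perturbing a high-uncertainty quadrant barely moves the combined estimate. The covariance values below are invented for illustration:

```python
import numpy as np

# Precision-weighted combination of per-quadrant DOH likelihoods. Each
# quadrant contributes a Gaussian likelihood N(mu_q, Sigma_q); the
# combined maximum-likelihood estimate is
#   mu = (sum_q Sigma_q^-1)^-1 * sum_q (Sigma_q^-1 mu_q).
def combine(mus, sigmas):
    precisions = [np.linalg.inv(S) for S in sigmas]
    total_cov = np.linalg.inv(sum(precisions))
    return total_cov @ sum(P @ m for P, m in zip(precisions, mus))

base = np.array([6.0, 6.0])   # DOH depicted by all four quadrants (deg)
# Invented covariances: DOH quadrant tight; cross- and same-hemifield
# quadrants elongated in y and x respectively; diagonal quadrant broad.
sigmas = [np.diag([0.5, 0.5]), np.diag([1.0, 4.0]),
          np.diag([4.0, 1.0]), np.diag([6.0, 6.0])]

unperturbed = combine([base] * 4, sigmas)
# Perturb only the broad (diagonal) quadrant by +3 deg in x:
perturbed = combine([base, base, base, base + np.array([3.0, 0.0])], sigmas)
print(perturbed - unperturbed)  # small x shift: low reliability, low weight
```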

**Figure 4**


*x* and *y* components of subjects' direction of heading judgments. This gave 32 weights for each subject, indexed by three factors: a quadrant's functional location (DOH, same hemifield, cross-hemifield, or diagonal), its retinotopic location (UR, LR, UL, LL), and the component of the DOH judgment influenced by the perturbation (horizontal, *x*; vertical, *y*). A three-way repeated measures analysis of variance on subjects' measured weights revealed three significant effects: a main effect of functional location, *F*(3, 15) = 10.89, *p* < 0.0005; a main effect of retinotopic location, *F*(3, 15) = 8.34, *p* < 0.0017; and an interaction between functional location and the direction of the measured effect of a perturbation (*x* or *y*), *F*(3, 15) = 6.43, *p* < 0.005. No other effects approached significance.

(*x*, *y*) in which the weight was computed as a third factor. The only significant effect was a main effect of vertical position within the visual field, *F*(1, 5) = 13.58, *p* < 0.014. As shown in Figure 4B, subjects weighted flow information in quadrants above the visual field midline more than flow information in quadrants below it.

**Figure 5**


- 37 wild-card trials: Directions of heading were chosen for wild-card trials by random selection (without replacement) of one of 37 equally spaced intervals within the 40° horizontal range. For each trial, the direction of heading was randomly chosen from a uniform distribution within the interval chosen for that trial. This enforced uniform sampling of the horizontal range on wild-card trials.
- 60 anchor trials: Directions of heading were randomly chosen from the set [−9, −6, −3, 3, 6, 9] degrees eccentricity along the horizontal midline (Figure 5), subject to the constraint that each anchor direction of heading was tested 10 times.
- 80 perturbation trials: On perturbation trials, the flow in either the upper or lower hemifield simulated a direction of heading of 6° right or left of fixation, while the flow in the other hemifield simulated a direction of heading of 3° or 9° on the same side of fixation. This created eight different perturbation conditions (upper/lower hemifield, left/right of fixation, ±3° perturbation), each of which was repeated at random 10 times.
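The session composition above can be sketched as follows. This is a hypothetical reconstruction: the mapping of the 40° range to ±20°, the randomization procedure, and any counterbalancing constraints are assumptions:

```python
import random

random.seed(0)

# Hypothetical sketch of one session's trial list.
ANCHORS = [-9, -6, -3, 3, 6, 9]   # deg along the horizontal midline

# 37 wild-card trials: one DOH drawn uniformly from each of 37 equal
# intervals tiling the 40 deg range (assumed here to span +/-20 deg).
edges = [-20 + 40 * i / 37 for i in range(38)]
wildcards = [("wildcard", random.uniform(edges[i], edges[i + 1]))
             for i in range(37)]

# 60 anchor trials: each of the six anchor DOHs tested 10 times.
anchors = [("anchor", a) for a in ANCHORS for _ in range(10)]

# 80 perturbation trials: (hemifield x side x perturbation) x 10 repeats.
perturbs = [("perturbation", hemi, side, delta)
            for hemi in ("upper", "lower")
            for side in ("left", "right")
            for delta in (+3, -3)
            for _ in range(10)]

trials = wildcards + anchors + perturbs
random.shuffle(trials)
print(len(trials))  # 37 + 60 + 80 = 177 trials per session
```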

*x*_{u} is the horizontal direction of heading specified by the flow in the upper hemifield and *x*_{L} is the horizontal direction of heading specified by the flow in the lower hemifield. We calculated weights for the leftward and rightward headings and for the two stimulus conditions separately. The weights were normalized to sum to 1, so an equal distribution of influence to the upper and lower hemifields of visual space would result in an upper-field weight of 0.5. We modeled the upper-field weight as a linear function of task and the lateral position of the DOH,

$$w_{U} = 0.5 + \lambda_{U} + \lambda_{\mathrm{task}} X_{\mathrm{task}} + \lambda_{\mathrm{side}} X_{\mathrm{side}} + \lambda_{\mathrm{interaction}} X_{\mathrm{task}} X_{\mathrm{side}},$$

where *λ*_{U} represents the magnitude of subjects' average upper-field bias (a negative value represents an underweighting of the upper field), *λ*_{task} represents the effect of task on the upper-field bias, *λ*_{side} represents the effect of the lateral position of the DOH on the upper-field bias, and *λ*_{interaction} represents the interaction of the two.

**Figure 6**

Subjects showed an average *λ*_{U} of 0.08 ± 0.02 (Table 1). This represents an average upper/lower field weighting of 0.58 to 0.42 over all stimulus conditions. Weights to the upper visual field were significantly correlated within subjects between the two tasks (*r* = 0.796, *p* = 0.0007 two-tailed; averaging the weights for each subject between left and right presentations for each task). It should be noted that in this case, since subjects' weights are close to 50%, the assumption of normally distributed data is not unreasonable. Common statistical strategies for dealing with proportions, such as arcsine transformations or the Wilcoxon test, do not change the significance of any of the terms.

**Table 1**

*y = x* or *y = −x* line. In our experiment, we had to occlude the focus of expansion in order to measure motion integration across wide areas of visual space. However, we would predict that even in situations where the focus of expansion is visible, human observers would still utilize information according to its relative reliability. This prediction is supported by the fact that reliability-weighted integration has been observed across many different models of multicue integration, including those spanning other sensory domains (Ernst & Banks, 2002; Knill, Friedman, & Geisler, 2003; Alais & Burr, 2004; Gu et al., 2006; Brouwer & Knill, 2009; Issen & Knill, 2012).

*Current Biology*, 20, 2112–2116.

*Current Biology*, 14, 257–262.

*Psychological Review*, 87, 435–469.

*Experimental Brain Research*, 16, 476–491.

*Journal of Vision*, 9 (1): 24, 1–19, http://www.journalofvision.org/content/9/1/24, doi:10.1167/9.1.3. [PubMed] [Article]

*Nature*, 390, 512–515.

*Spatial Vision*, 15, 61–75.

*Neuron*, 37, 1001–1011.

*Journal of Comparative Neurology*, 226, 544–564.

*Vision Research*, 36, 471–490.

*Perception*, 28, 1075–1087.

*PLoS One*, 8 (2), e56862, doi:10.1371/journal.pone.0056862.

*Vision Research*, 35, 2927–2941.

*Nature*, 415, 429–433.

*The perception of the visual world*. Boston, MA: Houghton Mifflin.

*Journal of the Optical Society of America A*, 15, 2003–2011.

*Journal of Neuroscience*, 26, 73–85.

*Vision Research*, 42, 1619–1626.

*Vision Research*, 32, 97–104.

*Journal of Vision*, 12 (1): 3, 1–13, http://www.journalofvision.org/content/12/1/3, doi:10.1167/12.1.3. [PubMed] [Article]

*Journal of the Optical Society of America A*, 20, 1232–1233.

*Journal of Neurophysiology*, doi:10.1152/jn.00697.2013.

*Journal of Vision*, 14 (10): 1, http://www.journalofvision.org/content/14/10/1, doi:10.1167/14.10.3. [Abstract]

*Journal of Comparative Neurology*, 266, 535–555.

*Behavioral and Brain Sciences*, 13, 519–542.

*Perception and Psychophysics*, 58, 836–856.

*Current Biology*, 8, 1191–1194.

*Journal of Neuroscience*, 8, 1531–1568.

*Vision Research*, 24, 429–448.

*Current Biology*, 19, 1555–1560.

*Biological Cybernetics*, 65, 311–320.

*Journal of the Optical Society of America A*, 7, 160–169.

*Nature Neuroscience*, 4, 213–216.

*Attention, Perception, & Psychophysics*, 51, 443–454.

*Perception*, 24, 315–331.

*Vision Research*, 32, 2341–2347.

*Vision Research*, 29, 47–59.