The ability to localize visual objects is a fundamental component of human behavior and requires the integration of position information from object components. The retinal eccentricity of a stimulus and the locus of spatial attention can affect object localization, but it is unclear whether these factors alter the global localization of the object, the localization of object components, or both. We used psychophysical methods in humans to quantify behavioral responses in a centroid estimation task. Subjects located the centroid of briefly presented random dot patterns (RDPs). A peripheral cue was used to bias attention toward one side of the display. We found that although subjects were able to localize centroid positions reliably, they typically had a bias toward the fovea and a shift toward the locus of attention. We compared quantitative models that explain these effects either as biased global localization of the RDPs or as anisotropic integration of weighted dot component positions. A model that allowed retinal eccentricity and spatial attention to alter the weights assigned to individual dot positions best explained subjects' performance. These results show that global position perception depends on both the retinal eccentricity of stimulus components and their positions relative to the current locus of attention.

The dots were white squares (0.16° × 0.16°) presented on a black (0.4 cd/m²) background. On each trial, 25 unique dot positions were selected randomly from a grid of 712 possible dot positions within a radius of 15° from the fixation point. Each potential dot location in the grid was 1° away from its nearest horizontal and vertical neighbors. In addition, no dots appeared within a 2° × 2° square region surrounding the fixation point (Figure 1A). The actual centroids of the RDPs across all trials approximated a normal distribution with a horizontal and vertical mean of 0° and a standard deviation of 1.5°.
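
The trial-generation procedure above can be sketched in Python (the study itself ran in Matlab). The grid conventions here — boundary inclusion and the exact extent of the central exclusion zone — are assumptions, so the candidate count may differ slightly from the 712 positions reported.

```python
import numpy as np

def make_rdp(n_dots=25, radius=15.0, spacing=1.0, exclude_half=1.0, rng=None):
    """Sample unique dot positions for one random dot pattern (RDP)."""
    rng = np.random.default_rng() if rng is None else rng
    coords = np.arange(-radius, radius + spacing, spacing)
    xx, yy = np.meshgrid(coords, coords)
    grid = np.column_stack([xx.ravel(), yy.ravel()])
    inside = np.hypot(grid[:, 0], grid[:, 1]) <= radius             # within 15 deg of fixation
    outside_center = np.maximum(np.abs(grid[:, 0]),
                                np.abs(grid[:, 1])) > exclude_half  # skip 2 x 2 deg central zone
    candidates = grid[inside & outside_center]
    idx = rng.choice(len(candidates), size=n_dots, replace=False)   # 25 unique grid positions
    return candidates[idx]

dots = make_rdp(rng=np.random.default_rng(0))
true_centroid = dots.mean(axis=0)   # the position subjects are asked to report
```

Drawing the trial centroids toward a normal distribution (mean 0°, SD 1.5°), as in the study, would additionally require rejection sampling over candidate patterns.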

A cursor (0.51°) appeared at fixation 750 ms after target offset. Subjects were instructed to locate the centroid, i.e., the average position, of all dots presented on a trial by moving the cursor to the centroid using a computer mouse held in the right hand (regardless of handedness) and then clicking the left button.

The equations below are written for the horizontal (x) coordinates, but analogous equations were used for the vertical (y) coordinates.

The late bias model expresses the estimated horizontal centroid position, ĉ_x, as a simple function of the actual centroid, c_x. Here, we consider a simple linear bias according to

ĉ_x = β_x · c_x + ε_x, (Equation 1)

where β_x is a slope parameter that quantifies the magnitude of the eccentricity-dependent horizontal bias and ε_x is an error term along the horizontal dimension. The value of β in the fitted model for a given subject indicates whether the observer had an overall linear foveofugal (β > 1) or foveopetal (β < 1) bias in their centroid estimates relative to the point of fixation. If β = 1, then there was no overall linear bias due to the retinal eccentricity of the centroid position. The parameter ε represents a constant bias in the centroid estimates across all trials regardless of the position of the actual centroid. We determined a separate β and ε for the vertical coordinates.
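
Because the late bias model is linear in its two parameters, fitting it amounts to an ordinary least-squares regression of responses on actual centroids. A minimal Python sketch (the study used Matlab's curve-fitting routines; the data below are synthetic):

```python
import numpy as np

def fit_late_bias(c, r):
    """Least-squares fit of r ≈ beta * c + eps for one coordinate
    (c: actual centroids, r: responses); a sketch of the late bias model."""
    A = np.column_stack([c, np.ones_like(c)])
    (beta, eps), *_ = np.linalg.lstsq(A, r, rcond=None)
    return beta, eps

rng = np.random.default_rng(1)
c = rng.normal(0.0, 1.5, size=1000)              # actual centroids (deg)
r = 0.7 * c + 0.3 + rng.normal(0.0, 0.5, 1000)   # synthetic foveopetal observer (beta < 1)
beta, eps = fit_late_bias(c, r)
```

A recovered slope below 1 corresponds to the foveopetal bias described above; a slope above 1 would be foveofugal.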

The weighted average model retains the constant error term, ε. Specifically, the weighted average model for displays containing 25 dots (as used in this study) is

ĉ_x = (Σ_{i=1..25} ω_i x_i) / (Σ_{i=1..25} ω_i) + ε_x, (Equation 2)

where ω_i = ω(x_i, y_i) is a weighting function that assigns a weight to the ith dot on the basis of its horizontal and vertical positions in the visual field (see below). A dot position with a higher weight contributes more to the centroid estimate, ĉ_x, than a dot position with a lower weight. Preliminary, non-parametric analyses, in which we used a spatially gridded model and allocated weights to specific grid locations (up to 120), showed that the effects of eccentricity were well described by a unimodal, Gaussian-shaped weighting function anchored at the point of fixation. Therefore, we chose the following form:

ω(x_i, y_i) = a · exp(−x_i² / (2σ_x²) − y_i² / (2σ_y²)) + b. (Equation 3)

The free parameters are the horizontal and vertical widths (σ_x and σ_y) of the Gaussian function and a constant offset across all spatial positions, b. The amplitude of the Gaussian, a, was either +1 or −1 to model an upright or inverted Gaussian, respectively. By definition, all weights should be positive in a weighted average calculation; therefore, we constrained the weighting function to prevent negative weights (see below).

The weighted sum model uses the same weighting function ω (Equation 3), now allowing the amplitude (a) to range freely:

ĉ_x = Σ_{i=1..25} ω_i x_i + ε_x. (Equation 4)

We again constrained the weighting function to allow only positive weights. While this is not imperative in a weighted sum calculation as it is in a weighted average calculation, in the context of our model a negative weight would flip the sign of the dot component position, causing a dot to shift the perceived centroid toward the opposite hemifield. Preliminary (non-parametric) analyses showed that only one subject (S7) had a small subset (<10%) of negative weights. Therefore, to maximize the similarity between the weighted average and weighted sum models, parameter constraints remained consistent in both cases.
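
The two integration rules can be sketched as follows (a Python stand-in for the Matlab implementation; the parameter values are illustrative, not fitted):

```python
import numpy as np

def gauss_weight(x, y, a=1.0, b=0.2, sx=5.0, sy=5.0):
    """Fixation-anchored Gaussian weighting of dot positions (form of
    Equation 3); parameter values here are illustrative, not fitted."""
    w = a * np.exp(-x**2 / (2 * sx**2) - y**2 / (2 * sy**2)) + b
    return np.maximum(w, 0.0)                      # weights kept non-negative

def weighted_average_estimate(dots, eps=0.0, **kw):
    """Horizontal centroid estimate under the weighted average rule."""
    w = gauss_weight(dots[:, 0], dots[:, 1], **kw)
    return np.sum(w * dots[:, 0]) / np.sum(w) + eps

def weighted_sum_estimate(dots, eps=0.0, **kw):
    """Horizontal centroid estimate under the weighted sum rule."""
    w = gauss_weight(dots[:, 0], dots[:, 1], **kw)
    return np.sum(w * dots[:, 0]) + eps

# an upright Gaussian (a = +1) down-weights eccentric dots, pulling the
# estimate toward the fovea relative to the plain mean of the dot positions
demo = np.array([[12.0, 0.0], [1.0, 0.0]])
avg_est = weighted_average_estimate(demo)
sum_est = weighted_sum_estimate(demo)
```

With the upright Gaussian, the eccentric dot at 12° receives a much smaller weight than the dot at 1°, so the weighted average lands closer to fixation than the unweighted mean — the foveopetal pattern most subjects showed.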

To incorporate attention into the late bias model, we asked whether the cue adds a constant bias, ε, or modulates the eccentricity-dependent bias, β. We hypothesized that ε_x would differ between the left-cue and right-cue conditions and, specifically, would be greater in the right-cue condition.

To incorporate attention into the weighted sum model, we allowed the center of the Gaussian weighting function to shift away from fixation (μ_x and μ_y) and extended the weighting function with linear gradients in both the horizontal and vertical dimensions (m_x and m_y). Each model was fitted separately to the horizontal (x) and vertical (y) coordinates. Pearson's correlation analysis confirmed that, for each of the fitted models, the correlation between model predictions and subject responses was significant (t(>500) > 14, p < 10^−6).
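
The attention-extended weighting function can be sketched as below. The additive combination of the shifted Gaussian and the linear gradient terms is an assumption consistent with the parameters named in the text (μ_x, μ_y, m_x, m_y), not a quotation of the paper's equation:

```python
import numpy as np

def attn_weight(x, y, a=1.0, b=0.2, sx=5.0, sy=5.0,
                mux=0.0, muy=0.0, mx=0.0, my=0.0):
    """Weighting function with a movable Gaussian center (mux, muy) and
    linear gradients (mx, my). The additive combination below is an
    assumption, not a quotation of the paper's equation."""
    g = a * np.exp(-(x - mux)**2 / (2 * sx**2) - (y - muy)**2 / (2 * sy**2))
    return np.maximum(g + mx * x + my * y + b, 0.0)

# a positive horizontal gradient up-weights the right (attended) hemifield
w_right = attn_weight(5.0, 0.0, mx=0.05)
w_left = attn_weight(-5.0, 0.0, mx=0.05)
```

Under this form, a positive m_x raises weights throughout the right hemifield (the coarse effect of attention), while a rightward shift of μ_x concentrates extra weight near the cued location (the focal effect).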

We fitted the late bias model using the *lsqcurvefit* routine from the Optimization Toolbox in Matlab 7.9 (The MathWorks, Natick, MA). The non-negativity constraint on the weights (see above) required us to use constrained non-linear optimization to fit the weighted average and weighted sum models. To do this, we used the *fmincon* routine from the Optimization Toolbox in Matlab with the following constraint: (a + b) > 0. We also constrained the lower and upper bounds for each parameter as follows: μ_x and μ_y to −15 and 15 to keep the center of the Gaussian function within the stimulus display area; a, b, and ε to −100 and 100; and σ_x and σ_y to 0 and 7.5 so that the Gaussian function would reach an asymptote within the stimulus presentation area. Preliminary non-parametric analyses supported the use of 7.5° as the maximum value. We then repeated the curve fits from 1000 random initial parameter choices within these bounds and used the optimal set of parameter estimates for subsequent analysis.

We estimated confidence intervals for the fitted parameters with a bootstrap procedure, using the *bootci* function in Matlab. For each of 1000 bootstrapped sets, we resampled the data with replacement and reran the *fmincon* procedure with the optimal parameters as initial values.
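
The resampling scheme can be sketched as a percentile bootstrap in Python. Note two simplifications: Matlab's *bootci* defaults to the BCa method rather than the percentile method used here, and the statistic shown is a simple mean rather than a refit of the full model:

```python
import numpy as np

def bootstrap_ci(data, stat, n_boot=1000, alpha=0.05, rng=None):
    """Percentile bootstrap CI for a statistic: resample with replacement,
    recompute the statistic, and take the (alpha/2, 1 - alpha/2) quantiles."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(data)
    stats = np.array([stat(data[rng.integers(0, n, n)]) for _ in range(n_boot)])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

x = np.random.default_rng(3).normal(0.5, 1.0, 500)   # stand-in for per-trial estimates
lo, hi = bootstrap_ci(x, np.mean, rng=np.random.default_rng(4))
```

In the study, `stat` would be the full *fmincon* refit, so each resampled set yields one bootstrapped parameter vector.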

For model comparison, we computed the Akaike Information Criterion (AIC) from the residual sum of squares, SSE, as AIC = n · ln(SSE/n) + 2K, where n is the number of trials and K is the number of free parameters. This calculation assumes that the errors are normally distributed and have constant variance.
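
Under those assumptions, the least-squares AIC and the four-unit decision rule used later in the paper look like this (the SSE values and trial counts below are illustrative, not the study's):

```python
import numpy as np

def aic_lsq(sse, n, k):
    """AIC for a least-squares fit under Gaussian, constant-variance errors."""
    return n * np.log(sse / n) + 2 * k

# illustrative numbers: a model with 5 extra parameters must reduce the SSE
# enough to overcome its 2 * 5 = 10-unit penalty
n_trials = 900
delta = aic_lsq(3500.0, n_trials, k=10) - aic_lsq(3600.0, n_trials, k=5)
better = bool(delta < -4)   # four-or-more-unit rule (Burnham & Anderson, 2002)
```

The 2K term is what penalizes the weighted sum model for its extra parameters relative to the late bias model in the comparisons reported below.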

Psychometric functions were fitted using the *psignifit* toolbox version 2.5.6 (Wichmann & Hill, 2001a, 2001b) in Matlab 7.9. We determined confidence intervals for the point of subjective equality (PSE) using a bootstrapping method and used these confidence intervals to determine whether subject responses differed significantly between cuing conditions (95% CI method).
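
A minimal stand-in for this analysis fits a cumulative Gaussian to the proportion of "right" responses and reads off the PSE. This uses scipy instead of *psignifit*, and the stimulus levels, trial counts, and simulated observer are invented:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def pf(x, pse, sigma):
    """Cumulative Gaussian psychometric function: P('right') vs. line offset."""
    return norm.cdf(x, loc=pse, scale=sigma)

x_levels = np.linspace(-3.0, 3.0, 9)        # hypothetical reference-line offsets (deg)
rng = np.random.default_rng(5)
n_rep = 60                                  # hypothetical trials per offset
p_obs = rng.binomial(n_rep, pf(x_levels, 0.5, 1.0)) / n_rep   # simulated observer
(pse, sigma), _ = curve_fit(pf, x_levels, p_obs, p0=[0.0, 1.0],
                            bounds=([-3.0, 0.1], [3.0, 5.0]))
```

A horizontal shift of the fitted PSE between cue conditions is the signature of an attentional bias in this paradigm.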

Subject responses were significantly correlated with the actual centroid positions in all subjects (t(>500) > 14, p < 0.0001). This demonstrates that subjects used the positions of the dots on a trial-by-trial basis to guide their behavioral responses and did not simply click at the center of the screen.

The partial correlation between subject responses and the centroid of all the dots, given the centroid of the implied shape, was significantly greater (p < 0.0001) than the partial correlation between subject responses and the centroid of the implied shape (group median: 0.14), given the centroid of all the dots. Using the same methods, we also investigated whether subjects used the average position of the dots on the convex hull, and we found similar results. Therefore, there was no indication that subjects mainly used the outermost dots of the RDP when determining the centroid estimate. We explore and quantify other behavioral strategies in more detail in the Model selection and analysis section.
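
The partial-correlation logic — correlate responses with one candidate predictor after regressing out the other — can be sketched as follows (synthetic data; the variables are simplified one-dimensional proxies for the per-trial centroids):

```python
import numpy as np

def partial_corr(a, b, control):
    """Pearson correlation of a and b after linearly regressing out `control`."""
    X = np.column_stack([control, np.ones_like(control)])
    resid = lambda v: v - X @ np.linalg.lstsq(X, v, rcond=None)[0]
    return np.corrcoef(resid(a), resid(b))[0, 1]

rng = np.random.default_rng(6)
all_dots = rng.normal(0.0, 1.5, 2000)            # centroid of all dots (proxy)
shape = all_dots + rng.normal(0.0, 0.5, 2000)    # correlated alternative predictor
resp = all_dots + rng.normal(0.0, 0.5, 2000)     # responses track the all-dots centroid
r_shape = partial_corr(resp, shape, all_dots)    # ~0: shape adds no information
r_all = partial_corr(resp, all_dots, shape)      # stays substantial
```

Because the two candidate centroids are themselves correlated, the partial correlations — not the raw correlations — reveal which predictor carries the independent information.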

Centroid estimates showed a significant eccentricity-dependent bias in all subjects (t(>700) > 21, p < 0.0001). Importantly, attention yielded a significant horizontal bias in the direction of the attended locus for all subjects (t(>1500) < −4.5, p < 0.001; see Figure 3B). The constant error, averaged across subjects, was −0.07° (horizontal; STE = 0.13°) and −0.21° (vertical; STE = 0.09°) in the left-cue condition, and 0.66° (horizontal; STE = 0.18°) and −0.27° (vertical; STE = 0.11°) in the right-cue condition. Only one subject (S1) showed a significant difference, of 0.22°, in the vertical direction (t(1797) = 3.45, p < 0.001). These differences were not due to subjects' eye position: mean horizontal eye position during presentation of the RDP did not differ significantly between the left-cue condition, −0.11° (STE = 0.08°), and the right-cue condition, −0.12° (STE = 0.08°), for any of the subjects.

The variable error of the centroid estimates did not differ significantly between cuing conditions (F = 1.848, p = 0.20) and ranged from 1.28° to 2.74° across the left-cue and right-cue conditions.

In this control experiment, subjects reported whether the *perceived centroid was to the left or right of a reference line* that appeared briefly after the offset of the RDP. Even in this paradigm, though, there is the possibility of a motor-response bias. To determine whether the cue biased the subjects' selection of button presses or their perception, we also reversed the task instructions (in separate sessions); that is, subjects were asked to report *whether the line was to the left or right of the perceived centroid*.

The slope parameter, β, characterizes the eccentricity-related bias across the horizontal or vertical dimensions of the visual field. A β significantly less than 1 indicates an overall foveopetal bias in subjects' centroid estimates: subjects tended to report the centroid to be closer to the fovea than its true position. A β significantly greater than 1 indicates an overall linear foveofugal bias. Four of the seven subjects had a significant foveopetal bias in the perceived centroid in both the horizontal and vertical dimensions (β < 1 [95% CI method]; mean = 0.68, STE = 0.03; Figure 5). In contrast, two of the remaining three subjects had a significant foveofugal bias in both dimensions; their responses therefore exaggerated the true eccentricity of the centroid (β > 1 [95% CI method]; mean = 1.30, STE = 0.1). The remaining subject had a significant foveofugal bias (β = 1.20) in the horizontal direction and a foveopetal bias (β = 0.78) in the vertical direction.

The median MPSE for the weighted sum model (1.03 deg² < MPSE < 7.69 deg²) was lower than the median MPSE for the late bias model, 3.94 deg² (1.09 deg² < MPSE < 8.57 deg²), and for the weighted average model, 4.16 deg² (1.24 deg² < MPSE < 9.28 deg²). These comparisons of relative model performance, however, do not take into account the fact that the late bias, weighted average, and weighted sum models have different numbers of free parameters. We used the Akaike Information Criterion (AIC) to overcome this limitation. Lower AIC values indicate a more parsimonious model, and one model is considered to outperform another significantly if its AIC value is lower than that of the comparison model by four or more units (Burnham & Anderson, 2002).

The constant error term, ε, in Equation 4 of the weighted sum model represents an additive late bias. For the majority of subjects (6 out of 7), this term differed significantly from zero in both the horizontal (mean = 0.17°; STE = 0.08°) and vertical directions (mean = −0.25°; STE = 0.12°). This suggests that a rightward and downward bias is applied after the integration of the dot components.

In the unilateral-cue conditions, the median MPSE for the weighted sum model (1.91 deg² < MPSE < 10.03 deg²) was lower than the median MPSE for the late bias model, 3.89 deg² (1.95 deg² < MPSE < 11.08 deg²). To determine whether the weighted sum model is truly the better model given its additional free parameters, we compared its AIC values to those of the late bias model in the unilateral-cue conditions (Figure 6B). In all subjects, the weighted sum model significantly outperformed the late bias model (AIC difference: median = −78.58, STE = 33.41). We therefore conclude that the weighted sum model gives the most parsimonious account of the influence of attention on spatial integration.

The horizontal gradient of the weighting function (m_x) was higher in the right-cue condition than in the left-cue condition for all subjects (individually significant in four out of seven subjects, 95% CI method). This shows a coarse effect of attention: weights in the attended visual field were generally greater than weights in the unattended field. At the same time, the peak of the weighting function (μ_x) was shifted rightward in the right-cue condition compared to the left-cue condition in five out of seven subjects (individually significant in two subjects, 95% CI method). This shows a more focused shift of attention toward the exogenous cue. Lastly, and unexpectedly, the late constant horizontal bias, ε_x, was larger in the left-cue condition than in the right-cue condition in all subjects. This difference was individually significant in four out of seven subjects (95% CI method). While this late bias is opposite to our expectation, additional analyses in which we omitted this term from the model generated qualitatively similar weight maps and had little effect on overall model performance.

*Vision Research*, 50, 1793–1802.

*Model selection and multi-model inference: A practical information-theoretic approach* (2nd ed.). New York: Springer-Verlag.

*Quarterly Journal of Experimental Psychology A: Human Experimental Psychology*, 43, 859–880.

*Vision Research*, 47, 1907–1923.

*Vision Research*, 49, 1037–1044.

*Perception*, 35, 1073–1087.

*Journal of Vision*, 10(10):20, 1–16, http://www.journalofvision.org/content/10/10/20, doi:10.1167/10.10.20.

*Perception & Psychophysics*, 53, 633–641.

*Attention, Perception, & Psychophysics*, 73, 809–828.

*Journal of the Optical Society of America A*, 8, 440–449.

*Visual Neuroscience*, 9, 181–197.

*Vision Research*, 18, 1217–1222.

*Annual Review of Neuroscience*, 23, 315–341.

*Journal of Vision*, 10(12):33, 1–13, http://www.journalofvision.org/content/10/12/33, doi:10.1167/10.12.33.

*Vision Research*, 35, 1741–1754.

*Vision Research*, 41, 529–539.

*Trends in Neurosciences*, 24, 335–339.

*Proceedings of the SPIE*, 1913, 506–517.

*Journal of the Optical Society of America A*, 18, 2307–2320.

*Biological Cybernetics*, 49, 111–118.

*Vision Research*, 38, 895–909.

*Vision Research*, 39, 2929–2946.

*Vision Research*, 30, 1793–1810.

*Journal of Experimental Psychology: Human Perception and Performance*, 15, 315–330.

*Perception & Psychophysics*, 61, 1646–1661.

*Perception & Psychophysics*, 36, 1–14.

*Behavioral Neuroscience*, 118, 237–242.

*Journal of Experimental Psychology: Human Perception and Performance*, 24, 261–282.

*Annual Review of Neuroscience*, 27, 611–647.

*Neuron*, 61, 168–185.

*Nature Neuroscience*, 10, 1483–1491.

*Perception*, 21, 289–296.

*Advances in Cognitive Psychology*, 6, 1–14.

*Journal of Experimental Psychology: Human Perception and Performance*, 23, 443–463.

*Psychonomic Bulletin & Review*, 6, 292–296.

*Psychological Research*, 62, 20–35.

*Vision Research*, 43, 1637–1653.

*Perception & Psychophysics*, 63, 1293–1313.

*Perception & Psychophysics*, 63, 1314–1329.

*Nature Neuroscience*, 9, 1156–1160.

*Journal of Neuroscience*, 28, 8934–8944.