With the eyes in forward gaze, stereo performance worsens when one eye's image is larger than the other's. Near, eccentric objects naturally create retinal images of different sizes. Does this mean that stereopsis exhibits deficits for such stimuli? Or does the visual system compensate for the predictable image-size differences? To answer this, we measured discrimination of a disparity-defined shape for different relative image sizes. We did so for different gaze directions, some compatible with the image-size difference and some not. Magnifications of 10–15% caused a clear worsening of stereo performance. The worsening was determined only by relative image size and not by eye position. This shows that no neural compensation for image-size differences accompanies eye-position changes, at least prior to disparity estimation. We also found that a local cross-correlation model for disparity estimation performs like humans in the same task, suggesting that the decrease in stereo performance due to image-size differences is a byproduct of the disparity-estimation method. Finally, we looked for compensation in an observer who has constantly different image sizes due to differing eye lengths. She performed best when the presented images were roughly the same size, indicating that she has compensated for the persistent image-size difference.

where *i* is the inter-pupillary distance, object position is (*X*, *Z*), and the left- and right-eye positions are (−*i*/2, 0) and (*i*/2, 0). Figure 1 plots the relative sizes of the two eyes' images as a function of the head-centric position of an object. The object is a small surface patch that is perpendicular to the line of sight (slant = 0°). The circles are iso-magnification contours representing object positions for which one eye's image is a particular percentage larger than the other eye's image. This figure shows that large relative magnifications occur in natural viewing of near, eccentric objects. This creates an interesting problem for disparity estimation via correlation: when the two images have different sizes, the correlation between them is necessarily reduced. Does this mean that stereopsis exhibits deficits for near, eccentric stimuli? Or does the visual system have a mechanism that compensates for the predictable image-size differences associated with such viewing?
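The iso-magnification contours follow from simple viewing geometry: a small patch's image size is roughly inversely proportional to its distance from each eye. As a sketch of that geometry (not a reproduction of the paper's equation, which is lost from this excerpt), the relative size of the left- and right-eye images is approximately

$$
\frac{s_L}{s_R} \;\approx\; \frac{d_R}{d_L} \;=\; \sqrt{\frac{(X - i/2)^2 + Z^2}{(X + i/2)^2 + Z^2}},
$$

where *d*_{L} and *d*_{R} are the distances from the patch at (*X*, *Z*) to the left and right eyes and *s*_{L} and *s*_{R} are the corresponding image sizes. For a near, eccentric patch (small *Z*, large |*X*|), this ratio deviates substantially from 1.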

where *R*_{l} and *R*_{r} are the horizontal rotation angles of the left and right eyes, respectively. Ogle proposed that when the eyes are fixating eccentrically, the retinal image in the nasally turned eye (the eye receiving the smaller retinal image) is magnified "psychologically" relative to the other eye's image; such magnification would be the reciprocal of the magnification in Equation 2. Ogle made no claim about where in visual processing the neural magnification occurs. We reasoned that if the hypothesized neural magnification occurred before the stage of disparity estimation (i.e., before correlating the two eyes' images), it would reduce the difference in the sizes of the represented images and thereby increase the reliability of disparity estimates. Consequently, one would predict that the deterioration of stereopsis that accompanies large differences in image size would not occur when the size differences are compatible with eye position. Ogle presented experimental evidence that the hypothesized neural magnification occurs and that its trigger is an extra-retinal, eye-position signal. Specifically, dichoptic images of different shapes but the same retinal size appeared to differ in size when the eyes were in eccentric gaze (Ames, Ogle, & Gliddon, 1932; Herzau & Ogle, 1937; Ogle, 1939).

^{1,2}

*average* image-size difference at the two eyes. And this in turn would guarantee that disparity estimation was most precise for the most likely surfaces.

^{2}). Horizontal disparities were then created by shifting the dots horizontally in opposite directions in the two eyes' stimuli. The disparities specified a sinusoidal corrugation in depth with a spatial frequency at screen center of 0.4 cycles/deg and a peak-to-trough amplitude of 20.4 arcmin. The relative phase of the corrugation waveform was randomized. The corrugation was oriented −10° or +10° from horizontal, and the observer's task was to identify which of the two orientations had appeared after each stimulus presentation. The stimulus area was circular with a diameter of 10°. A fixation cross was always present to help the observer maintain appropriate binocular eye alignment. Dot size was randomized from 1.6 to 3.3 arcmin to minimize monocular cues to stimulus orientation.
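For concreteness, the disparity assigned to each dot can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names are hypothetical, and the sketch treats the corrugation frequency as constant across the display (the paper specifies 0.4 cycles/deg at screen center).

```python
import numpy as np

def corrugation_disparity(x, y, theta_deg, freq=0.4, amp_pt=20.4, phase=0.0):
    """Horizontal disparity (arcmin) for a dot at (x, y) (in deg) on a
    sinusoidal depth corrugation oriented theta_deg from horizontal.
    freq is in cycles/deg; amp_pt is peak-to-trough amplitude in arcmin.
    Illustrative reconstruction -- not the authors' code."""
    th = np.deg2rad(theta_deg)
    # signed distance along the axis perpendicular to the corrugation ridges
    u = -x * np.sin(th) + y * np.cos(th)
    return 0.5 * amp_pt * np.sin(2.0 * np.pi * freq * u + phase)

def shift_dots(x, y, theta_deg, phase=0.0):
    """Split each dot's disparity equally between the eyes: shift it in
    opposite horizontal directions in the two half-images."""
    d_deg = corrugation_disparity(x, y, theta_deg, phase=phase) / 60.0
    return x - d_deg / 2.0, x + d_deg / 2.0  # left-eye x, right-eye x
```

With `theta_deg = ±10` and `phase` drawn at random each trial, the signed difference between the returned right- and left-eye coordinates reproduces the 20.4-arcmin peak-to-trough corrugation.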

*p* > 0.37 in all conditions and for all subjects), confirming that stereopsis was required to perform the task.

where *L*(*x*, *y*) and *R*(*x*, *y*) are the image intensities in the left and right half-images, *W*_{L} and *W*_{R} are the windows applied to the half-images (2D isotropic Gaussians), and *μ*_{L} and *μ*_{R} are the mean intensities within the two windowed images. Because uniform magnification alters both horizontal and vertical disparities, we needed to estimate disparities in two dimensions. Thus, *δ*_{x} is the horizontal displacement of *W*_{R} relative to *W*_{L}, and *δ*_{y} is the vertical displacement (where displacement corresponds to disparity).
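These definitions are consistent with a standard windowed normalized cross-correlation; one plausible form (a sketch, since the paper's equation is not reproduced in this excerpt) is

$$
c(\delta_x, \delta_y) = \frac{\sum_{x,y} W_L(x,y)\,[L(x,y)-\mu_L]\; W_R(x,y)\,[R(x+\delta_x,\, y+\delta_y)-\mu_R]}
{\sqrt{\sum_{x,y} W_L(x,y)\,[L(x,y)-\mu_L]^2 \;\sum_{x,y} W_R(x,y)\,[R(x+\delta_x,\, y+\delta_y)-\mu_R]^2}},
$$

where the maximum of *c* over (*δ*_{x}, *δ*_{y}) gives the disparity estimate.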

We moved *W*_{L} along two straight trajectories, −10° and +10° from horizontal: trajectories parallel to the two possible stimulus orientations. For each position of *W*_{L}, we computed the correlation between the left- and right-eye samples for different positions of *W*_{R}. To minimize computation time, the trajectory of *W*_{L} was restricted to one line for −10° and another line for +10°; this simplification did not affect the pattern of results, but it did increase coherence thresholds uniformly.

*W*_{R} was shifted both horizontally and vertically with respect to *W*_{L}. Figure 7 shows example half-images and the associated cross-correlation output. The *x*-axis represents the position of *W*_{L} along its trajectory. The *y*- and *z*-axes represent, respectively, the horizontal and vertical displacement of *W*_{R} relative to *W*_{L} (corresponding to horizontal and vertical disparities).
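The model's core computation can be sketched in a few lines. This is a minimal illustration, assuming a standard windowed normalized cross-correlation and an exhaustive 2-D search at a single window location (the actual model moves *W*_{L} along the ±10° trajectories); all function names and parameter values are hypothetical.

```python
import numpy as np

def gaussian_window(size, sigma):
    """2-D isotropic Gaussian window, standing in for W_L and W_R."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))

def windowed_ncc(L, R, w):
    """Normalized cross-correlation of two equal-size patches under
    window w, with means mu_L, mu_R taken over the windowed patches."""
    mu_L = (w * L).sum() / w.sum()
    mu_R = (w * R).sum() / w.sum()
    dL, dR = L - mu_L, R - mu_R
    num = (w * dL * dR).sum()
    den = np.sqrt((w * dL**2).sum() * (w * dR**2).sum())
    return num / den

def estimate_disparity(left, right, cx, cy, size=15, sigma=4.0, max_d=5):
    """Exhaustive search over 2-D displacements (dx, dy) of the right-eye
    window relative to the left-eye window; the displacement giving the
    highest correlation is the disparity estimate at (cx, cy)."""
    half = size // 2
    w = gaussian_window(size, sigma)
    Lp = left[cy - half:cy + half + 1, cx - half:cx + half + 1]
    best_c, best_d = -np.inf, (0, 0)
    for dy in range(-max_d, max_d + 1):
        for dx in range(-max_d, max_d + 1):
            Rp = right[cy + dy - half:cy + dy + half + 1,
                       cx + dx - half:cx + dx + half + 1]
            c = windowed_ncc(Lp, Rp, w)
            if c > best_c:
                best_c, best_d = c, (dx, dy)
    return best_d
```

Because the right-eye window is displaced both horizontally and vertically, the same search accommodates the vertical disparities introduced by uniform magnification.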

We varied the sizes of *W*_{L} and *W*_{R} from 6 to 30 arcmin to investigate how window size affects behavior in the task of Experiment 1. We chose 6 arcmin because there is evidence that this size corresponds to the smallest window used by the human visual system (Filippini & Banks, 2009; Harris et al., 1997). We chose 30 arcmin because that size is still small enough not to encroach on the Nyquist sampling limit given the spatial frequency of the corrugation waveform. Figure 8 shows the results. Window size had little, if any, effect on how relative magnification affected disparity estimation, and therefore little effect on the model's behavior in this task.

where *t* is coherence threshold, *S* is the percentage of image magnification (negative percentages represent larger images in the left eye than in the right), and *a*, *b*, *c*, and *m* are fitting parameters. To do the fitting, we weighted each data point by the inverse of its squared standard error and used a least-squares criterion on the weighted values. We were most interested in the value of *m* for the best-fitting function because it represents the magnification for which coherence threshold was lowest. Taking into account the fact that she wore her contact lenses while being tested, the expected *m* would be 0.79% if she were adapted to her aniseikonia when wearing no optical correction (purple dashed line in the figure), 0% if she were adapted when wearing her contact lenses (blue dashed line), −3.24% if she were adapted to her spectacles (red dashed line), and −4.61% if she were not adapted at all (green dashed line). The value of *m* for the best-fitting function was −0.30% (gray line). With bootstrapping, we determined that the 95% confidence interval for *m* is −1.37% to 0.94%. The predictions for adaptation to contact lenses and to no optical correction fall well within this interval, whereas the predictions for no adaptation and for adaptation to her spectacles do not. Thus, the data show that she has adapted to her optical aniseikonia, but we cannot determine whether she adapted to no optical correction or to her contact-lens correction.
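The fitting procedure can be sketched as weighted least squares. The paper's exact functional form for threshold versus magnification is not reproduced in this excerpt, so this illustration assumes a simple parabola *t* = *a* + *b*(*S* − *m*)², dropping the fourth parameter *c*; for each candidate *m* the model is linear in (*a*, *b*), so the fit reduces to a weighted linear solve plus a 1-D search over *m*. All names are hypothetical.

```python
import numpy as np

def fit_threshold_curve(S, t, se, m_grid):
    """Weighted least-squares fit of t ~ a + b*(S - m)**2.
    Each point is weighted by 1/se**2 (inverse squared standard error).
    For a fixed m the model is linear in (a, b), so we solve a weighted
    linear system per candidate m and keep the m with the lowest
    weighted sum of squared errors.  Illustrative simplification of the
    paper's 4-parameter fit."""
    wgt = 1.0 / se**2
    sqrt_w = np.sqrt(wgt)
    best_sse, best_fit = np.inf, None
    for m in m_grid:
        X = np.column_stack([np.ones_like(S), (S - m)**2])
        coef, *_ = np.linalg.lstsq(X * sqrt_w[:, None], t * sqrt_w,
                                   rcond=None)
        resid = t - X @ coef
        sse = (wgt * resid**2).sum()
        if sse < best_sse:
            best_sse, best_fit = sse, (coef[0], coef[1], m)
    return best_fit  # (a, b, m); m = magnification with lowest threshold
```

A bootstrap confidence interval for *m*, like the one reported above, would rerun this fit on resampled data and take percentiles of the resulting *m* values.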