Abstract
To generate accurate representations of 3D scenes from binocular disparity, horizontal disparities must be interpreted using estimates of eye position, which are derived from extraretinal signals and from vertical disparities. Vertical disparities are pooled over an area of about 20 deg. However, previous studies have considered only the 2D configuration of a scene. Here we examined how vertical disparities appropriate for different fixation distances, applied to two superimposed arrays of dots at different depths, affect the interpretation of horizontal disparities in the displays. In experiment 1 we examined the effect of the vertical disparities of arrays of dots at different depths on superimposed arrays without vertical disparities. Observers viewed two arrays of dots with horizontal disparities and vergence corresponding to frontal surfaces. One was a central, horizontal row of dots at the fixation distance (45 cm); the second was a field of dots with added horizontal disparity of up to ±40 arcmin. The vertical disparities of the field of dots corresponded to convergence nearer than or beyond the fixation distance, producing apparent concave or convex curvature in depth around the vertical meridian. Subjects matched a subsequently viewed comparison field of dots with adjustable curvature to the apparent curvature of each of the two test arrays. When the two superimposed test arrays were at the same depth, the horizontal disparities of the dot row were processed in the same way as those of the field. However, disparity processing became increasingly depth-specific as the depth separation of the arrays increased, up to a horizontal disparity of about ±20 arcmin. In experiment 2 we examined vertical-disparity pooling in depth using two superimposed arrays of dots, evenly distributed in 2D, with a different vertical-disparity manipulation applied to each. We conclude that, although vertical disparities are pooled over sizeable 2D regions, they are pooled only over a narrow range of depth.