The finest stereoacuity is known to depend on the disparity of a target relative to other visible points. Here we show that a more important factor in determining sensitivity to displacement can be the disparity of a target relative to an invisible interpolation plane through other neighboring points. We tested the sensitivity of observers to displacements of the central column of a regular grid of dots that was either fronto-parallel or slanted about a vertical axis. We found that subjects’ sensitivity to displacement was better predicted by a model based on the disparity of a target with respect to the grid plane than it was by a model based on disparity with respect to other reference points. In control conditions carried out on one subject, we found that this result did not depend on adaptation to the grid slant because it also occurred when the direction of grid slant varied from trial to trial. Nor did it depend on the perception of slant, because the data were similar for trials on which the grid was perceived as approximately fronto-parallel or markedly slanted. Our results indicate that sensitivity to the depth component of the target displacement is based on disparity relative to a local reference plane.

… cd/m² measured with a Pritchard photometer), and the dots were bright (space-averaged luminance of 6 cd/m² for a 1.6 by 1.6 arcmin lattice). Viewing distance was 1.5 m.

… cd/m², 2-arcmin width, presented on a dark background (0.4 cd/m²). Screen luminances were linearized and dot edges anti-aliased to allow accurate sub-pixel shifts.

… *SD* of the binomial distribution.

When *d*′ is plotted against disparity, the slope of the best-fitting straight line, constrained to pass through the origin, gives a measure of *d*′ per arcmin of disparity (*k*₁). For this subject, *k*₁ = 7.2. Similarly, the zero-disparity data shown in Figure 1 were used, by the same method, to calculate *k*₂, the expected *d*′ per arcmin of lateral displacement. Detectability, *d*′, was defined as *d*′ = √2 *F*⁻¹(*P*), where *P* is the proportion of correct responses and *F*⁻¹ is the inverse of the cumulative Gaussian function. For this subject, *k*₂ = 1.3.

For each target position, we computed the expected *d*′ contribution from disparity (*k*₁*d*, where *d* is target disparity) and from lateral displacement (*k*₂*l*, where *l* is target lateral displacement). According to the signal detection integration model, or ‘*d*′ summation’ (Green & Swets, 1966), the expected detectability of the target, *d*′ₜ, is

*d*′ₜ = √((*k*₁*d*)² + (*k*₂*l*)²)  (Equation 1)

if the disparity and lateral position signals are combined independently. We adjusted these *d*′ estimates to account for cue-independent errors, as if the subject made a random response on a small proportion of trials, *λ* (Wichmann & Hill, 2001). The best-fitting value of this error rate was computed once for the entire data set (*λ* = 0 for the data in Figure 1). *λ* is the only free parameter in the model and is constrained to lie between 0 and 0.06. The *d*′ predictions shown by the solid line have been converted to proportion correct, *P*, using the formula *P* = *F*(*d*′ₜ/√2), where *F* is the cumulative Gaussian function.
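As a concrete illustration, the model can be sketched in a few lines of Python. This is a minimal sketch, not the authors' code: the function names are ours, the defaults are the *k*₁ = 7.2 and *k*₂ = 1.3 reported for this subject, and we assume the standard 2AFC convention P = F(d′/√2), folding the lapse rate λ into the predicted proportion correct.

```python
import math

def cum_gauss(z):
    """Cumulative Gaussian (standard normal CDF), F in the text."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def predicted_p_correct(d, l, k1=7.2, k2=1.3, lam=0.0):
    """d' summation model: independent disparity (d, arcmin) and lateral
    displacement (l, arcmin) signals combine quadratically (Equation 1);
    d' is then converted to 2AFC proportion correct with lapse rate lam."""
    d_prime = math.hypot(k1 * d, k2 * l)      # Equation 1
    p = cum_gauss(d_prime / math.sqrt(2.0))   # P = F(d'/sqrt(2))
    return (1.0 - lam) * p + lam * 0.5        # random response on lapses
```

With both cues at zero the prediction is chance (P = 0.5), and performance grows symmetrically with either cue regardless of its sign, which is exactly the symmetry a fronto-parallel model cannot escape.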

…the *d*′ summation model (Equation 1). This is very similar to the model shown in Figure 1, except that, instead of disparity and lateral displacement, the cues to be combined are now displacement along, and disparity with respect to, the reference plane. Of course, in the case of a fronto-parallel reference plane, there is no difference between these. As in Experiment 1, we calculated (separately for each grid slant) (i) *k*₁, the detectability per arcmin of disparity when the target had no lateral displacement, and (ii) *k*₂, the detectability per arcmin of target displacement along the plane of the grid. The values of *k*₁ and *k*₂ are both lower than in Experiment 1, compatible with the known increase in stereoacuity thresholds in the presence of a slanted reference plane (Kumar & Glaser, 1992). The values of *k*₁ and *k*₂ for the three subjects were: SPM, 6.2 and 1.1; CQ, 4.4 and 2.0; and MDB, 3.1 and 1.3. Then, for each target position, we computed the expected *d*′ contribution from disparity (*k*₁*d*ᵣ, where *d*ᵣ is target disparity with respect to the reference plane, i.e., the plane of the grid) and from the component of lateral displacement (*k*₂*l*ᵣ, where *l*ᵣ is target displacement along the reference plane). Note that the target disparities were larger (±0.4) for subject MDB, but the lateral displacements we tested were the same for all subjects. As before, the expected detectability of the target, *d*′ₜ, is given by Equation 1. As in Figure 2, *d*′ₜ was converted to percentage correct to plot the curves in Figure 1. The solid curves show predicted performance for uncrossed target disparities and the dashed curves predictions for crossed disparities.
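Read this way, the surface model is simply Equation 1 applied in rotated coordinates. The sketch below is our own reading of that decomposition (the rotation by the slant angle is an assumption, and the defaults *k*₁ = 6.2 and *k*₂ = 1.1 are SPM's values, used illustratively):

```python
import math

def surface_model_d_prime(x, z, slant_deg, k1=6.2, k2=1.1):
    """Detectability of a target displacement (x lateral, z depth, in
    arcmin of equivalent disparity) relative to a reference plane slanted
    about a vertical axis. The displacement is decomposed into a component
    l_r along the plane and a disparity d_r orthogonal to it, which then
    combine as in Equation 1."""
    s = math.radians(slant_deg)
    l_r = x * math.cos(s) + z * math.sin(s)    # displacement along the plane
    d_r = -x * math.sin(s) + z * math.cos(s)   # disparity relative to the plane
    return math.hypot(k1 * d_r, k2 * l_r)
```

For slant = 0 this reduces to the fronto-parallel model; for a slanted plane, targets with equal and opposite depth offsets (±z) at the same lateral offset yield different *d*′, reproducing the crossed/uncrossed asymmetry described in the text.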

…target disparities, *d*ᵣ, are computed with respect to the reference plane, not with respect to the fixation plane (or any other fronto-parallel plane). It is this element of the model that gives rise to the asymmetry in the predictions and the dependence on grid slant. Any model that assumes that the disparity and lateral displacement of the target provide independent information will predict a symmetrical pattern of data. To evaluate the two models, we compared the fit of each model to the data using a χ² statistic. For all three subjects, the fit of the surface model is better than that of the fronto-parallel model. It should be pointed out that in no case do the data fall within the 95% confidence interval of the model, although for subject MDB χ² = 32 for the surface model, just outside the confidence interval of 30. The χ² values are as follows: for SPM, fronto-parallel model χ² = 388, surface model χ² = 183, 95% confidence interval χ² = 49 (34 d.f.); for CQ, fronto-parallel model χ² = 44.4, surface model χ² = 42.1, 95% confidence interval χ² = 30.1 (19 d.f.); and for MDB, fronto-parallel model χ² = 73.5, surface model χ² = 32.0, 95% confidence interval χ² = 30.1 (19 d.f.). Values of the one free parameter (the cue-independent miss rate, *λ*) were: for SPM, 0.06; for CQ, 0; and for MDB, 0.05. These were calculated using all the data shown in …

… χ² for each subject and a fronto-parallel model fit.
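The χ² comparison itself is straightforward to reproduce. A minimal sketch (our own helper, assuming each condition contributes n·(p_obs − p_pred)²/(p_pred(1 − p_pred)) under the binomial variance approximation):

```python
def chi_square(n_trials, p_obs, p_pred):
    """Goodness of fit of model-predicted proportions correct to observed
    binomial data, summed over stimulus conditions."""
    return sum(n * (po - pe) ** 2 / (pe * (1.0 - pe))
               for n, po, pe in zip(n_trials, p_obs, p_pred))
```

A lower χ² indicates a better fit; comparing each model's statistic against the 95% point of the χ² distribution for the appropriate degrees of freedom gives the confidence intervals quoted above.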

A nonlinear relationship between *d*′ and cue magnitude, similar to that found in contrast detection experiments, could help explain the deviations from the simple model shown here. Despite its failings, the model we have presented provides a qualitative prediction, indicating the situations in which performance is likely to be better for crossed or uncrossed disparities. The fronto-parallel model, on the other hand, fails to capture these patterns.

…the stereoacuity, *k*₁, and the lateral acuity, *k*₂. The ratio of these two (2.4:1) is much smaller than for subject SPM (5.6:1), and hence the degree of predicted asymmetry is less (the ratio *k*₁:*k*₂ is similar for CQ and MDB, but the stimulus disparity was different for these two subjects, hence the predictions are different, too). Thus, although the data from subject CQ are less useful in distinguishing between rival models than those of the other subjects, they are, nonetheless, compatible with the predictions of the surface model.

…(*k*₁ > *k*₂), and the lateral displacement in each eye is only half of the total disparity. For non-zero lateral displacements, the data would lie slightly above the crosses in Figure 1 for one direction of lateral displacement (because the disparity component would increase the lateral displacement in that eye) and slightly below the crosses for the other direction. This is clearly not a good description of the data.

…the *x* (lateral) and *z* (depth) directions, respectively. Thus, this is an object about 4 cm wide presented at about 150 cm from the observer in different orientations. The left and right eyes’ views of the object as seen in the left-hand plot (+30° slant) are shown beneath it. The differences between the left and right eyes’ views have been exaggerated. The arrows indicate the horizontal width of the surface in the left and right eyes’ views, *w*ₗ and *w*ᵣ. It is possible to define the location of all the features using these monocular widths. Taking the bottom left-hand triangle as the origin in each image, the horizontal location of the *i*th feature, *P*ᵢ, is *x*ₗ*w*ₗ in the left eye’s image and *x*ᵣ*w*ᵣ in the right eye’s. The vertical location of features in each eye is equal under orthographic projection and can be ignored here. In the example shown in Figure 5, *x*ₗ = *x*ᵣ = 0 for triangles on the left-hand edge of the surface, *x*ₗ = *x*ᵣ = 1 for triangles on the right, and *x*ₗ = *x*ᵣ = 0.5 for triangles in the center. For all points on the surface, *x*ₗ = *x*ᵣ. This follows from the fact that, under orthographic projection, the left eye’s image of a surface slanted about a vertical axis is a uniform horizontal expansion/compression of the right eye’s image. The difference (*x*ₗ − *x*ᵣ) therefore provides a measure of disparity with respect to the plane. Figure 6 illustrates this claim.

…*w*ₗ in the left eye and *w*ᵣ in the right eye. The values of *w*ₗ and *w*ᵣ are (1 − *g*/2) and (1 + *g*/2), where *g* is the disparity gradient of the surface. The disparities of the triangles, square, diamond, and circle plotted in Figure 6 are the differences between the normalized horizontal locations of these features in the left and right eyes.
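The normalized-coordinate argument can be made concrete. In this hedged sketch (our own parameterization, not the authors’ notation), a point is described by its position X across the surface (0 at the left edge, 1 at the right) and a depth offset Z from the plane; the monocular widths are 1 − g/2 and 1 + g/2 for disparity gradient g:

```python
def normalized_locations(X, Z, g):
    """Horizontal image positions of a point, normalized by each eye's
    monocular width, under orthographic projection of a plane slanted
    about a vertical axis. On-plane points (Z = 0) satisfy x_l == x_r;
    the difference x_l - x_r measures disparity relative to the plane."""
    w_l, w_r = 1.0 - g / 2.0, 1.0 + g / 2.0
    # each eye's image of the plane is a uniform horizontal scaling, so a
    # point's normalized position is X plus the (width-normalized) shift
    # produced by its depth offset, split equally between the two eyes
    x_l = X + (Z / 2.0) / w_l
    x_r = X - (Z / 2.0) / w_r
    return x_l, x_r
```

Any point lying on the plane yields x_l = x_r whatever the slant, while a point in front of or behind the plane yields a non-zero difference, which is the sense in which (x_l − x_r) measures disparity with respect to the plane.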

*Proceedings of the National Academy of Sciences*, 84, 6297–6301.

*Vision Research*, 41, 3051–3061.

*Journal of Experimental Psychology*, 38, 708–721.

*Vision Research*, 33, 2189–2201.

*Journal of Neuroscience*, 19, 1981–2088.

*Vision Research*, 25, 583–588.

*Vision Research*, 39, 3057–3069.

*Current Biology*, 12, 825–828.

*Signal detection theory and psychophysics*. New York: John Wiley & Sons.

*Journal of the Optical Society of America A*, 8, 377–385.

*Vision Research*, 32, 1667–1676.

*Psychological Review*, 107, 6–38.

*Proceedings of the Royal Society of London (B)*, 204, 301–328.

*Vision Research*, 30, 879–891.

*Perception*, 22, 1415–1426.

*Nature*, 315, 402–404.

*Vision Research*, 27, 285–294.

*Vision Research*, 30, 1781–1791.

*Vision Research*, 24, 1063–1073.

*Readings in computer vision* (pp. 63–72). Los Altos, CA: Kaufmann.

*Vision Research*, 44, 367–376.

*Readings in computer vision* (pp. 80–86). Los Altos, CA: Kaufmann.

*Nature Neuroscience*, 5, 472–478.

*Journal of Vision*, 2(9), 597–607, http://journalofvision.org/2/9/2/, doi:10.1167/2.9.2.

*Perception and Psychophysics*, 19, 375–382.

*Experimental Brain Research*, 36, 585–597.

*Perception and Psychophysics*, 63, 1293–1313.