Abstract
The visual system can estimate binocular disparity in a wide variety of viewing situations. Disparity estimation breaks down, however, when the disparity gradient (the rate at which disparity changes with changes in spatial position) is high. This breakdown has been called the disparity-gradient limit. There is considerable physiological and psychophysical evidence that disparity estimation is done by computing local correlations between the two eyes' images. When the two eyes' images are quite different, as they are when the disparity gradient is high, local correlation becomes low. We investigated the possibility that the disparity-gradient limit is a byproduct of estimating disparity by local cross-correlation. We examined observers' ability to extract a disparity signal as a function of the disparity gradient and compared their performance with that of a local cross-correlator. The stimulus was a disparity-defined sawtooth grating presented in a volume of noise elements. Observers indicated whether the relative phase of the grating stimulus was −90 or +90 deg. Threshold was defined as the proportion of noise that yielded 71% correct performance. We also varied the stimulus distance, which changes the perceived slant but not the disparity gradient. Variation in threshold was systematically related to the disparity gradient and not to spatial frequency, disparity amplitude, or perceived slant. The model exhibited the same behavior as observers. Correctly creating vertical disparities in the stimulus had little effect on the results. We conclude that the disparity-gradient limit is a consequence of estimating disparity by local cross-correlation.
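The core idea above—that interocular correlation collapses when the disparity gradient is high—can be illustrated with a minimal one-dimensional sketch (not the authors' model; all function names, window sizes, and the use of NumPy here are assumptions for illustration). A windowed normalized cross-correlator searches over horizontal shifts for the best match between the two eyes' images; when the "right eye's" image is a disparity ramp (disparity changing linearly with position), no single shift aligns the whole window, so the peak correlation falls as the gradient grows:

```python
import numpy as np

def local_xcorr_disparity(left, right, x, win=32, max_d=60):
    """Estimate disparity at position x by finding the shift of the
    right-image window that maximizes the normalized cross-correlation
    with the left-image window. Returns (best shift, peak correlation)."""
    L = left[x - win // 2 : x + win // 2]
    best_d, best_r = 0, -np.inf
    for d in range(-max_d, max_d + 1):
        R = right[x - win // 2 + d : x + win // 2 + d]
        r = np.corrcoef(L, R)[0, 1]  # normalized correlation of the two windows
        if r > best_r:
            best_r, best_d = r, d
    return best_d, best_r

# Synthetic 1-D "images": the right image is the left image warped by a
# disparity that varies linearly with position (a disparity ramp).
rng = np.random.default_rng(0)
n = 512
xs = np.arange(n)
left = rng.standard_normal(n)

for gradient in (0.0, 0.5, 1.5):  # disparity change per pixel of position
    disparity = gradient * (xs - n // 2)
    # Build the right image by sampling the left image at x - disparity(x).
    right = np.interp(xs - disparity, xs, left)
    d, r = local_xcorr_disparity(left, right, x=n // 2 + 40)
    print(f"gradient={gradient}: peak correlation={r:.2f}")
```

At a gradient of 0 the two windows match perfectly at the true shift; as the gradient approaches and exceeds 1, the windows decorrelate at every shift, mirroring the breakdown in disparity estimation that the abstract attributes to the disparity-gradient limit.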