Bart Farell; What and where in the computation of relative disparity. Journal of Vision 2004;4(11):37. doi: https://doi.org/10.1167/4.11.37.
Humans are highly sensitive to relative disparities and poor at judging absolute disparities. Yet relative disparity is thought to be computed by differencing absolute disparities. An analog is the fine chromatic discriminations possible despite the color blindness of individual cone classes. Absolute disparity confounds the disparity of the stimulus with the eyes' vergence state, and differencing eliminates the ocular component. If relative disparity is simply a difference between absolute disparities, stimulus parameters other than disparity shouldn't matter, provided the precision of the absolute disparity signal is unaffected. Sensitivity to relative disparity should depend on where the stimuli are, not on what they are. We test this presumption here, and find it wanting. A central Gabor patch and an annular grating had equal spatial frequencies and independently variable orientations. Observers judged the central patch as ‘near’ or ‘far’ relative to the zero-disparity annulus. Presentations, lasting 150 ms, were preceded by nonius lines.
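The differencing account can be sketched in a few lines of Python. This is only an illustration of the arithmetic, not the study's model; the function names and numeric values are assumptions introduced here.

```python
# Sketch: why differencing absolute disparities cancels the vergence component.
# All names and values are illustrative, not taken from the study.

def absolute_disparity(stimulus_disparity, vergence_error):
    """Absolute disparity as registered on the retinas: the stimulus's
    true disparity confounded with the eyes' vergence state."""
    return stimulus_disparity + vergence_error

def relative_disparity(stim_a, stim_b, vergence_error):
    """Difference of two absolute disparities; the shared vergence
    component cancels, leaving only the stimulus difference."""
    return (absolute_disparity(stim_a, vergence_error)
            - absolute_disparity(stim_b, vergence_error))

# The same relative disparity results regardless of vergence state:
for vergence in (-0.5, 0.0, 2.0):  # arbitrary vergence errors (deg)
    assert abs(relative_disparity(0.1, 0.0, vergence) - 0.1) < 1e-9
```

Because the vergence term appears identically in both absolute disparities, it subtracts out; on this account, only the precision of the two absolute signals should limit relative-disparity sensitivity.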
When both gratings were vertical, stereo discrimination thresholds were very low. Rotating the annulus 45 deg elevated thresholds roughly fourfold. Yet when the central grating was also rotated to be parallel to the annulus, thresholds fell to the low value found when both were vertical. Hence, sensitivity to relative disparity depends on relative orientation, independent of absolute disparity. Orientation differences as small as 5–10 deg raised thresholds; those of 15–20 deg doubled them. Thus, relative disparity is computed within orientation channels, and sensitivity to it depends on what the stimuli are, as defined by orientation, not just on where they are located in space.