Abstract
Retinal disparity is usually thought of as a 2D vector representing the deviation from retinal correspondence. It is assumed to decompose naturally into two orthogonal components, called horizontal and vertical disparity, and an extensive literature has shown these components to be processed in fundamentally different ways. However, when eye movements and non-identical correspondence patterns are taken into account, this simple definition of retinal disparity breaks down. In general, neither horizontal disparity, nor vertical disparity, nor indeed the disparity vector itself, is a well-defined entity. Retinally, a binocular target is represented by one 2D position vector for each eye, i.e., by four dimensions in total. If disparity is taken to be the difference between these projection vectors and a retinal correspondence pattern, the resulting entity has eight degrees of freedom, four more than a retinally located 2D disparity vector would have. Only when empirical retinal correspondence obeys certain constraints can disparity be reduced to such a vector. Even then, it cannot simply be split into retinal horizontal and vertical components, because moving eyes change the relationship between retinal locations and epipolar projection geometry. A practical consequence of these theoretical issues is demonstrated using the induced effect as an example. We also review the experimental disparity literature, comparing the coordinate systems and the effective retinal disparity stimuli used across studies.
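To make the dimensional bookkeeping concrete, the following is a minimal sketch of one way the stated count can arise; the symbols \(\mathbf{p}_L, \mathbf{p}_R\) (the two retinal projection vectors), \((\mathbf{q}_L, \mathbf{q}_R)\) (a correspondence pair), and \(c\) (a correspondence map) are our illustrative notation, not necessarily that of the paper:

\[
(\mathbf{p}_L,\ \mathbf{p}_R) \in \mathbb{R}^2 \times \mathbb{R}^2
\qquad \text{(binocular projection: four dimensions)}
\]
\[
\delta = \bigl(\mathbf{q}_L,\ \mathbf{q}_R,\ \mathbf{p}_L - \mathbf{q}_L,\ \mathbf{p}_R - \mathbf{q}_R\bigr)
\qquad \text{(anchor pair plus deviation: eight degrees of freedom)}
\]
\[
\delta \ \longrightarrow\ \bigl(\mathbf{p}_L,\ \mathbf{p}_R - c(\mathbf{p}_L)\bigr)
\qquad \text{(four degrees of freedom)}
\]

The last line illustrates the collapse to a retinally located 2D disparity vector, which requires constraints of the kind the abstract refers to: correspondence must reduce to a single-valued map \(c:\mathbb{R}^2 \to \mathbb{R}^2\), and the anchor must be fixed, e.g. at \(\mathbf{q}_L = \mathbf{p}_L\).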