Expanding the definition of the classical horopter has turned it from a tool for visualizing the geometry of correspondence into a quantitative measure of binocular alignment, useful for a range of questions concerning the interaction of eye movements and vision.
We have used this new tool to show why the human oculomotor system breaks Listing's law during vergence. It has been argued that the use of L2 instead of Listing's law during near vision improves binocular alignment (Tweed, 1997; Schreiber et al., 2001). This improved alignment maximizes the benefit of using the epipolar constraint for the matching problem, without the accompanying cost of having to compute the gaze-dependent location of the epipolar lines on the retina (Schreiber et al., 2001).
Using the extended horopter, we have shown how the change from Listing's law to L2 expands the utility of binocular correspondence, by keeping the shape of the horopter similar across eye position changes, and by increasing the fusable part of visual space for a given disparity limit.
We have also demonstrated a rotation of the classical theoretical horopter out of the visual plane with targets in the midsagittal plane when the eyes follow Listing's law (Figure 5), and shown that L2 eliminates this rotation.
This realignment of the retinal images by changes in ocular torsion has significance beyond simple image alignment for matching. Current models of stereoscopic slant perception (Backus, Banks, van Ee, & Crowell, 1999; Banks, Hooge, & Backus, 2001) use vertical disparity signals, in the form of the vertical size ratio (VSR) or the gradient of vertical disparities, as a cue to slant. In eccentric gaze, Listing's law changes cyclovergence with both vertical and horizontal changes of eye position. A change in cyclovergence creates a gradient of vertical disparities along the horizontal retinal meridian, and a gradient of horizontal disparity along the vertical meridian. These gradients complicate the interpretation of the overall disparity field, but they could also be used to determine the viewing situation and compute an azimuth signal. Backus et al. (1999) explicitly ignored the effects of ocular torsion in their study of slant about a vertical axis. Banks et al. (2001), on the other hand, looked at signals influencing the perception of slant about a horizontal axis and found that ocular torsion is not taken into account at all.
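The geometry behind these gradients is simple enough to sketch in a few lines of code. The following minimal example is our own illustration, not the paper's machinery: it uses a flattened, small-angle model of the retina, rotates the two eyes' images in opposite directions by a small cyclovergence angle, and shows that the vertical disparity component then grows linearly along the horizontal meridian, which is exactly the kind of gradient discussed above.

```python
import numpy as np

def cyclovergence_disparity(points_deg, cyclovergence_deg):
    """Disparity field created by pure cyclovergence.

    points_deg:        (N, 2) retinal positions (x = horizontal, y = vertical),
                       in degrees, assumed identical in the two eyes before the
                       cyclorotation (flattened, small-angle retina).
    cyclovergence_deg: difference in torsion between the eyes; each eye's image
                       is rotated by half of it, in opposite directions.
    Returns the (N, 2) disparity vectors (right-eye minus left-eye position).
    """
    half = np.deg2rad(cyclovergence_deg) / 2.0

    def rot(theta):
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    left = points_deg @ rot(+half).T     # left-eye image rotated one way
    right = points_deg @ rot(-half).T    # right-eye image rotated the other way
    return right - left

# Points along the horizontal retinal meridian (y = 0):
pts = np.column_stack([np.linspace(-10.0, 10.0, 5), np.zeros(5)])
d = cyclovergence_disparity(pts, cyclovergence_deg=2.0)
print(np.round(d, 3))
# The vertical component d[:, 1] grows linearly with x: a gradient of vertical
# disparity along the horizontal meridian.  Points on the vertical meridian
# show the complementary horizontal-disparity gradient.
```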
If the eyes follow L2 instead of Listing's law, Helmholtz (1867) cyclovergence is kept at zero for all gaze directions and vergence angles. This explains why cyclovergence does not have to be taken into account for perceiving slant about a horizontal axis, and it also justifies ignoring torsion in models of azimuth estimation and slant perception. In effect, the motor system driving the eyes assures that the visual system can rely on the simple assumption of zero cyclovergence.
This motor strategy, then, is what makes it possible for the visual system to ignore ocular torsion in its perceptual computations. For this strategy to work, the motor program needs a mechanism to keep itself calibrated. It has been demonstrated recently that the control of ocular torsion can indeed be changed by a cyclodisparity stimulus (Maxwell, Graf, & Schor, 2001; Maxwell & Schor, 1999; Schor, Maxwell, & Graf, 2001). This suggests a view in which ocular torsion programs are dynamically controlled to optimize binocular image alignment and simplify the calculations necessary for veridical slant perception.
There are at least two further theoretical considerations arising from the work presented here.
As we have stated in the methods section, the attempt to define a disparity vector for each horopter point relative to its pair of corresponding points reveals a weakness in the concept of retinal disparity. This weakness disappears when corresponding points are identical, but for any other empirical pattern of retinal correspondence there is no unambiguous definition of absolute disparity relative to that pattern. Specifically, suppose C is the transformation producing the coordinates of the retinal point R in the right eye that corresponds to point L in the left eye, that is, R = C(L). For an object O in space that projects onto o_l and o_r in the left and right eyes, respectively, there then exist two equally appropriate retinal disparities relative to the correspondence mapping in the two eyes, namely, d_l = o_l − C⁻¹(o_r) for the left eye and d_r = o_r − C(o_l) for the right eye. If C is an identity mapping, that is, if corresponding points are identical points, these two definitions coincide except for sign and represent the common definition of retinal disparity as the difference in visual angle between the two projections. But for a general correspondence function C, the two eyes' disparity vectors differ in size and direction. In other words, there is no unambiguous retinal disparity relative to a nonidentity correspondence mapping.
We have avoided this theoretical problem here by defining disparity as the average of the two eyes' disparity vectors. Furthermore, as long as disparity direction is ignored (i.e., as long as Panum's areas are isotropic around corresponding points), our definition of horopter points ensures that the disparity vectors have the same length in either eye. More work is needed to determine the implications of this ambiguity for the concept of retinal disparity as a physiological signal used in the visual system.
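To make the ambiguity concrete, the sketch below is our own illustrative example: the shear matrix standing in for C is hypothetical, not the empirically measured correspondence pattern, and the final averaging is only one plausible reading of the convention described above. It computes d_l and d_r for a non-identity correspondence mapping and shows that they differ in length and direction.

```python
import numpy as np

# Hypothetical correspondence mapping C: a left-eye retinal point L maps to
# its corresponding right-eye point R = C(L).  Here C is a small horizontal
# shear, loosely in the spirit of the Hering-Hillebrand deviation; the
# numbers are illustrative only.
A = np.array([[1.02, 0.03],
              [0.00, 1.00]])

def C(p):        # left-eye point -> corresponding right-eye point
    return A @ p

def C_inv(p):    # right-eye point -> corresponding left-eye point
    return np.linalg.solve(A, p)

# Projections of some object O (retinal coordinates in degrees).
o_l = np.array([5.0, 2.0])   # left-eye projection of O
o_r = np.array([5.4, 2.1])   # right-eye projection of O

d_l = o_l - C_inv(o_r)       # disparity expressed in the left eye
d_r = o_r - C(o_l)           # disparity expressed in the right eye

print("d_l:", np.round(d_l, 4), " |d_l| =", round(np.linalg.norm(d_l), 4))
print("d_r:", np.round(d_r, 4), " |d_r| =", round(np.linalg.norm(d_r), 4))

# For an identity C, d_l and d_r would be equal and opposite; for this
# non-identity C they differ in both length and direction.  A single
# disparity value can be formed by averaging the two, e.g. their lengths:
print("mean length:", round(0.5 * (np.linalg.norm(d_l) + np.linalg.norm(d_r)), 4))
```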
Secondly, the use of a disparity metric in the definition of our extended horopter implies a retinal coordinate system for measuring the length of this disparity vector and for constraining it to a fusional zone. While the shape of our extended horopter, that is, the location of its points in space, is independent of the coordinate systems used to describe either eye movements or retinal locations, the same is not true for the disparity vectors. The value of their length and the meaning of their components, that is, which directions from a corresponding point we call horizontal and vertical, depend on the retinal coordinate system used. What, then, is the correct coordinate system?
Horizontal and vertical disparities are thought to code different aspects of the geometry of the visual scene. This is based on the geometrical fact that motion of a target in depth shifts its retinal projections along epipolar lines. For static eyes, this means that the depth of objects is coded as retinal disparity in a particular retinal direction, while disparities in the orthogonal direction cannot change at all for real targets (ignoring added lenses and equivalent distortions). Eye movements change the arrangement of epipolar lines, sliding and rotating them on the retina. In general, an object's projections that fall on the epipolar lines for the current eye position will not fall on the epipolar lines for a different eye position. While there is ample evidence for differential processing of horizontal and vertical disparities, the retinal coordinate system in which these signals are coded, and whether that coordinate system is static or changes with eye position, are presently unknown.
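In camera terms, the constraint referred to here is the familiar epipolar constraint. The following sketch is our own simplification, treating each eye as a pinhole camera; the baseline, rotation, and example point are arbitrary placeholders. It builds the essential matrix for a given binocular posture and checks that the two projections of a real point satisfy the constraint; changing the eye rotation changes the matrix and therefore the family of epipolar lines on which matches must lie.

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rot_y(deg):
    """Rotation about the y axis (a horizontal eye rotation, z pointing forward)."""
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(a), 0.0, np.cos(a)]])

# Pose of the right eye relative to the left: centre 6.5 cm to the right,
# rotated horizontally by 5 deg (a vergence-like rotation).  A point with
# left-eye coordinates X_l has right-eye coordinates X_r = R @ X_l + t.
c = np.array([0.065, 0.0, 0.0])      # right-eye centre in left-eye coords (m)
R = rot_y(-5.0)
t = -R @ c

E = skew(t) @ R                      # essential matrix for this posture

# Take an arbitrary point in space and project it in both eyes
# (normalized image coordinates, i.e. directions scaled to z = 1).
X_l = np.array([0.10, 0.05, 1.00])   # point in left-eye coordinates
X_r = R @ X_l + t                    # same point in right-eye coordinates
x_l, x_r = X_l / X_l[2], X_r / X_r[2]

# Projections of any real point satisfy the epipolar constraint
# x_r^T E x_l = 0, so matches can only slide along epipolar lines
# (e.g. when the target moves in depth).
print(float(x_r @ E @ x_l))          # ~0 up to rounding error

# A different eye posture gives a different E and hence a different
# epipolar line l_r = E @ x_l on which the right-eye match must lie.
print(E @ x_l)
```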
A related empirical question raised by our empirical horopter is the shape and extent of Panum's fusional areas across the retina. Foveally, the limits for stereomatching have been reported to be nonuniform on the retina (Stevenson & Schor, 1997), with horizontal disparity limits of about 1 deg and vertical limits of about 0.5 deg. These limits, when measured for a given spatial frequency, are independent of retinal eccentricity (Schor, Wood, & Ogawa, 1984; Wilson, Blake, & Pokorny, 1988), but they scale with the spatial period of the stimulus for spatial frequencies lower than 2.5 cpd (Schor et al., 1984; Schor, Wesson, & Robertson, 1986). The upper cutoff spatial frequency of the visual system decreases with retinal eccentricity, falling below 2.5 cpd for eccentricities above 10 deg. By this reasoning, Panum's area is constant out to 10 deg and then increases with larger retinal eccentricities, in proportion to the period of the spatial frequency cutoff.
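The rule in the last two sentences is easy to state explicitly. The sketch below is our own illustrative implementation, not a model fitted to data: the foveal limits are the Stevenson and Schor (1997) values quoted above, the 10 deg knee follows the reasoning in the text, and the assumed 1/eccentricity falloff of the cutoff frequency beyond the knee is a placeholder.

```python
import numpy as np

def panum_limits(ecc_deg, foveal_limits_deg=(1.0, 0.5), knee_ecc_deg=10.0):
    """Illustrative Panum's fusional limits (horizontal, vertical) in deg.

    Implements the rule described in the text: the limits stay at their
    foveal values out to ~10 deg of eccentricity and then grow in proportion
    to the spatial period of the cutoff frequency.  The cutoff is assumed
    here to fall off as 1/eccentricity beyond the knee, so the period (and
    with it the limit) grows linearly with eccentricity.
    """
    ecc = np.atleast_1d(np.asarray(ecc_deg, dtype=float))
    scale = np.maximum(ecc, knee_ecc_deg) / knee_ecc_deg
    h, v = foveal_limits_deg
    return np.column_stack([h * scale, v * scale])

print(panum_limits([0, 5, 10, 20, 40]))
# -> (1.0, 0.5) deg out to 10 deg, doubling by 20 deg, quadrupling by 40 deg.
```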
For our simulations, however, we chose uniform and constant disparity limits instead, partly because the direction of vertical disparities in the periphery would be unclear, given the coordinate-system problem described above. More importantly, had we increased the size of Panum's area with eccentricity, this increase in the disparity limits would have more than matched the increase in retinal disparity at eccentric retinal locations. As a result, there would have been a horopter point projecting within Panum's area everywhere, and the extended horopter would have covered all of the visual field. This would have made it impossible to compare the size of the fusable surface across eye movement patterns.
But even when every part of the visual horopter surface is fusable, stereo acuity is reduced as the stimulus moves away from perfect correspondence (Badcock & Schor, 1985), meaning that larger disparities are detrimental to perception even if they fall within Panum's area. So regardless of the size and shape of Panum's area, there is value in reducing retinal disparity by using L2 rather than Listing's law.