Research Article | December 2009
Latitude and longitude vertical disparities
Jenny C. A. Read, Graeme P. Phillipson, Andrew Glennerster
Journal of Vision December 2009, Vol. 9(13):11. doi: https://doi.org/10.1167/9.13.11
Abstract

The literature on vertical disparity is complicated by the fact that several different definitions of the term “vertical disparity” are in common use, often without a clear statement about which is intended or a widespread appreciation of the properties of the different definitions. Here, we examine two definitions of retinal vertical disparity: elevation-latitude and elevation-longitude disparities. Near the fixation point, these definitions become equivalent, but in general, they have quite different dependences on object distance and binocular eye posture, which have not previously been spelt out. We present analytical approximations for each type of vertical disparity, valid for more general conditions than previous derivations in the literature: we do not restrict ourselves to objects near the fixation point or near the plane of regard, and we allow for non-zero torsion, cyclovergence, and vertical misalignments of the eyes. We use these expressions to derive estimates of the latitude and longitude vertical disparities expected at each point in the visual field, averaged over all natural viewing. Finally, we present analytical expressions showing how binocular eye position—gaze direction, convergence, torsion, cyclovergence, and vertical misalignment—can be derived from the vertical disparity field and its derivatives at the fovea.

Introduction
Because the two eyes are set apart in the head, the images of an object fall at different positions in the two eyes. The resulting binocular disparity is, in general, a two-dimensional vector quantity. Since Helmholtz (1925), psychophysicists have divided this vector into two components: horizontal and vertical disparities. There is now an extensive literature discussing how vertical disparities influence perception and how this may be achieved within the brain, for example Backus, Banks, van Ee, and Crowell (1999), Banks, Backus, and Banks (2002), Berends, van Ee, and Erkelens (2002), Brenner, Smeets, and Landy (2001), Cumming (2002), Duke and Howard (2005), Durand, Zhu, Celebrini, and Trotter (2002), Friedman, Kaye, and Richards (1978), Frisby et al. (1999), Garding, Porrill, Mayhew, and Frisby (1995), Gillam and Lawergren (1983), Kaneko and Howard (1997b), Longuet-Higgins (1982), Matthews, Meng, Xu, and Qian (2003), Ogle (1952), Porrill, Mayhew, and Frisby (1990), Read and Cumming (2006), Rogers and Bradshaw (1993, 1995), Westheimer (1978), and Williams (1970). Yet the literature is complicated by the fact that the term “vertical disparity” is used in several different ways by different authors. The first and most fundamental distinction is whether disparity is defined in a head-centric or retino-centric coordinate system. The second issue concerns how disparity, as a two-dimensional vector quantity, is divided up into “vertical” and “horizontal” components. 
In a head-centric system, vertical and horizontal disparities are defined in the optic array, that is the set of light rays passing through the nodal points of each eye. One chooses an angular coordinate system to describe the line of sight from each eye to a point in space; vertical disparity is then defined as the difference in the elevation coordinates. If Helmholtz coordinates are used (Figure 1A, Howard & Rogers, 2002), then for any point in space, the elevation is the same from both eyes. Thus, head-centric Helmholtz elevation disparity is always zero for real objects (Erkelens & van Ee, 1998). This definition is common in the physiology literature, where “vertical disparity” refers to a vertical off-set between left and right images on a frontoparallel screen (Cumming, 2002; Durand, Celebrini, & Trotter, 2007; Durand et al., 2002; Gonzalez, Justo, Bermudez, & Perez, 2003; Stevenson & Schor, 1997). In this usage, a “vertical” disparity is always a “non-epipolar” disparity: that is, a two-dimensional disparity that cannot be produced by any real object, given the current eye position. A different definition uses Fick coordinates to describe the angle of the line of sight (Figure 1B). With this definition, vertical disparities occur in natural viewing (e.g., Backus & Banks, 1999; Backus et al., 1999; Bishop, 1989; Hibbard, 2007; Rogers & Bradshaw, 1993), and so non-zero vertical disparities are not necessarily non-epipolar. 
Figure 1
 
Two coordinate systems for describing head-centric or optic-array disparity. Red lines are drawn from the two nodal points L, R to an object P. (A) Helmholtz coordinates. Here, we first rotate up through the elevation angle λ to get us into the plane LRP, and the azimuthal coordinate ζ rotates the lines within this plane until they point to P. The elevation is thus the same for both eyes; no physical object can have a vertical disparity in optic-array Helmholtz coordinates. (B) Fick coordinates. Here, the azimuthal rotation ζ is applied within the horizontal plane, and the elevation λ then lifts each red line up to point at P. Thus, elevation is in general different for the two lines, meaning that object P has a vertical disparity in optic-array Fick coordinates.
Head-centric disparity is independent of eye position, and thus mathematically tractable. However, all visual information arrives on the retinas, and it seems clear that the brain's initial encoding of disparity is retinotopic (Gur & Snodderly, 1997; Read & Cumming, 2003). Accordingly, the appropriate language for describing the neuronal encoding of disparity must be retinotopic (Garding et al., 1995). The retinal disparity of an object is the two-dimensional vector linking its two images in the two retinas. This depends both on the scene viewed and also on the binocular posture of the eyes viewing it. Thus, two-dimensional retinal disparity can be used to extract eye posture as well as scene structure (Garding et al., 1995; Longuet-Higgins, 1981). It is more complicated to handle than head-centric disparity, but it contains more information. 
For retinal disparity, as well, two definitions of vertical disparity are in common use within the literature, stemming from the difficulty of defining a “vertical” direction on a spherical eyeball. One possibility is to define “vertical” as the projection of vertical lines in space onto the retina, so that a line of constant “vertical” position on the retina is the projection of a horizontal line in space. This produces lines of elevation longitude on the spherical eyeball ( Figures 2BD and 3AC). We shall refer to the corresponding vertical coordinate as elevation longitude, η; it is called inclination by Bishop, Kozak, and Vakkur (1962) and is analogous to the Helmholtz coordinates of Figure 1A. Equivalently, one can project the hemispherical retina onto a plane and take the vertical Cartesian coordinate on the plane, y (Figure 2A). This is usual in the computer vision literature. Since there is a simple one-to-one mapping between these two coordinates, y = tanη, we shall not need to distinguish between them in this paper. The natural definition of vertical disparity within this coordinate system is then the difference between the elevation longitude of the images in the two eyes. Papers that have defined vertical disparity to be differences in either η or y include Garding et al. (1995), Hartley and Zisserman (2000), Longuet-Higgins (1982), Mayhew (1982), Mayhew and Longuet-Higgins (1982), and Read and Cumming (2006). 
Figure 2
 
Different retinal coordinate systems. (A) Cartesian planar. Here, x and y refer to position on the virtual plane behind the retina; the “shadow” shows where points on the virtual plane correspond to on the retina, i.e., where a line drawn from the virtual plane to the center of the eyeball intersects the eyeball. (B) Azimuth longitude/elevation longitude. (C) Azimuth longitude/elevation latitude. (D) Azimuth latitude/elevation longitude. (E) Azimuth latitude/elevation latitude. For the (B–E) angular coordinate systems, lines of latitude/longitude are drawn at 15° intervals between ±90°. For the (A) Cartesian system, the lines of constant x and y are at intervals of 0.27 = tan15°. Lines of constant x are also lines of constant α, but lines that are equally spaced in x are not equally spaced in α. In this paper, we use the sign convention that positive x, α, β represent the left half of the retina, and positive y, η, κ represent the top. This figure was generated in Matlab by the program Fig_DiffRetCoords.m in the Supplementary material.
Figure 3
 
Two definitions of vertical retinal disparity. (AB) A point in space, P, projects to different positions I L and I R on the two retinas. (CD) The two retinas are shown superimposed, with the two half-images of P shown in red and blue for the left and right retinas, respectively. In (AC), the retinal coordinate system is azimuth longitude/elevation longitude. In (BD), it is azimuth longitude/elevation latitude. Point P and its images I L and I R are identical between (AC) and (BD); the only difference between left and right halves of the figure is the coordinate system drawn on the retinas. The eyes are converged 30° fixating a point on the midline: X = 0, Y = 0, Z = 11. The plane of gaze, the XZ plane, is shown in gray. Lines of latitude and longitude are drawn at 15° intervals. Point P is at X = −6, Y = 7, Z = 10. In elevation-longitude coordinates, the images of P fall at η L = −30°, η R = −38°, so the vertical disparity η Δ is −8°. In elevation latitude, κ L = −27°, κ R = −34°, and the vertical disparity κ Δ = −6°. This figure was generated by Fig_VDispDefinition.m in the Supplementary material.
An alternative approach is to define the vertical coordinate as being lines of latitude on the sphere (Figures 2CE and 3BD). This is analogous to the Fick coordinates of Figure 1B. We shall refer to the corresponding vertical coordinate as elevation latitude, κ (see Table A2 in Appendix A for a complete list of symbols used in this paper). Studies defining vertical disparity as the difference in elevation latitude κ include Barlow, Blakemore, and Pettigrew (1967), Bishop et al. (1962), and Howard and Rogers (2002). 
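To make the two retinal coordinate systems concrete, the short MATLAB fragment below converts a single direction, expressed in a gaze-centered basis, into azimuth longitude, elevation longitude, and elevation latitude. It is a minimal sketch rather than part of the Supplementary material, and it assumes the sign conventions described above (x toward the observer's left, y up, z along the optic axis, with the image inverting through the nodal point); the example direction is arbitrary. It also illustrates that the two vertical coordinates are related by tan κ = tan η cos α.

% Convert one direction into the retinal coordinates used in this paper (a
% minimal sketch, not part of the Supplementary material). The direction d is
% expressed in a gaze-centered basis: x toward the observer's left, y up, z
% along the optic axis. Because the image inverts through the nodal point,
% positive alpha/eta/kappa correspond to objects to the right of, and below,
% the optic axis (i.e., the left and top of the retina).
d = [-0.2 0.3 1.0];                        % example direction (arbitrary)
alpha = atan2(-d(1), d(3));                % azimuth longitude
eta   = atan2(-d(2), d(3));                % elevation longitude; planar y = tan(eta)
kappa = atan2(-d(2), hypot(d(1), d(3)));   % elevation latitude
% The two vertical coordinates are related by tan(kappa) = tan(eta)*cos(alpha):
fprintf('eta %.2f deg, kappa %.2f deg, tan(eta)*cos(alpha) gives %.2f deg\n', ...
    eta*180/pi, kappa*180/pi, atan(tan(eta)*cos(alpha))*180/pi);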
Both definitions of vertical disparity are perfectly valid and in common use. The trouble is that statements that are true of one are not true of the other. For example, elevation-longitude vertical disparity is always zero when the eyes are in primary position, but elevation-latitude vertical disparity is not. Horizontal rotations of the eyes away from primary position change the elevation longitude to which an object projects, but leave its elevation latitude unaltered. As a consequence, elevation-latitude vertical disparity is independent of convergence, whereas elevation-longitude vertical disparity increases as the eyes converge. Elevation-latitude vertical disparity is always zero for objects on the mid-sagittal plane, but elevation-longitude vertical disparity is not. Yet despite these crucial differences, papers on vertical disparity often do not spell out which definition they are employing. From personal experience, we believe that the differences between the definitions are not widely appreciated, perhaps because the two definitions become equivalent at the fovea. A key aim of this paper is to lay out the similarities and differences between both definitions in a single convenient reference. 
A second aim is to obtain analytical expressions for both types of disparity that are as general as possible. Most mathematical treatments in the psychology literature make simplifying assumptions, e.g., that the eyes have no torsion, that the object is in the plane of gaze, and that the eyes are correctly fixating on a single point in space. Conversely the computer vision literature allows for completely general camera positions but does not provide explicit expressions for vertical disparity. Here, we derive unusually general explicit expressions for both types of vertical disparity. We still use the small baseline approximation required by previous treatments and also small vergence angles. However, we are able to produce approximate expressions that allow for small amounts of cycloversion, cyclovergence, and vertical misalignments between the eyes. 
These general expressions allow us to derive simple expressions for the expected pattern of vertical disparity across the visual field, averaged across all scenes and eye positions. A few previous studies have attempted to estimate the two-dimensional distribution of disparities encountered in normal viewing (Hibbard, 2007; Liu, Bovik, & Cormack, 2008; Read & Cumming, 2004), but these have all averaged results across the entire visual field. This study is the first to examine the expected vertical disparity as a function of position in the visual field, and we hope it will be useful to physiologists studying the neuronal encoding of vertical disparity. 
In the vicinity of the fovea, the distinction between latitude and longitude becomes immaterial. The two definitions of vertical disparity are therefore equivalent. We show how eye position can be read off very straightforwardly from this unified vertical disparity. We derive simple analytic expressions giving estimates of each eye position parameter in terms of the vertical disparity at the fovea and its rate of change there. These are almost all implicit in the existing literature (Backus & Banks, 1999; Backus et al., 1999; Banks et al., 2002; Banks, Hooge, & Backus, 2001; Kaneko & Howard, 1996, 1997b; Koenderink & van Doorn, 1976; Rogers & Bradshaw, 1993, 1995; Rogers & Cagenello, 1989) but are brought together here within a single clear set of naming conventions and definitions so that the similarities and differences between definitions can be readily appreciated. 
Methods
All simulations were carried out in Matlab 7.8.0 R2009a ( www.mathworks.com), using the code made available in the Supplementary material. Most of the figures in this paper can be generated by this code. To produce a figure, first download all the Matlab (.m) files in the Supplementary material to a single directory. In Matlab, move to this directory and type the name of the file specified in the figure legend. 
Results
General expressions for elevation-longitude and elevation-latitude vertical disparities
Figure 3 shows the two definitions of retinal vertical disparity that we consider in this paper. A point P in space is imaged to the points I L and I R in the left and right retinas, respectively ( Figure 3AB). Figures 3C and 3D show the left and right retinas aligned and superimposed, so that the positions of the images I L and I R can be more easily compared. The left (AC) and right-hand panels (BD) of Figure 3 are identical apart from the vertical coordinate system drawn on the retina: Figure 3AC shows elevation longitude η, and Figure 3BD shows elevation latitude κ. The vertical disparity of point P is the difference between the vertical coordinates of its two half-images. For this example, the elevation-longitude vertical disparity of P is η Δ = −8°, while the elevation-latitude disparity is κ Δ = −6°. 
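The numbers in this example can be checked directly. The fragment below recomputes them from the projection geometry; it is an independent sketch rather than the authors' Fig_VDispDefinition.m, and it assumes head axes with X toward the observer's left, Y up, and Z straight ahead, the left eye at X = +I/2, and an interocular separation inferred from the stated 30° convergence and fixation distance. Its output agrees with the rounded values quoted in the Figure 3 caption.

% Recompute the Figure 3 example (a sketch, independent of Fig_VDispDefinition.m).
% Assumed conventions: X = observer's left, Y = up, Z = straight ahead; the image
% inverts through the nodal point, so objects above the gaze plane project to
% negative elevation coordinates.
F = [0 0 11];                          % fixation point on the midline
P = [-6 7 10];                         % the point P of Figure 3
I = 2*F(3)*tan(15*pi/180);             % interocular separation implied by 30 deg convergence
eyePos = [ I/2 0 0; -I/2 0 0];         % left eye; right eye
gaze   = [-15; 15]*pi/180;             % Helmholtz gaze azimuth of each eye (positive = leftward)
for k = 1:2
    z = [sin(gaze(k)) 0 cos(gaze(k))]; % optic axis in head coordinates
    x = [cos(gaze(k)) 0 -sin(gaze(k))];
    d = P - eyePos(k,:);               % direction from the nodal point to P
    eta(k)   = atan2(-d(2), dot(d,z));                  % elevation longitude
    kappa(k) = atan2(-d(2), hypot(dot(d,x),dot(d,z)));  % elevation latitude
end
% The caption quotes eta_L = -30, eta_R = -38 (eta_Delta = -8 deg) and
% kappa_L = -27, kappa_R = -34 (kappa_Delta = -6 deg).
fprintf('eta:   L %.1f, R %.1f, Delta %.1f deg\n', eta*180/pi, (eta(2)-eta(1))*180/pi);
fprintf('kappa: L %.1f, R %.1f, Delta %.1f deg\n', kappa*180/pi, (kappa(2)-kappa(1))*180/pi);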
In the Appendices, we derive approximate expressions for both types of retinal vertical disparity. These are given in Table C2. In general, vertical disparity depends on the position of object P (both its visual direction and its distance from the observer) and on the binocular posture of the eyes. Each eye has three degrees of freedom, which we express in Helmholtz coordinates as the gaze azimuth H, elevation V, and torsion T (Figure 4; Appendix A). Thus, in total the two eyes have potentially 6 degrees of freedom. It is convenient to represent these by the mean and the difference between the left and right eyes. Thus, we shall parametrize eye position by the three coordinates of an imaginary cyclopean eye (Figure 5), H_c, V_c, and T_c, and the three vergence angles, H_Δ, V_Δ, and T_Δ, where H_c = (H_R + H_L)/2 and H_Δ = H_R − H_L, and so on (Tables A1 and A2). When we refer below to convergence, we mean the horizontal vergence angle H_Δ. We shall refer to V_Δ as vertical vergence error or vertical vergence misalignment. We call V_Δ a misalignment because, in order for the two eyes' optic axes to intersect at a single fixation point, V_Δ must be zero, and this is empirically observed to be usually the case. 
Figure 4
 
Helmholtz coordinates for eye position (A) shown as a gimbal, after Howard (2002, Figure 9.10) and (B) shown for the cyclopean eye. The sagittal YZ plane is shown in blue, the horizontal XZ plane in pink, and the gaze plane in yellow. There are two ways of interpreting Helmholtz coordinates: (1) Starting from primary position, the eye first rotates through an angle T about an axis through the nodal point parallel to Z, then through H about an axis parallel to Y, and finally through V about an axis parallel to X. Equivalently, (2) starting from primary position, the eye first rotates downward through V, bringing the optic axis into the desired gaze plane (shown in yellow) then rotates through H about an axis orthogonal to the gaze plane, and finally through T about the optic axis. Panel B was generated by the program Fig_HelmholtzEyeCoords.m in the Supplementary material.
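The equivalence of the two ways of interpreting Helmholtz coordinates described in the Figure 4 caption can be checked numerically. The fragment below is a sketch (not part of the Supplementary material); the particular angles and the signs chosen for the positive rotation directions are arbitrary assumptions, since the equivalence holds for any consistent choice.

% Check numerically that the two readings of Helmholtz coordinates in the
% Figure 4 caption give the same eye orientation. The example angles and the
% sign conventions for positive rotations are arbitrary assumptions here.
skew = @(k) [0 -k(3) k(2); k(3) 0 -k(1); -k(2) k(1) 0];
rot  = @(k,t) cos(t)*eye(3) + (1-cos(t))*(k*k') + sin(t)*skew(k);   % Rodrigues' formula
T = 0.05; H = 0.3; V = -0.2;        % example Helmholtz angles (radians)
% Interpretation 1: rotations about head-fixed axes, in the order T (about Z),
% then H (about Y), then V (about X); later rotations premultiply.
R1 = rot([1;0;0],V) * rot([0;1;0],H) * rot([0;0;1],T);
% Interpretation 2: rotations about the eye's own (moving) axes, in the order
% V, then H about the rotated Y axis, then T about the rotated optic axis.
A  = rot([1;0;0],V);
B  = rot(A*[0;1;0],H) * A;
R2 = rot(B*[0;0;1],T) * B;
disp(norm(R1 - R2));                % ~1e-16: the two constructions agree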
Figure 5
 
Different ways of measuring the distance to the object P. The two physical eyes are shown in gold; the cyclopean eye is in between them, in blue. F is the fixation point; the brown lines mark the optic axes, and the blue line marks the direction of the cyclopean gaze. Point P is marked with a red dot. It is at a distance R from the origin. Its perpendicular projection on the cyclopean gaze axis is also drawn in red (with a corner indicating the right angle); the distance of this projection from the origin is S, marked with a thick red line. This figure was generated by the program Fig_DistancesRS.m in the Supplementary material.
We shall call T Δ the cyclovergence. Non-zero values of T Δ mean that the eyes have rotated in opposite directions about their optic axes. This occurs when the eyes look up or down: if we specify the eyes' position in Helmholtz coordinates, moving each eye to its final position by rotating through its azimuth H about a vertical axis and then through the elevation V about the interocular axis, we find that in order to match the observed physical position of each eye, we first have to apply a rotation T about the line of sight. If V > 0, so the eyes are looking down, this initial torsional rotation will be such as to move the top of each eyeball nearer the nose, i.e., incyclovergence. Note that the sign of the cyclovergence depends on the coordinate system employed; if eye position is expressed using rotation vectors or quaternions, converged eyes excycloverge when looking downward (Schreiber, Crawford, Fetter, & Tweed, 2001). 
We shall refer to T c as cycloversion. Non-zero values of T c mean that the two eyes are both rotated in the same direction. This happens, for example, when the head tilts to the left; both eyes counter-rotate slightly in their sockets so as to reduce their movement in space, i.e., anti-clockwise as viewed by someone facing the observer (Carpenter, 1988). 
As noted, V Δ is usually zero. It is also observed that for a given elevation, gaze azimuth, and convergence, the torsion of each eye takes on a unique value, which is small and proportional to elevation (Tweed, 1997c). Thus, out of the 6 degrees of freedom, it is a reasonable approximation to consider that the visual system uses only 3: Hc, Vc, and HΔ, with VΔ = 0, and cycloversion Tc and cyclovergence TΔ given by functions of Hc, Vc, and HΔ. Most treatments of physiological vertical disparity have assumed that there is no vertical vergence misalignment or torsion: VΔ = TΔ = Tc = 0. We too shall use this assumption in subsequent sections, but we start by deriving the most general expressions that we can. The expressions given in Table C2 assume that all three vergence angles are small but not necessarily zero. This enables the reader to substitute in realistic values for the cyclovergence TΔ at different elevations (Minken & Van Gisbergen, 1994; Somani, DeSouza, Tweed, & Vilis, 1998; Van Rijn & Van den Berg, 1993). The expressions in Table C2 also assume that the interocular distance is small compared to the distance to the viewed object. If the eyes are fixating near object P, then the small vergence approximation already implies this small baseline approximation, since if P is far compared to the interocular separation, then both eyes need to take up nearly the same posture in order to view it. While Porrill et al. (1990) extended the results of Mayhew and Longuet-Higgins (1982) to include cyclovergence, we believe that this paper is the first to present explicit expressions for two-dimensional retinal disparity that are valid all over the visual field and which allow for non-zero vertical vergence misalignment and cycloversion as well as cyclovergence. 
Under these assumptions, the vertical disparity expressed as the difference in elevation-longitude coordinates is  
η_Δ ≈ cos²η_c (sin T_c cos H_c − sin H_c tan η_c) (I/S) + (tan α_c sin η_c cos η_c cos T_c − sin T_c) H_Δ − (cos H_c cos T_c + tan α_c sin η_c cos η_c cos H_c sin T_c + tan α_c cos²η_c sin H_c) V_Δ − (tan α_c cos²η_c) T_Δ,
(1)
assuming that I/ S, H Δ, T Δ, and V Δ are all small, while the vertical disparity expressed as the difference in elevation-latitude coordinates is  
κ_Δ ≈ (cos T_c cos H_c sin α_c sin κ_c − sin H_c cos α_c sin κ_c + sin T_c cos H_c cos κ_c) (I/R) − sin T_c cos α_c H_Δ − (cos H_c cos T_c cos α_c + sin α_c sin H_c) V_Δ − sin α_c T_Δ,
(2)
assuming that I/ R, H Δ, T Δ, and V Δ are all small. 
The coordinates ( α c, η c) represent the visual direction of the viewed object P in the azimuth-longitude/elevation-longitude coordinate system shown in Figure 3AC, while ( α c, κ c) represent visual direction in the azimuth-longitude/elevation-latitude system of Figure 3BD. ( α c, η c) or ( α c, κ c) specify the position of P's image on an imaginary cyclopean retina midway between the two real eyes, with gaze azimuth H c, elevation V c, and torsion T c
S and R both represent the distance to the viewed object P. R is the distance of P from the cyclopean point midway between the eyes. S is the length of the component along the direction of cyclopean gaze ( Figure 5). These are simply related by the following equation:  
S = R cos α_c cos κ_c = R cos β_c cos η_c.
(3)
 
As noted, Equations 1 and 2 assume that I/ S, I/ R, H Δ, V Δ, and T Δ are all small, and they are correct to first order in these terms. However, they make no assumptions about α c, η c, κ c, H c, V c, and T c. They are thus valid over the entire retina, not just near the fovea, and for all cyclopean eye positions. Under this small-vergence approximation, the total vertical disparity is the sum of four terms, respectively proportional to one of four possible sources of disparity: (i) the interocular separation as a fraction of object distance, I/ R or I/ S, (ii) the horizontal vergence H Δ, (iii) vertical vergence error V Δ, and (iv) cyclovergence T Δ. Each source of disparity is multiplied by a term that depends on one or both of the components of visual direction ( α c and η c or κ c), the gaze azimuth H c and the overall torsion T c. For example, cyclovergence T Δ is multiplied by α c, and so makes no contribution to vertical disparity on the vertical retinal meridian. None of the four disparity terms explicitly depends on elevation V c, although elevation would affect the disparity indirectly, because it determines the torsion according to Donders' law (Somani et al., 1998; Tweed, 1997a). 
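As a quick numerical sanity check, the sketch below evaluates Equations 1 and 2 for one configuration with zero torsion and vertical vergence (so only the interocular and horizontal-vergence terms survive) and compares them against exact projection of the object into the two eyes. The object position, gaze azimuth, and convergence are arbitrary assumptions chosen so that I/S and H_Δ are small; the approximate and exact values should then agree to first order.

% Sanity check of Equations 1 and 2 for T_c = T_Delta = V_Delta = 0, so that only
% the interocular (I/S, I/R) and horizontal-vergence (H_Delta) terms survive.
% Object position, gaze azimuth, and convergence below are arbitrary choices.
I = 6.4; Hc = 10*pi/180; Hd = 3*pi/180;             % cm, rad
P = [30 40 120];                                    % object in head coordinates (X = left, Y = up, Z = ahead)
zc = [sin(Hc) 0 cos(Hc)]; xc = [cos(Hc) 0 -sin(Hc)]; yc = [0 1 0];   % cyclopean gaze basis
S  = dot(P,zc);  R = norm(P);
ac = atan2(-dot(P,xc), S);                          % alpha_c
ec = atan2(-dot(P,yc), S);                          % eta_c
kc = atan2(-dot(P,yc), hypot(dot(P,xc),S));         % kappa_c
etaD_approx   = cos(ec)^2*(-sin(Hc)*tan(ec))*(I/S) + tan(ac)*sin(ec)*cos(ec)*Hd;   % Equation 1
kappaD_approx = (I/R)*sin(kc)*sin(ac-Hc);                                          % Equation 2
eyeX = [I/2, -I/2]; H = [Hc-Hd/2, Hc+Hd/2];         % left eye, right eye
for k = 1:2
    z = [sin(H(k)) 0 cos(H(k))]; x = [cos(H(k)) 0 -sin(H(k))];
    d = P - [eyeX(k) 0 0];
    eta(k)   = atan2(-d(2), dot(d,z));
    kappa(k) = atan2(-d(2), hypot(dot(d,x),dot(d,z)));
end
fprintf('eta_Delta:   approx %.4f deg, exact %.4f deg\n', etaD_approx*180/pi, (eta(2)-eta(1))*180/pi);
fprintf('kappa_Delta: approx %.4f deg, exact %.4f deg\n', kappaD_approx*180/pi, (kappa(2)-kappa(1))*180/pi);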
The contribution of the three vergence sources of disparity is independent of object distance; object distance only enters through the interocular separation term (i). This may surprise some readers used to treatments that are only valid near fixation. In such treatments, it is usual to assume that the eyes are fixating the object P, so the vergence H Δ is itself a function of object distance R. We shall make this assumption ourselves in the next section. However, this section does not assume that the eyes are fixating the object P, so the three vergence angles H Δ, V Δ, and T Δ are completely independent of the object's distance R. Thus, object distance affects disparity only through the explicit dependence on R (or S) in the first term (i). The contribution of the three vergence terms (ii–iv) is independent of object distance, provided that the visual direction and eye posture is held constant. That is, if we move the object away but also increase its distance from the gaze axis such that the object continues to fall at the same point on the cyclopean retina, then the contribution of the three vergence terms to the disparity at that point are unchanged. (If the vergence changed to follow the object as it moved away, then of course this contribution would change.) 
Of course, expressing disparity as a sum of independent contributions from four sources is valid only to first order. A higher order analysis would include interactions between the different types of vergence, between vergence and interocular separation, and so on. Nevertheless, we and others (e.g., Garding et al., 1995) have found that first-order terms are surprisingly accurate, partly because several second-order terms vanish. We believe that the present analysis of disparity as arising from 4 independent sources (interocular separation plus 3 vergence angles) is both new and, we hope, helpful. In the next section, we show how we can use this new analysis to derive expressions for the average vertical disparity experienced in different parts of the visual field. 
Average vertical disparity expected at different positions on the retina
There have been a few previous attempts to derive the distribution of vertical disparity encountered during normal viewing (Hibbard, 2007; Liu et al., 2008; Read & Cumming, 2004). However, these studies averaged results across all visual directions. For example, Read and Cumming calculated the distribution of physically possible disparities for all objects whose images fall within 15° of the fovea in both retinas. Critically, they averaged this distribution not only over all possible objects but over the whole 15° parafoveal area. The spread of their distribution thus reflects both variation in the vertical disparities that are possible at different positions on the retina, and variation that is possible at a single retinal location. To make the distinction clear with a toy example, suppose that all eye position parameters are frozen (Hc = Tc = TΔ = VΔ = 0), except for vergence, HΔ, which varies between 0 and 40°, so that elevation-longitude disparity is ηΔ ≈ 0.5HΔtan(αc)sin(2ηc). Under these circumstances, 10° to the left and 10° up from the fovea, vertical disparity would always be positive, running from 0° to +0.9°. On the opposite side of the retina, 10° right and 10° up, vertical disparity would always be negative, running from 0° to −0.9°. Along the retinal meridians, the vertical disparity would always be zero. Read and Cumming's analysis would lump all these together to report that the range of possible vertical disparity is from −0.9° to +0.9°. In other words, the results of Read and Cumming (2004), like those of Hibbard (2007) and Liu et al. (2008), confound variation in the vertical disparity that is possible at a given retinal location with variation across different locations. Similarly, physiological studies that have investigated tuning to vertical disparity have not reported where in the visual field individual neurons were, making it impossible to relate the tuning of these neurons to ecological statistics. For example, one would expect the range of vertical disparity tuning to be narrower for neurons located directly above or below the fovea than for neurons to the “northwest.” The published physiological literature does not make it possible to examine this prediction. 
Deriving a full probability density function for ecological disparity requires making assumptions about the eye postures adopted during natural viewing, the scenes viewed, and the fixations chosen within each scene. Although there have been major steps toward realistic estimates of these quantities (Hibbard, 2007; Liu et al., 2008), there is as yet no definitive study, and the issue is beyond the scope of the present paper. However, the expressions derived in the previous section do enable us to estimate the mean vertical disparity as a function of position in the visual field. The previous studies (Hibbard, 2007; Liu et al., 2008; Read & Cumming, 2004) say only that the mean vertical disparity, averaged across the visual field, is zero; they do not discuss how the mean varies as a function of position. In this section, making only some fairly limited and plausible assumptions about eye position, we shall obtain expressions showing how mean vertical disparity varies as a function of position in the visual field. 
The last term in both Equations 1 and 2 depends on eye position only through T Δ, the cyclovergence. According to the extended binocular versions of Listing's law (Minken & Van Gisbergen, 1994; Mok, Ro, Cadera, Crawford, & Vilis, 1992; Somani et al., 1998; Tweed, 1997c), this term depends on elevation and more weakly on convergence. Relative to zero Helmholtz torsion, the eyes twist inward (i.e., top of each eye moves toward the nose) on looking down from primary position and outward on looking up, and this tendency is stronger when the eyes are converged: TΔ = Vc(Λ + MHΔ), where Λ and M are constants < 1 (Somani et al., 1998). Humans tend to avoid large elevations: if we need to look at something high in our visual field, we tilt our head upward, thus enabling us to view it in something close to primary position. Thus, cyclovergence remains small in natural viewing, and since both positive and negative values occur, the average is likely to be smaller still. Thus, we can reasonably approximate the mean cyclovergence as zero: 〈TΔ〉 = 0. This means that the last term in the expressions for both kinds of vertical disparity vanishes. 
The next-to-last term is proportional to vertical vergence error, V Δ. We assume that this is on average zero and independent of gaze azimuth or torsion, so that terms like V Δcos H ccos T c all average to zero. This assumption may not be precisely correct, but vertical vergence errors are likely to be so small in any case that neglecting this term is not likely to produce significant errors in our estimate of mean vertical disparity. 
The next term is proportional to convergence angle H Δ. This is certainly not zero on average. However, part of its contribution depends on sin( T c), the sine of the cycloversion. Empirically, this is approximately T c ∼ − V c H c/2 (Somani et al., 1998; Tweed, 1997b). So, although cycloversion can be large at eccentric gaze angles, provided we assume that gaze is symmetrically distributed about primary position, then 〈VcHc〉 = 0 and so the mean torsion is zero. Again, in the absence of a particular asymmetry, e.g., that people are more likely to look up and left while converging and more likely to look up and right while fixating infinity, we can reasonably assume that 〈HΔsinTc〉 = 0. Equation 1 also contains a term in HΔcosTc. This does not average to zero, but under the assumption that convergence and cycloversion are independent, and that cycloversion is always small, the mean value of this term will be approximately 〈HΔ〉. Thus, under the above assumptions, Equations 1 and 2 become 
η_Δ ≈ cos²η_c (sin T_c cos H_c − sin H_c tan η_c) (I/S) + (tan α_c sin η_c cos η_c) H_Δ
(4)
 
κ_Δ ≈ (cos T_c cos H_c sin α_c sin κ_c − sin H_c cos α_c sin κ_c + sin T_c cos H_c cos κ_c) (I/R).
(5)
 
Looking at Equation 4, we see that the average elevation-longitude disparity encountered in natural viewing contains terms in the reciprocal of S, the distance to the surface: 〈sin( T c)cos( H c)/ S〉 and 〈sin( H c)/ S〉. We now make the reasonable assumption that, averaged across all visual experience, gaze azimuth is independent of distance to the surface. This assumes that there are no azimuthal asymmetries such that nearer surfaces are systematically more likely to be encountered when one looks left, for example. Under this assumption, the term 〈sin( H c)/ S〉 averages to zero. Similarly we assume that 〈sin( T c)cos( H c)/ S〉 = 0. Thus, the entire term in I/ S averages to zero. The vast array of different object distances encountered in normal viewing makes no contribution to the mean elevation-longitude disparity at a particular place on the retina. The mean elevation-longitude disparity encountered at position ( α c, η c) is simply  
〈η_Δ〉 ≈ 〈H_Δ〉 tan α_c sin η_c cos η_c.
(6)
 
We have made no assumptions about the mean convergence, 〈 H Δ〉, but simply left it as an unknown. It does not affect the pattern of expected vertical disparity across the retina but merely scales the size of vertical disparities. Convergence is the only eye-position parameter that we cannot reasonably assume is zero on average, and thus it is the only one contributing to mean vertical disparity measured in elevation longitude. 
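The averaging argument can be illustrated with a small simulation. The sketch below samples gaze azimuth, convergence, and object distance from arbitrary stand-in distributions (chosen only so that gaze azimuth is symmetric about straight ahead and independent of distance, as assumed above, with torsion and vertical vergence held at zero) and confirms that the mean elevation-longitude disparity at one retinal location approaches the value given by Equation 6.

% Illustration of the averaging argument behind Equation 6. The sampling
% distributions are arbitrary stand-ins, chosen only so that gaze azimuth is
% symmetric about straight ahead and independent of object distance; torsion
% and vertical vergence are held at zero.
N  = 1e6; I = 6.4;
Hc = (rand(N,1)-0.5)*60*pi/180;          % gaze azimuth, uniform in +/-30 deg
Hd = rand(N,1)*8*pi/180;                 % convergence, uniform in 0..8 deg
S  = 30 + rand(N,1)*170;                 % distance along the gaze axis, 30..200 cm
ac = 15*pi/180; ec = 15*pi/180;          % one fixed cyclopean retinal position
etaD = cos(ec)^2*(-sin(Hc)*tan(ec)).*(I./S) + tan(ac)*sin(ec)*cos(ec)*Hd;   % Equation 1, zero torsion
% The I/S term averages away because <sin(Hc)/S> = 0, leaving Equation 6:
fprintf('simulated mean %.4f deg, Equation 6 prediction %.4f deg\n', ...
    mean(etaD)*180/pi, mean(Hd)*tan(ac)*sin(ec)*cos(ec)*180/pi);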
For elevation-latitude disparity, the dependence on object distance S does not immediately average out. Again, we assume that the terms 〈sin( H c)/ R〉 and 〈sin( T c)cos( H c)/ R〉 are zero, but this still leaves us with  
〈κ_Δ〉 ≈ 〈(I/R) cos H_c cos T_c〉 sin α_c sin κ_c.
(7)
 
To progress, we need to make some additional assumptions about scene structure. We do this by introducing the fractional distance from fixation, δ:  
R = R_0 (1 + δ),
(8)
where R 0 is the radial distance from the origin to the fixation point (or to the point where the optic axes most nearly intersect, if there is a small vertical vergence error), i.e., the distance OF in Figure 5. This is  
R_0 ≈ I cos H_c / H_Δ.
(9)
Thus, assuming δ is small,  
I/R ≈ (H_Δ / cos H_c)(1 − δ).
(10)
 
Substituting Equation 10 into Equation 7, we obtain  
κ_Δ ≈ H_Δ (1 − δ) cos T_c sin α_c sin κ_c.
(11)
 
We can very plausibly assume that torsion is independent of convergence and scene structure and on average zero, so that 〈 H Δ(1 − δ)cos T c〉 averages to 〈 H Δ(1 − δ)〉. In natural viewing, the distributions of H Δ and δ will not be independent (cf. Figure 6 of Liu et al., 2008). For example, when HΔ is zero, its smallest value, the fixation distance is infinity, and so δ must be negative or zero. Conversely when the eyes are converged on a nearby object (large HΔ), perhaps most objects in the scene are usually further away than the fixated object, making δ predominantly positive. In the absence of accurate data, we assume that the average 〈HΔδ〉 is close to zero. We then obtain 
〈κ_Δ〉 ≈ 〈H_Δ〉 sin α_c sin κ_c.
(12)
 
Equation 12 gives the expected elevation-latitude disparity 〈κ_Δ〉 as a function of cyclopean elevation latitude, κ_c, whereas Equation 6 gave the expected elevation-longitude disparity 〈η_Δ〉 as a function of cyclopean elevation longitude, η_c. To make it easier to compare the two, we now rewrite Equation 6 to give the expected elevation-longitude disparity 〈η_Δ〉 as a function of cyclopean elevation latitude, κ_c, using the relation tan κ_c = tan η_c cos α_c between the two elevation coordinates. The expected vertical disparity at (α_c, κ_c) is thus, in the two definitions,
〈η_Δ〉(α_c, κ_c) ≈ 〈H_Δ〉 sin α_c tan κ_c / (cos²α_c + tan²κ_c)
〈κ_Δ〉(α_c, κ_c) ≈ 〈H_Δ〉 sin α_c sin κ_c.
(13)
 
These expressions will strike many readers as familiar. They are the longitude and latitude vertical disparity fields that would be obtained when the eyes adopt their average position, i.e., fixating on the mid-sagittal plane with no cyclovergence, cycloversion, or vertical vergence and with the average convergence 〈 H Δ〉, and viewing a spherical surface centered on the cyclopean point and passing through fixation. The much more general expressions we have considered reduce to this, because vertical-disparity contributions from eccentric gaze, from the fact that objects may be nearer or further than fixation, from cyclovergence and from vertical vergence all cancel out on average. Thus, they do not affect the average vertical disparity encountered at different points in the visual field (although of course they will affect the range of vertical disparities encountered at each position). 
It is often stated that vertical disparity increases as a function of retinal eccentricity. Thus, it may be helpful to give here expressions for retinal eccentricity ξ:  
cos ξ = cos α_c cos κ_c,  tan²ξ = tan²α_c + tan²η_c.
(14)
 
Here, eccentricity ξ is defined as the angle E Ĉ V, where E is the point on the retina whose retinal eccentricity is being calculated, C is the center of the eyeball, and V is the center of the fovea ( Figure 6). 
Figure 6
 
Definition of retinal eccentricity ξ: the eccentricity of Point E is the angle E Ĉ V, where C is the center of the eyeball and V is the center of the fovea.
The mean convergence, 〈 H Δ〉, is not known but must be positive. It does not affect the pattern of vertical disparity expected at different points in the visual field but simply scales it. Figure 7 shows the pattern expected for both types of vertical disparity. In our definition, α c and κ c represent position on the cyclopean retina, and their signs are thus inverted with respect to the visual field (bottom of the retina represents upper visual field). However, conveniently Equation 13 is unchanged by inverting the sign of both α c and κ c, meaning that Figure 7 can be equally well interpreted as the pattern across either the cyclopean retina or the visual field. 
Figure 7
 
Expected vertical disparity in natural viewing, as a function of position in the cyclopean retina, for (a) elevation-longitude and (b) elevation-latitude definitions of vertical disparity. Vertical disparity is measured in units of 〈 H Δ〉, the mean convergence angle. Because the vertical disparity is small over much of the retina, we have scaled the pseudocolor as indicated in the color bar, so as to concentrate most of its dynamic range on small values. White contour lines show values in 0.1 steps from −1 to 1. This figure was generated by Fig_ExpectedVDisp.m in the Supplementary material.
Conveniently, within the central 45° or so, the expected vertical disparity is almost identical for the two definitions of retinal vertical disparity we are considering. Near the fovea, the value expected for both types of vertical disparity is roughly 〈 H Δα c κ c. Throughout the visual field, the sign of vertical disparity depends on the quadrant. Points in the top-right or bottom-left quadrants of the visual field experience predominantly negative vertical disparity in normal viewing, while points in the top-left or bottom-right quadrants experience predominantly positive vertical disparities. Points on the vertical or horizontal meridian experience zero vertical disparity on average, although the range would clearly increase with vertical distance from the fovea. To our knowledge, no physiological studies have yet probed whether the tuning of disparity-sensitive neurons in early visual areas reflects this retinotopic bias. 
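As a check on this statement, the short sketch below evaluates both lines of Equation 13 along the retinal diagonal α_c = κ_c (an arbitrary choice), in units of 〈H_Δ〉, together with the small-angle product α_cκ_c quoted above; out to 20° the three agree to within a few percent.

% Evaluate both lines of Equation 13, in units of <H_Delta>, along the retinal
% diagonal alpha_c = kappa_c (an arbitrary choice), together with the
% small-angle approximation alpha_c*kappa_c quoted in the text.
for deg = 5:5:20
    a = deg*pi/180; k = a;
    f_long = sin(a)*tan(k)/(cos(a)^2 + tan(k)^2);   % elevation-longitude prediction
    f_lat  = sin(a)*sin(k);                         % elevation-latitude prediction
    fprintf('%2d deg: longitude %.4f, latitude %.4f, alpha*kappa %.4f\n', ...
        deg, f_long, f_lat, a*k);
end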
Properties of elevation-longitude and elevation-latitude definitions of vertical disparity, in the absence of torsion or vertical vergence
So far, we have provided general expressions for vertical disparity but, in order to make comparisons with previous literature more straightforward, in this and subsequent sections we make the simplifying assumption that cycloversion, cyclovergence, and vertical vergence are all zero ( T c = T Δ = V Δ = 0) and demonstrate the consequences of this assumption on the properties of the two types of vertical disparity. In this case, the only degrees of freedom that affect disparity are the horizontal rotation of each eye, expressed as the convergence H Δ and the gaze angle H c. From Equation 1, elevation-longitude vertical disparity in the absence of torsion and vertical vergence is  
η_Δ ≈ ½ sin 2η_c (H_Δ tan α_c − (I/S) sin H_c),
(15)
while from Equation 2, elevation-latitude vertical disparity is  
κ_Δ ≈ (I/R) sin κ_c sin(α_c − H_c).
(16)
 
In general, these two types of vertical disparity have completely different properties. We see that elevation-longitude vertical disparity is zero for all objects, irrespective of their position in space, if the eyes are in primary position, i.e., H c = H Δ = 0. Elevation-latitude vertical disparity is not in general zero when the eyes are in primary position, except for objects on the midline or at infinite distance. Rotating the eyes into primary position does not affect elevation-latitude disparity because, as noted in the Introduction, horizontal rotations of the eyes cannot alter which line of elevation latitude each point in space projects to; they can only alter the azimuthal position to which it projects. Thus, κ Δ is independent of convergence, while gaze azimuth simply sweeps the vertical disparity pattern across the retina, keeping it constant in space. κ Δ depends on gaze azimuth H c only through the difference ( α cH c), representing azimuthal position in head-centric space. As a consequence, elevation-latitude vertical disparity is zero for all objects on the midline ( X = 0, meaning that α c = H c). The elevation-latitude disparity κ Δ at each point in the cyclopean retina is scaled by the reciprocal of the distance to the viewed object at that point. In contrast, elevation-longitude disparity, η Δ, is independent of object distance when fixation is on the mid-sagittal plane; it is then proportional to convergence. In Table 1, we summarize the different properties of elevation-longitude and elevation-latitude vertical disparities, under the conditions T c = T Δ = V Δ = 0 to which we are restricting ourselves in this section. 
Table 1
 
Summary of the different properties of the two definitions of retinal vertical disparity in the absence of vertical vergence error and torsion.
Properties in the absence of vertical vergence error and torsion (T_c = T_Δ = V_Δ = 0):

Vertical disparity defined as difference in retinal elevation longitude, η_Δ:
- Is zero for objects in plane of gaze.
- Is zero when the eyes are in primary position, for objects at any distance anywhere on the retina.
- Increases as eyes converge.
- May be non-zero even for objects at infinity, if the eyes are converged.
- Is proportional to sine of twice the elevation longitude.
- Is not necessarily zero for objects on the midsagittal plane.
- For fixation on midline, is independent of object distance for a given convergence angle.

Vertical disparity defined as difference in retinal elevation latitude, κ_Δ:
- Is zero for objects in plane of gaze.
- Is zero for objects at infinity.
- Is inversely proportional to object's distance.
- Is independent of convergence for objects at a given distance.
- May be non-zero even when eyes are in primary position.
- Is proportional to sine of elevation latitude.
- Is zero for objects on the mid-sagittal plane.
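The contrasting convergence dependence in Table 1 can be demonstrated directly. In the sketch below (the object position, gaze azimuth, and the two convergence angles are arbitrary assumptions), changing the convergence while holding the object and gaze azimuth fixed leaves the elevation-latitude disparity untouched, because a purely horizontal eye rotation cannot change which line of elevation latitude an object projects to, whereas the elevation-longitude disparity changes substantially.

% Demonstrate the contrasting convergence dependence summarized in Table 1
% (a sketch; the object, gaze azimuth, and convergence angles are arbitrary).
% With no torsion or vertical vergence, horizontal eye rotations cannot change
% which line of elevation latitude an object projects to, so changing the
% convergence leaves kappa_Delta untouched while eta_Delta changes.
I = 6.4; Hc = 15*pi/180; P = [-20 25 80];      % interocular (cm), gaze azimuth, object
for Hd = [2 10]*pi/180
    H = [Hc-Hd/2, Hc+Hd/2]; eyeX = [I/2, -I/2];
    for k = 1:2
        z = [sin(H(k)) 0 cos(H(k))]; x = [cos(H(k)) 0 -sin(H(k))];
        d = P - [eyeX(k) 0 0];
        eta(k)   = atan2(-d(2), dot(d,z));
        kappa(k) = atan2(-d(2), hypot(dot(d,x),dot(d,z)));
    end
    fprintf('H_Delta = %2.0f deg: eta_Delta = %6.3f deg, kappa_Delta = %6.3f deg\n', ...
        Hd*180/pi, (eta(2)-eta(1))*180/pi, (kappa(2)-kappa(1))*180/pi);
end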
We have seen that η Δ and κ Δ depend very differently on convergence and object distance. For midline fixation, η Δ is proportional to convergence and is independent of object distance, whereas κ Δ is independent of convergence and is inversely proportional to object distance. However, if we consider only objects close to fixation, then object distance and convergence convey the same information. Under these circumstances the two definitions of vertical disparity become similar. This is shown in Figure 8, which plots the vertical disparity field for a frontoparallel plane. The two left panels show elevation-longitude vertical disparity; the two right panels show elevation-latitude vertical disparity. In the top row, the eyes are viewing a frontoparallel plane at a distance of 60 cm, and in the bottom row, a plane at 10 m. In each case, the eye position is the same: looking 15° to the left, and converged so as to fixate the plane at 60 cm. 
Figure 8
 
Vertical disparity field all over the retina, where the visual scene is a frontoparallel plane, i.e., constant head-centered coordinate Z. AB: Z = 60 cm; CD: Z = 10 m. The interocular distance was 6.4 cm, gaze angle H c = 15°, and convergence angle H Δ = 5.7°, i.e., such as to fixate the plane at Z = 60 cm. Vertical disparity is defined as difference in (AC) elevation longitude and (BD) elevation latitude. Lines of azimuth longitude and (AC) elevation longitude, (BD) elevation latitude are marked in black in 15° intervals. The white line shows where the vertical disparity is zero. The fovea is marked with a black dot. The same pseudocolor scale is used for all four panels. Note that the elevation-longitude disparity, η Δ, goes beyond the color scale at the edges of the retina, since it tends to infinity as | α c| tends to 90°. This figure was generated by DiagramOfVerticalDisparity_planes.m in the Supplementary material.
In Figure 8AB, where the eyes are fixating the viewed surface, both definitions of vertical disparity give similar results, especially near the fovea. For both definitions, vertical disparity is zero for objects in the plane of gaze ( Y = 0, i.e., η c = κ c = 0) and also along a vertical line whose position depends on the gaze angle. For elevation-latitude disparity κ Δ, this line is simply the line of azimuth longitude α c = H c, here 15°. This is the retinal projection of the mid-sagittal plane, X = 0. That is, in the absence of torsion or vertical vergence error, elevation-latitude vertical disparity is zero for objects on the midline, independent of their distance or of the convergence angle. For elevation-longitude vertical disparity η Δ, no such simple result holds. The locus of zero vertical disparity (vertical white line in Figure 8AC) depends on object distance and the eyes' convergence, as well as gaze angle. However, for objects relatively near fixation, these differences are minor, so the locus of zero η Δ is also close to 15°. 
It is not always the case, however, that the differences between elevation-longitude and elevation-latitude vertical disparities are minor. Figure 8CD shows the two vertical disparity fields for a surface at a much greater distance from the observer than the fixation point. The position of the eyes is the same as in AB, but now the viewed surface is a plane at 10 m from the observer. Now, the pattern of vertical disparities is very different. As we saw from Equation 16, elevation-latitude vertical disparity is zero for all objects at infinity, no matter what the vergence angle H Δ. Thus, for Z = 10 m, it is already close to zero across the whole retina ( Figure 8D). Elevation-longitude vertical disparity does not have this property. It is zero for objects at infinity only if the eyes are also fixating at infinity, i.e., H Δ = 0. Figure 8C shows results for H Δ = 5.7°, and here the second term in Equation 15 gives non-zero vertical disparity everywhere except along the two retinal meridians. 
In summary, the message of this section is that elevation-longitude and elevation-latitude definitions of vertical disparity give very similar results for objects near the fovea and close to the fixation distance. However, when these conditions are not satisfied, the two definitions of vertical disparity can produce completely different results. 
Epipolar lines: Relationship between vertical and horizontal disparities
Disparity is a two-dimensional quantity, but for a given eye position, not all two-dimensional disparities are physically possible. Figure 9A shows how the physically possible matches for the red dot on the left retina fall along a line in the right retina. The object projecting to the red dot could lie anywhere along the red line shown extending to infinity, and each possible position implies a different projection onto the right retina. The set of all possible projections is known in the literature as an epipolar line (Hartley & Zisserman, 2000). 
Figure 9
 
Epipolar line and how it differs from the “line of possible disparities” shown below in (D). (A) How an epipolar line is calculated: it is the set of all possible points on the right retina (heavy blue curve), which could correspond to the same point in space as a given point on the left retina (red dot). (B) Epipolar line plotted on the planar retina. Blue dots show 3 possible matches in the right eye for a fixed point in the left retina (red dot). The cyclopean location or visual direction (mean of left and right retinal positions, black dots) changes as one moves along the epipolar line. (C) Possible matches for a given cyclopean position (black dot). Here, we keep the mean location constant and consider pairs of left/right retinal locations with the same mean. (D) Line of possible disparities implied by the matches in (B). These are simply the vectors linking left to right retinal positions for each match (pink lines). Together, these build up a line of possible disparities (green line). Panel A was generated by Fig_EpipolarLine.m in the Supplementary material.
This definition of epipolar line treats the eyes asymmetrically: one considers a point in one eye, and the corresponding line in the other eye. Everywhere else in this paper, we have treated the eyes symmetrically, rewriting left and right eye coordinates in terms of their sum and difference: position on the cyclopean retina and disparity. So in this section, we shall consider something slightly different from the usual epipolar lines: we shall consider the line of possible disparities at a given point in the cyclopean visual field. Figures 9B–9D show how this differs from an epipolar line. As one moves along an epipolar line (Figure 9B), not only the two-dimensional disparity, but also the cyclopean position, varies. We shall consider how disparity varies while keeping cyclopean position constant (Figure 9D). 
To achieve this, we need to express vertical disparity as a function of horizontal disparity. So far in this paper, we have expressed vertical disparity as a function of object distance and eye position. Of course, horizontal disparity is also a function of object distance and eye position. So if we substitute in for object distance using horizontal disparity, we obtain the function relating horizontal and vertical disparities, for a given eye position and location in the visual field. Using the expressions given in the Appendix, it is simple to obtain the line of possible disparities for arbitrary cyclovergence, cycloversion, and vertical vergence error. In this section, for simplicity, we continue to restrict ourselves to T c = T Δ = V Δ = 0. Under these circumstances, azimuth-longitude horizontal disparity is ( 2; Table C5)  
$$\alpha_\Delta \approx -\frac{I}{S}\cos\alpha_c\,\cos(\alpha_c - H_c) + H_\Delta.$$
(17)
 
If we use horizontal disparity to substitute for object distance in Equation 15, we obtain the following relationship between horizontal (azimuth-longitude) and vertical (elevation-longitude) disparities:  
$$\eta_\Delta \approx \tfrac{1}{2}\sin 2\eta_c\,\sec\alpha_c\left(H_\Delta\sin\alpha_c + (\alpha_\Delta - H_\Delta)\sec(\alpha_c - H_c)\sin H_c\right).$$
(18)
 
For elevation-latitude vertical disparity, again substituting for object distance in Equation 16, we obtain  
$$\kappa_\Delta \approx \tfrac{1}{2}\sin 2\kappa_c\,(\alpha_\Delta - H_\Delta)\tan(H_c - \alpha_c).$$
(19)
 
Thus, the vertical disparities that are geometrically possible at a given position in the visual field are a linear function of the horizontal disparity. This is shown for elevation-latitude disparity by the green line in Figure 10. Where we are on this line depends on object distance. 
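As a concrete illustration, the following MATLAB sketch (written in the spirit of the supplementary scripts, but not one of them; the eye posture and visual direction are assumed example values) sweeps object distance along a single cyclopean visual direction and confirms that Equations 16, 17, and 19 trace out the same straight line of possible disparities, terminating at α Δ = H Δ:

```matlab
% Sketch: trace the line of possible disparities of Figure 10 for one
% assumed eye posture and one visual direction, using the small-baseline
% approximations of Equations 16, 17, and 19. Angles in radians, cm.
I      = 6.5;                        % interocular distance (assumed)
Hdelta = deg2rad(5.7);               % convergence angle H_Delta
Hc     = deg2rad(-15);               % gaze azimuth H_c
alphac = deg2rad(10);                % visual-field azimuth alpha_c
kappac = deg2rad(8);                 % visual-field elevation latitude kappa_c

S = linspace(30, 5000, 200);         % object distance along the gaze axis
R = S ./ (cos(alphac)*cos(kappac));  % radial distance, R = S*sec(alpha_c)*sec(kappa_c)

% Equation 17: azimuth-longitude horizontal disparity.
alphaDelta = -(I./S) .* cos(alphac) .* cos(alphac - Hc) + Hdelta;
% Elevation-latitude vertical disparity in the I/R form referred to in the
% text as Equation 16.
kappaDelta = (I./R) .* sin(kappac) .* sin(alphac - Hc);
% Equation 19: the same vertical disparity predicted directly from the
% horizontal disparity -- the line of possible disparities.
kappaDelta19 = 0.5*sin(2*kappac) .* (alphaDelta - Hdelta) .* tan(Hc - alphac);

fprintf('max |Eq.16 - Eq.19| = %.2e rad\n', max(abs(kappaDelta - kappaDelta19)));
plot(rad2deg(alphaDelta), rad2deg(kappaDelta), 'g-', 'LineWidth', 2); hold on;
plot(rad2deg(Hdelta), 0, 'go');      % line terminates at alpha_Delta = H_Delta
xlabel('horizontal disparity (deg)'); ylabel('vertical disparity (deg)');
```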
Figure 10
 
The thick green line shows the line of two-dimensional disparities that are physically possible for real objects, for the given eye posture (specified by convergence H Δ and gaze azimuth H c) and the given visual direction (specified by retinal azimuth α c and elevation κ c). The green dot shows where the line terminates on the abscissa. For any given object, where its disparity falls on the green line depends on the distance to the object at this visual direction. The white circle shows one possible distance. Although, for clarity, the green line is shown as having quite a steep gradient, in reality it is very shallow close to the fovea. Thus, it is often a reasonable approximation to assume that the line is flat in the vicinity of the distance one is considering (usually the fixation distance), as indicated by the horizontal green dashed line. This is considered in more detail in the next section.
Note that this expression is not valid if either α c or (α c − H c) is 90°, since then horizontal disparity is independent of object distance ( Equation 17). So for example if we are considering an azimuthal direction of 45° ( α c = 45°) and the eyes are looking off 45° to the right ( H c = −45°), this expression fails. Apart from this relatively extreme situation, it is generally valid. 
Note also that the line of possible disparities does not extend across the whole plane of disparities. We have adopted a sign convention in which “far” disparities are positive. The largest possible horizontal disparity occurs for objects at infinity. Then, we see from Equation 17 that the horizontal disparity is equal to the convergence angle, H Δ. For objects closer than infinity, the horizontal disparity is smaller, becoming negative for objects nearer than the fixation point. Thus, the green line in Figure 10 terminates at α Δ = H Δ. The elevation-latitude vertical disparity at this point in the visual field thus has only one possible sign, either negative or positive depending on the sign of κ c(H c − α c) (since (α Δ − H Δ) is always negative). For elevation-latitude vertical disparity, the eye-position parameters have a particularly simple effect on the line of possible disparities. The convergence angle H Δ controls the intercept on the abscissa, i.e., the horizontal disparity for which the vertical disparity is zero. The gradient of the line is independent of convergence, depending only on the gaze angle. To avoid any confusion, we emphasize that this “disparity gradient” is the rate at which vertical disparity would change if an object slid nearer or further along a particular visual direction, so that its horizontal disparity varied while its position in the cyclopean visual field remained constant. Thus, we are considering the set of two-dimensional disparities that can be produced by a real object for a given binocular eye position. This might theoretically be used by the visual system in solving the stereo correspondence problem if eye position were known. This “disparity gradient” is not the same as the disparity derivatives discussed below (see Discussion section) in the context of deriving eye position given the solution of the correspondence problem, which concern the rate at which vertical disparity changes as a function of visual direction in a given scene. 
In Figure 10, the gradient of the green line is exaggerated for clarity. In fact, when κ c and (H c − α c) are both small (i.e., for objects near the midline and near the plane of regard), the gradient is close to zero. Even quite large changes in horizontal disparity produce very little effect on vertical disparity. In these circumstances, it is reasonable to approximate vertical disparity by its value at the chosen distance, ignoring the gradient entirely. We go through this in the next section. 
For objects near fixation, vertical disparity is independent of object distance
It is often stated that, to first order, vertical disparity is independent of object distance, depending only on eye position (e.g., Garding et al., 1995; Read & Cumming, 2006). Horizontal disparity, in contrast, depends both on eye position and object distance. Thus, vertical disparity can be used to extract an estimate of eye position, which can then be used to interpret horizontal disparity. 
At first sight, these statements appear to conflict with much of this paper. Consider the first-order expressions for vertical disparity given in Equations 15 and 16. Both depend explicitly on the object distance (measured radially from the origin, R, or along the gaze direction, S, Figure 5). Figure 8AB versus Figure 8CD, which differ only in the object distance, show how both types of vertical disparity depend on this value. Elevation-latitude disparity does not even depend on the convergence angle H Δ, making it appear impossible to reconstruct vergence from measurements of elevation-latitude disparity alone. 
This apparent contradiction arises because the authors quoted were considering the disparity of objects near to fixation. Equations 15 and 16, in contrast, are valid for all object locations, provided only the object distance is large compared to the interocular distance (small baseline approximation). We now restrict ourselves to the vicinity of fixation. That is, we assume that the object is at roughly the same distance as the fixation point. We express the radial distance to the object, R, as a fraction of the distance to fixation, R 0:  
R = R 0 ( 1 + δ ) .
(20)
 
Under our small vergence angle approximation, the radial distance to the fixation point is  
$$R_0 \approx \frac{I\cos H_c}{H_\Delta}.$$
(21)
 
For small values of δ, then, using the approximation $(1+\delta)^{-1} \approx 1-\delta$, we have  
$$\frac{I}{R} \approx \frac{H_\Delta(1-\delta)}{\cos H_c};\qquad \frac{I}{S} \approx \frac{H_\Delta(1-\delta)}{\cos\alpha_c\,\cos\kappa_c\,\cos H_c}.$$
(22)
 
Note that this breaks down at H c = 90°. This is the case where the eyes are both directed along the interocular axis. Then, the distance to the fixation point is undefined, and we cannot express R as a fraction of it. The case H c = 90° is relevant to optic flow, but not to stereo vision. Our analysis holds for all gaze angles that are relevant to stereopsis. 
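As a quick numerical check on Equation 21 (a sketch with an assumed interocular distance of 6.5 cm and straight-ahead gaze), the fixation distance implied by a 15° convergence is about 25 cm, matching the range quoted later for the simulations of Figure 12:

```matlab
% Sketch: fixation distance R0 implied by Equation 21, R0 ~ I*cos(Hc)/H_Delta,
% for a few convergence angles. I = 6.5 cm and Hc = 0 are assumed values.
I  = 6.5;                 % interocular distance, cm
Hc = 0;                   % gaze azimuth, radians (straight ahead)
for Hdelta_deg = [1 2 5 10 15]
    R0 = I * cos(Hc) / deg2rad(Hdelta_deg);
    fprintf('H_Delta = %4.1f deg  ->  R0 = %6.1f cm\n', Hdelta_deg, R0);
end
```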
From Equations 15 and 16, using the fact that R = S sec α c sec κ c, the two definitions of vertical disparity then become  
$$\eta_\Delta \approx \sin\eta_c\cos\eta_c\left(-\frac{H_\Delta(1-\delta)}{\cos H_c}\sec\alpha_c\sec\kappa_c\sin H_c + H_\Delta\tan\alpha_c\right),$$
$$\kappa_\Delta \approx \frac{H_\Delta(1-\delta)}{\cos H_c}\sin\kappa_c\sin(\alpha_c - H_c).$$
(23)
 
The dependence on object distance is contained in the term δ, the fractional distance from fixation. However, by assumption, this is much smaller than 1. The vertical disparities are dominated by terms independent of distance; to an excellent approximation, we have  
$$\eta_\Delta \approx H_\Delta\sin\eta_c\cos\eta_c\left(\tan\alpha_c - \tan H_c\,\sec\alpha_c\sqrt{1+\tan^2\eta_c\cos^2\alpha_c}\right),$$
$$\kappa_\Delta \approx \frac{H_\Delta}{\cos H_c}\sin\kappa_c\sin(\alpha_c - H_c),$$
(24)
where we have used tan κ = tan η cos α to substitute for elevation latitude κ in the expression for elevation-longitude vertical disparity, η Δ. 
Thus, for objects at approximately the same distance as fixation, the only remaining dependence on viewing distance is through the convergence angle H Δ. Changes in scene depth produce negligible changes in vertical disparity: to a good approximation, vertical disparity is independent of scene structure, varying across the visual field only with slow gradients that reflect the current binocular eye position. This statement is true all across the retina (i.e., for all α c and κ c). 
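A small numerical sketch of this point (the convergence, gaze azimuth, and patch size are assumed values): Equation 23's elevation-latitude expression is evaluated over a ±10° patch for objects 5% nearer and 5% further than fixation, and the resulting change in vertical disparity is only a tenth of the disparity itself:

```matlab
% Sketch: sensitivity of elevation-latitude vertical disparity (Equation 23)
% to a +/-5% change in object distance relative to fixation.
Hdelta = deg2rad(5);                             % convergence (assumed)
Hc     = deg2rad(10);                            % gaze azimuth (assumed)
[alphac, kappac] = meshgrid(deg2rad(-10:10));    % +/-10 deg patch of visual field

kappaD = @(delta) (Hdelta*(1-delta)/cos(Hc)) .* sin(kappac) .* sin(alphac - Hc);

base   = abs(kappaD(0));                         % Equation 24 (delta = 0)
spread = abs(kappaD(0.05) - kappaD(-0.05));      % effect of the depth change
fprintf('largest vertical disparity in patch : %5.2f arcmin\n', rad2deg(max(base(:)))*60);
fprintf('largest change from +/-5%% in depth  : %5.2f arcmin\n', rad2deg(max(spread(:)))*60);
```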
For horizontal disparity, the situation is more subtle. It can be shown that, under the same approximations used in Equation 23, azimuth-longitude horizontal disparity is given by  
$$\alpha_\Delta \approx -\frac{H_\Delta(1-\delta)}{\cos H_c}\sec\kappa_c\cos(\alpha_c - H_c) + H_\Delta.$$
(25)
 
This equation resembles Equation 23, so at first sight it seems that we can drop δ as being small in comparison with 1, meaning that horizontal disparity is also independent of object distance. If H c, α c, and κ c are large, this is correct. Under these “extreme” conditions (far from the fovea, large gaze angles), horizontal disparity behaves just like vertical disparity. It is dominated by eye position and location in the visual field, with object distance making only a small contribution. However, the conditions of most relevance to stereo vision are those within ∼10° of the fovea, where spatial resolution and stereoacuity are high. In this region, a key difference now emerges between horizontal and vertical disparities: Vertical disparity becomes independent of scene structure, whereas horizontal disparity does not. The terms in Equation 25 that are independent of object distance δ cancel out nearly exactly, meaning that the term of order δ is the only one left. Thus, horizontal disparity becomes  
$$\alpha_\Delta \approx H_\Delta(\delta - \alpha_c\tan H_c)\qquad(\text{parafoveal approximation}).$$
(26)
 
This expression is valid near the fixation point (δ, α c, κ c all small) and for gaze angles that do not approach 90° (where Equation 25 diverges). Near the fovea, elevation latitude and elevation longitude become indistinguishable (see lines of latitude and longitude in Figure 8). For the near-fixation objects we are considering, therefore, the elevation-latitude and elevation-longitude expressions for vertical disparity that we derived previously (Equation 24) become identical, and both are equal to  
$$\eta_\Delta \approx \kappa_\Delta \approx H_\Delta\,\kappa_c\,(\alpha_c - \tan H_c)\qquad(\text{parafoveal approximation}).$$
(27)
 
Critically, this means that for the near-fixation case most relevant to stereo vision, horizontal disparity reflects scene structure as well as eye position, whereas vertical disparity depends only on eye position. Consequently, estimates of every eye position parameter except elevation can be obtained from vertical disparity and used to interpret horizontal disparity. In this section, we have set 3 of the 6 eye position parameters (cyclovergence, cycloversion, and vertical vergence error) to zero, meaning that we only have 2 eye position parameters left to extract. Thus, before proceeding, we shall generalize, in the next section, to allow non-zero values for all 6 eye position parameters. We shall then show how 5 of these parameters can be simply derived from the vertical disparity in the vicinity of the fovea. 
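To make the contrast concrete, here is a minimal MATLAB sketch (all parameter values are ours, chosen only for illustration) that evaluates the parafoveal approximations of Equations 26 and 27 over a range of depths: horizontal disparity changes by tens of arcminutes as the object moves in depth, while vertical disparity stays fixed:

```matlab
% Sketch: parafoveal approximations of Equations 26 and 27. Horizontal
% disparity tracks the object's depth relative to fixation (delta);
% vertical disparity does not. All parameter values are illustrative.
Hdelta = deg2rad(4);   Hc = deg2rad(8);       % convergence and gaze azimuth
alphac = deg2rad(3);   kappac = deg2rad(2);   % a near-foveal visual direction

delta      = (-0.1:0.05:0.1)';                % 10% nearer to 10% further
alphaDelta = Hdelta * (delta - alphac*tan(Hc));                        % Equation 26
kappaDelta = Hdelta * kappac * (alphac - tan(Hc)) * ones(size(delta)); % Equation 27

disp(table(delta, rad2deg(alphaDelta)*60, rad2deg(kappaDelta)*60, ...
    'VariableNames', {'delta', 'horiz_disp_arcmin', 'vert_disp_arcmin'}));
```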
Obtaining eye position from vertical disparity and its derivatives at the fovea
In this section, we derive approximate expressions for 5 binocular eye position parameters in terms of the vertical disparity and its derivatives near the fovea. As throughout this paper, we work in terms of retinal disparity, since this is all that is available to the visual system before eye position has been computed. We do not require any special properties of the viewed surface other than that it is near fixation and smooth, so that all derivatives exist. We allow small amounts of cycloversion, cyclovergence, and vertical vergence error but restrict ourselves to small gaze angles. Mathematically, this means we approximate cos H c ≈ 1 and sin H c ≈ H c. In Figure 12, we show that our results hold up well at least out to H c = 15°. This is likely to cover most gaze angles adopted during natural viewing. We work in the vicinity of the fovea, so retinal azimuth α c and elevation κ c are also both small. In this case, the distinction between latitude and longitude becomes immaterial. We shall write our expressions in terms of elevation latitude κ, but in this foveal approximation, the same expressions would also hold for elevation longitude η. We shall show how our equations for the vertical disparity field, κ Δ, can be used to read off gaze angle, convergence, cyclovergence, cycloversion, and vertical vergence. 
We begin with Equation 10, which expressed I/ R in terms of horizontal vergence H Δ and the fractional distance of an object relative to fixation, δ. If there is a vertical vergence error V Δ, then there will not be a fixation point, because gaze rays will not intersect. However, Equation 10 is still valid, with δ interpreted as a fraction of the distance to the point where the gaze rays most closely approach each other. We substitute Equation 10 into our most general expression for vertical disparity, Equation 2, and make the additional approximation that the gaze azimuth H c and overall torsion T c are both small:  
$$\kappa_\Delta \approx \left(\sin\alpha_c\sin\kappa_c - H_c\cos\alpha_c\sin\kappa_c + T_c\cos\kappa_c\right)H_\Delta(1-\delta) - (T_c\cos\alpha_c)H_\Delta - (\cos\alpha_c + H_c\sin\alpha_c)V_\Delta - (\sin\alpha_c)T_\Delta.$$
(28)
 
When we finally make the approximation that we are near the fovea, i.e., that α c and κ c are also small, we find that the lowest order terms are  
$$\kappa_\Delta = -V_\Delta - \alpha_c T_\Delta + H_\Delta\kappa_c(\alpha_c - H_c) - \delta H_\Delta T_c - \alpha_c H_c V_\Delta.$$
(29)
 
Because we are allowing non-zero torsion T c, vertical disparity now also depends on object distance, through δ. However, this is a third-order term. To first order, the vertical disparity at the fovea measures any vertical vergence error. Thus, we can read off vertical vergence V Δ simply from the vertical disparity measured at the fovea ( Figure 11):  
$$V_\Delta \approx -\kappa_\Delta.$$
(30)
 
Figure 11
 
Partial differentiation on the retina. The cyclopean retina is shown colored to indicate the value of the vertical disparity field at each point. Differentiating with respect to elevation κ while holding azimuth constant means finding the rate at which vertical disparity changes as one moves up along a line of azimuth longitude, as shown by the arrow labeled ∂/∂ κ. Differentiating with respect to azimuth α, while holding elevation constant, means finding the rate of change as one moves around a line of elevation latitude. This figure was generated by Fig_DifferentiatingAtFovea.m in the Supplementary material.
To derive expressions for the remaining eye position parameters, we will need to differentiate Equation 28 with respect to direction in the visual field. We will use subscripts as a concise notation for differentiation: for example, κ Δ α indicates the first derivative of the vertical disparity κ Δ with respect to azimuthal position in the visual field, α c, holding the visual field elevation κ c constant. Similarly, κ Δ ακ is the rate at which this gradient itself alters as one moves vertically:  
$$\kappa_{\Delta\alpha} \equiv \left.\frac{\partial\kappa_\Delta}{\partial\alpha_c}\right|_{\kappa_c};\qquad \kappa_{\Delta\alpha\kappa} \equiv \left.\frac{\partial}{\partial\kappa_c}\right|_{\alpha_c}\left.\frac{\partial}{\partial\alpha_c}\right|_{\kappa_c}\kappa_\Delta.$$
(31)
 
Note that these derivatives examine how vertical disparity changes on the retina as the eyes view a given static scene. This is not to be confused with the gradient discussed in Figure 10, which considered how vertical disparity varies as an object moves in depth along a particular visual direction. We assume that the visual scene at the fovea consists of a smooth surface that remains close to fixation in the vicinity of the fovea. The surface's shape is specified by δ, the fractional difference between the distance to the surface and the distance to fixation. In the Average vertical disparity expected at different positions on the retina section, we were considering a single point in the cyclopean retina, and so δ was just a number: the fractional distance at that point. Since we are now considering changes across the retina, δ is now a function of retinal location, δ( α c, κ c). The first derivatives of δ specify the surface's slant, its second derivatives specify surface curvature, and so on. δ and its derivatives δ α, and so on, are assumed to remain small in the vicinity of the fovea. 
After performing each differentiation of Equation 28, we then apply the parafoveal approximation and retain only the lowest order terms. In this way, we obtain the following set of relationships between derivatives of the vertical disparity field and the binocular eye position parameters:  
$$\kappa_\Delta \approx -V_\Delta,$$
(32)
 
$$\kappa_{\Delta\alpha} \approx -T_\Delta,$$
(33)
 
$$\kappa_{\Delta\kappa} \approx -H_c H_\Delta,$$
(34)
 
$$\kappa_{\Delta\alpha\kappa} \approx H_\Delta,$$
(35)
 
$$\kappa_{\Delta\kappa\kappa} \approx -T_c H_\Delta,$$
(36)
assuming that α c, κ c, δ, δ α, δ κ, δ αα, δ ακ, δ κκ, H c, T c, H Δ, T Δ, and V Δ are all small. 
To lowest order, there is no dependence on scene structure: under this near-fixation approximation, vertical disparity and its derivatives depend only on eye position. Each term enables us to read off a different eye-position parameter. Any vertical disparity at the fovea reflects a vertical vergence error (Equation 32; Howard, Allison, & Zacher, 1997; Read & Cumming, 2006). The rate at which vertical disparity changes as we move horizontally across the visual field, sometimes called the vertical shear disparity (Banks et al., 2001; Kaneko & Howard, 1997b), tells us the cyclovergence (Equation 33). A “saddle” pattern, i.e., the mixed second derivative with respect to horizontal and vertical position in the visual field, tells us the vergence (Equation 35; Backus et al., 1999). Given this, the rate at which vertical disparity changes as we move vertically across the visual field tells us the gaze angle (Equation 34; Backus & Banks, 1999; Banks & Backus, 1998; Gillam & Lawergren, 1983; Mayhew, 1982; Mayhew & Longuet-Higgins, 1982). Finally, the second derivative with respect to vertical position provides an estimate of cycloversion (Equation 36). Although many of these relationships with aspects of eye position have been identified in the past, it is useful to be able to identify the extent to which the approximations hold under a range of eye positions. 
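To make the bookkeeping explicit, the following MATLAB sketch (our own illustration; the function name is hypothetical and it is not one of the supplementary scripts) simply inverts Equations 32–36, taking as input the foveal vertical disparity and its derivatives, however they have been estimated:

```matlab
function eye = eyePosFromVerticalDisparity(kD, kDa, kDk, kDak, kDkk)
% Sketch: read binocular eye position off the foveal vertical disparity
% field by inverting Equations 32-36 (parafoveal, small-angle regime).
%   kD   : kappa_Delta at the fovea
%   kDa  : d(kappa_Delta)/d(alpha_c)
%   kDk  : d(kappa_Delta)/d(kappa_c)
%   kDak : mixed second derivative d2(kappa_Delta)/d(alpha_c)d(kappa_c)
%   kDkk : second derivative d2(kappa_Delta)/d(kappa_c)^2
eye.Vdelta = -kD;            % Equation 32: vertical vergence error
eye.Tdelta = -kDa;           % Equation 33: cyclovergence
eye.Hdelta =  kDak;          % Equation 35: convergence
eye.Hc     = -kDk  / kDak;   % Equation 34: gaze azimuth
eye.Tc     = -kDkk / kDak;   % Equation 36: cycloversion
end
```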
The relationships given in Equations 32–36 provide an intuitive insight into how different features of the parafoveal vertical disparity field inform us about eye position. The approximations used lead to some small errors, but they are sufficiently small to ignore under most circumstances. For example, retaining third-order terms in the expression for the second derivative κ Δ κκ yields  
$$\kappa_{\Delta\kappa\kappa} \approx -H_\Delta T_c - H_\Delta\delta_{\kappa\kappa}T_c - 2H_\Delta\delta_\kappa(\alpha_c - H_c) + H_\Delta(H_c - \alpha_c)\kappa_c + \delta H_\Delta T_c.$$
(37)
 
If torsion T c is zero, then near the fovea κ Δ κκ will in fact be dominated by a term depending on the rate of change of δ as we move vertically in the visual field, reflecting surface slant:  
$$\kappa_{\Delta\kappa\kappa} \approx 2H_\Delta H_c\,\delta_\kappa.$$
(38)
 
Applying the formula in Equation 36 would lead us to conclude T c ≈ −2 H c δ κ, instead of the correct value of zero. Now Equation 36 was derived assuming small H c and δ κ, so the misestimate will be small but nevertheless present. In Figure 12, we examine how well our approximations bear up in practice. Each panel shows the eye position parameters estimated from Equations 32–36 plotted against their actual values, for 1000 different simulations. On each simulation run, first of all a new binocular eye posture was generated, by picking values of H c, T c, V c, H Δ, T Δ, and V Δ randomly from uniform distributions. Torsion T c, cyclovergence T Δ, and vertical vergence error V Δ are all likely to remain small in normal viewing and were accordingly picked from uniform distributions between ±2°. Gaze azimuth and elevation were picked from uniform distributions between ±15°. Convergence was picked uniformly from the range 0 to 15°, representing viewing distances from infinity to 25 cm or so. Note that it is not important, for purposes of testing Equations 32–36, to represent the actual distribution of eye positions during natural viewing but simply to span the range of those most commonly adopted. A random set of points in space was then generated in the vicinity of the chosen fixation point. The X and Y coordinates of these points were picked from uniform random distributions, and their Z coordinate was then set according to a function Z(X, Y), whose exact properties were picked randomly on each simulation run but which always specified a gently curving surface near fixation (for details, see legend to Figure 12). The points were then projected onto the two eyes, using exact projection geometry with no small baseline or other approximations, and their cyclopean locations and disparities were calculated. In order to estimate derivatives of the local vertical disparity field, the vertical disparities of points within 0.5° of the fovea, of which there were usually 200 or so, were then fitted with a parabolic function:  
$$\kappa_\Delta = c_0 + c_1\alpha_c + c_2\kappa_c + c_3\alpha_c^2 + c_4\kappa_c^2 + c_5\alpha_c\kappa_c.$$
(39)
The fitted coefficients c i were then used to obtain estimates of vertical disparity and its gradients at the fovea (κ Δ α = c 1, and so on). Finally, these were used in Equations 32–36 to produce the estimates of eye position shown in Figure 12. 
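A stripped-down illustration of this fitting step (ToFitParabola.m and ExtractEyePosition.m in the Supplementary material implement the real simulation; the synthetic data and coefficient values below are our own): least squares recovers the coefficients of Equation 39, and the second derivatives are then 2c 4 and c 5 rather than the raw coefficients:

```matlab
% Sketch: fit Equation 39 to synthetic near-foveal vertical disparities
% and read off the foveal value and derivatives. All numbers illustrative.
n  = 200;
ac = deg2rad(0.5)*(2*rand(n,1)-1);               % azimuths within 0.5 deg
kc = deg2rad(0.5)*(2*rand(n,1)-1);               % elevations within 0.5 deg
ctrue = [1e-4; -2e-3; -5e-3; 0; 4e-3; 0.08];     % assumed [c0..c5]
kD = [ones(n,1), ac, kc, ac.^2, kc.^2, ac.*kc]*ctrue + 1e-7*randn(n,1);

A = [ones(n,1), ac, kc, ac.^2, kc.^2, ac.*kc];   % design matrix of Equation 39
c = A \ kD;                                      % least-squares fit

kappaD_fovea      = c(1);      % kappa_Delta at the fovea           (c0)
kappaD_alpha      = c(2);      % d/d(alpha_c)                       (c1)
kappaD_kappa      = c(3);      % d/d(kappa_c)                       (c2)
kappaD_alphakappa = c(6);      % mixed second derivative            (c5)
kappaD_kappakappa = 2*c(5);    % d2/d(kappa_c)^2 = 2*c4 (note the factor 2)
```

These five numbers are exactly the inputs required by Equations 32–36 (or by a helper like the sketch above) to produce the eye position estimates plotted in Figure 12.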
Figure 12
 
Scatterplots of estimated eye position parameters against actual values, both in degrees, for 1000 different simulated eye positions. Black lines show the identity line. Some points with large errors fall outside the range of the plots, but the quoted median absolute errors are for all 1000 simulations. On each simulation run, eye position was estimated as follows. First, the viewed surface was randomly generated. Head-centered X and Y coordinates were generated randomly near the fixation point (X F, Y F, Z F). Surface Z-coordinates were generated from $Z_d = \sum_{ij} a_{ij} X_d^{\,i} Y_d^{\,j}$, where X d is the X-position relative to fixation, X d = X − X F (Y d, Z d similarly, all in centimeters), i and j both run from 0 to 3, and the coefficients a ij are picked from a uniform random distribution between ±0.02 on each simulation run. This yielded a set of points on a randomly chosen smooth 3D surface near fixation. These points were then projected to the retinas, and the vertical disparity within 0.5° of the fovea was fitted with a parabolic surface. This simulation is Matlab program ExtractEyePosition.m in the Supplementary material.
The results in Figure 12 show that most eye position parameters can be recovered with remarkable accuracy. The worst is the cycloversion T c, which is recovered quite accurately (say to within 10 arcmin) about half the time, but the rest of the time is widely scattered, for the reasons discussed around Equation 38. Nevertheless, overall performance is good, with a median error of <0.3°. This shows that our simple intuitive analytical expressions relating eye position to vertical disparity (Equations 32–36) are reliable under most circumstances. 
This is a theoretical paper, and the expressions above (Equations 32–36) are simply a mathematical statement, spelling out the relationships that exist between eye position and vertical disparity. Does the visual system, in fact, use retinal eye position estimates extracted from the disparity field? In the case of vertical vergence, cycloversion, and cyclovergence, the answer seems to be yes. Disparity fields indicating non-zero values of these parameters elicit corrective eye movements tending to null the retinal disparity, suggesting that the retinal disparity field was taken as evidence of ocular misalignment (Carpenter, 1988; Howard, 2002). In the case of gaze azimuth and convergence, the use is more subtle. There is little evidence that the perceived head-centric direction of a stimulus corresponds to the gaze azimuth indicated by its vertical disparity field (Banks et al., 2002; Berends et al., 2002). Rather, retinal estimates of gaze azimuth and vergence seem to be used to convert horizontal disparity directly into estimates of surface slant and curvature. 
Here, we briefly sketch this process, showing how the eye position parameters obtained from Equations 32–36 can be used to interpret horizontal disparity, α Δ. In the domain we are considering, horizontal disparity differs from vertical disparity in that it is affected by the viewed scene as well as eye position (recall that Equation 29 showed that, to lowest order, vertical disparity is independent of scene structure). The horizontal disparity itself depends on the distance of the viewed surface relative to fixation, δ, while its first derivatives reflect the surface's slant. Again retaining terms to lowest order, it can be shown that  
$$\alpha_\Delta \approx \delta H_\Delta - T_c V_\Delta + \kappa_c T_\Delta,$$
$$\alpha_{\Delta\alpha} \approx \delta_\alpha H_\Delta - H_\Delta(H_c - \alpha_c) - \kappa_c V_\Delta,$$
$$\alpha_{\Delta\kappa} \approx \delta_\kappa H_\Delta - H_\Delta\kappa_c + (H_c - \alpha_c)V_\Delta + T_\Delta,$$
(40)
where δ α is the rate of change of δ as we move horizontally in the visual field, δ α = ∂ δ/∂ α| κ. It is a measure of surface slant about a vertical axis, and δ κ, defined analogously, reflects surface slant about a horizontal axis. δ is a dimensionless quantity, the fractional distance from the fixation point, but the derivative δ α is approximately equal to R α/ R, where R is the distance to the fixated surface and R α is the rate at which this distance changes as a function of visual field azimuth. Thus, δ α is the tangent of the angle of slant about a vertical axis, while δ κ represents the tangent of the angle of slant about a horizontal axis. We can invert Equation 40 to obtain estimates of surface distance and slant in terms of horizontal disparity and eye position, and then substitute in the eye position parameters estimated from vertical disparity. Note that the estimates of surface slant are unaffected by small amounts of cycloversion, T c. This is convenient for us, since cycloversion was the aspect of eye position captured least successfully by our approximate expressions ( Figure 12). 
From Equations 32–36 and Equation 40, we can solve for δ and its derivatives in terms of horizontal and vertical disparities and their derivatives:  
$$\delta \approx \frac{\alpha_\Delta + \kappa_c\,\kappa_{\Delta\alpha}}{\kappa_{\Delta\alpha\kappa}} + \frac{\kappa_{\Delta\kappa\kappa}\,\kappa_\Delta}{\kappa_{\Delta\alpha\kappa}^{2}},\qquad \delta_\alpha \approx \frac{\alpha_{\Delta\alpha} - \kappa_{\Delta\kappa} - \kappa_c\,\kappa_\Delta}{\kappa_{\Delta\alpha\kappa}},$$
(41)
 
$$\delta_\kappa \approx \frac{\alpha_{\Delta\kappa} + \kappa_{\Delta\alpha}}{\kappa_{\Delta\alpha\kappa}} - \frac{\kappa_\Delta\,\kappa_{\Delta\kappa}}{\kappa_{\Delta\alpha\kappa}^{2}} + \kappa_c.$$
(42)
 
These expressions relate horizontal and vertical retinal disparities directly to surface properties, without any explicit dependence on eye position. It seems that the visual system does something similar. As many previous workers have noted, the visual system appears to use purely local estimates, with no attempt to enforce global consistency in the underlying eye postures implicit in these relationships. Thus, values of surface slant consistent with opposite directions of gaze (left versus right) can simultaneously be perceived at different locations in the visual field (Allison, Rogers, & Bradshaw, 2003; Kaneko & Howard, 1997a; Pierce & Howard, 1997; Rogers & Koenderink, 1986; Serrano-Pedraza, Phillipson, & Read, in press). Enforcing consistency across different points of the visual field would require lateral connections that might be quite costly in cortical wiring and would be completely pointless, since in the real world eye position must always be constant across the visual field (Adams et al., 1996; Garding et al., 1995). 
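As an illustration of how Equations 41 and 42 might be packaged, here is a minimal MATLAB sketch (ours; the function name is hypothetical). It takes the local horizontal and vertical disparities and their derivatives, however obtained, and returns the fractional distance δ and the two slant gradients, together with the slant angles they imply:

```matlab
function sfc = surfaceFromDisparities(kappac, aD, aDa, aDk, kD, kDa, kDk, kDak, kDkk)
% Sketch: surface distance and slant from local disparity measurements,
% Equations 41 and 42. kappac is the cyclopean elevation of the patch;
% aD, aDa, aDk are the horizontal disparity and its alpha/kappa derivatives;
% kD, kDa, kDk, kDak, kDkk are the vertical disparity and its derivatives.
sfc.delta       = (aD + kappac*kDa)/kDak + kDkk*kD/kDak^2;   % Equation 41
sfc.delta_alpha = (aDa - kDk - kappac*kD)/kDak;              % Equation 41
sfc.delta_kappa = (aDk + kDa)/kDak - kD*kDk/kDak^2 + kappac; % Equation 42
% Slant angles implied by the gradients (see the text below Equation 40).
sfc.slantAboutVerticalAxisDeg   = atand(sfc.delta_alpha);
sfc.slantAboutHorizontalAxisDeg = atand(sfc.delta_kappa);
end
```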
Relationship to previous literature
There is a substantial literature on obtaining metric information about scene structure from two-dimensional disparity. It can be divided into two fairly distinct categories: “photogrammetric” and “psychological.” The first comes mainly from the computer vision community (Hartley & Zisserman, 2000; Longuet-Higgins, 1981). Here, one uses the two-dimensional disparities of a limited number of point correspondences to solve for binocular eye position, and then back-projects to calculate each object's 3D location in space. The second approach is more common in the psychological literature (Backus & Banks, 1999; Backus et al., 1999; Banks et al., 2001; Banks et al., 2002; Kaneko & Howard, 1996, 1997b; Koenderink & van Doorn, 1976; Rogers & Bradshaw, 1993, 1995; Rogers & Cagenello, 1989). Here, one calculates quantities such as horizontal and vertical size ratios, which are effectively local derivatives of disparity, and uses these either to extract estimates of eye position parameters (Banks et al., 2001; Kaneko & Howard, 1997b) or to move directly to scene properties such as surface slant, without computing an explicit estimate of eye position. These two approaches are closely related (Adams et al., 1996; Garding et al., 1995). In the photogrammetric approach, the point correspondences can be anywhere in the visual field (subject to certain restrictions, e.g., not all collinear; Longuet-Higgins, 1981). If the points all happen to be closely spaced together, then they contain the same information as the derivatives of disparity at that location. Thus, in this regard the psychological literature represents a special case of the photogrammetric approach: extracting eye position from a particular set of correspondences. 
However, the photogrammetric literature does not provide explicit expressions for eye position in terms of disparity; rather, eye position is given implicitly, in large matrices that must be inverted numerically. Because the treatment is fully general, the distinction between horizontal and vertical disparities is not useful (e.g., because a torsion of 90° transforms one into the other, or because some epipolar lines become vertical as gaze azimuth approaches 90°). Thus, in the machine vision literature, disparity is considered as a vector quantity, rather than analyzed as two separate components. The psychological literature is less general but offers a more intuitive understanding of how eye position affects disparity in the domain most relevant to natural stereo viewing (objects close to the fovea, eyes close to primary position). As we saw in the previous section, in this domain, disparity decomposes naturally into horizontal and vertical components, which have different properties. Critically, in this domain, vertical disparity is essentially independent of scene structure, and eye position can be estimated from this component alone. 
However, as far as we are aware, no paper gives explicit expressions for all eye position parameters in terms of retinal vertical disparity. Much of the psychological literature jumps straight from disparity derivatives to properties such as surface slant, without making explicit the eye position estimates on which these implicitly depend. In addition, the psychological literature can be hard to follow, because it does not always make it clear exactly what definition of disparity is being used. Sometimes, the derivation appears to use optic array disparity, so it is not clear how the brain could proceed given only retinal disparity; or the derivation appears to rely on special properties of the scene (e.g., it considers a vertically oriented patch), and it is not clear how the derivation would proceed if this property did not hold. Our derivation makes no assumptions about surface orientation and is couched explicitly in retinal disparity. 
Our expression for δ α, Equation 41, is a version of the well-known expressions deriving surface slant from horizontal and vertical size ratios (Backus & Banks, 1999; Backus et al., 1999; Banks et al., 2002; Banks et al., 2001; Kaneko & Howard, 1996, 1997b; Koenderink & van Doorn, 1976; Rogers & Bradshaw, 1993, 1995; Rogers & Cagenello, 1989). “Horizontal size ratio” or HSR is closely related to the rate of change of horizontal disparity as a function of horizontal position in the visual field, whereas “vertical size ratio” reflects the gradient of vertical disparity as a function of vertical position. In the notation of Backus and Banks (1999), for example, which defines HSR and VSR around the fixation point, 
$$\ln(\mathrm{HSR}) \approx \alpha_{\Delta\alpha},\qquad \ln(\mathrm{VSR}) \approx \kappa_{\Delta\kappa}.$$
(43)
In their notation, S is the surface slant, so our δα corresponds to tan(S), and μ is the convergence, our HΔ ≈ κΔακ. Thus, if there is no vertical disparity at the fovea, Equation 41 becomes 
$$\delta_\alpha \approx \tan S \approx \frac{\alpha_{\Delta\alpha} - \kappa_{\Delta\kappa}}{\kappa_{\Delta\alpha\kappa}} \approx \frac{1}{\mu}\ln\!\left(\frac{\mathrm{HSR}}{\mathrm{VSR}}\right),$$
(44)
which is Equation 1 of Backus et al. (1999) and Backus and Banks (1999). 
This relationship has been proposed as an explanation of the induced effect. In the induced effect, one eye's image is stretched vertically by a small factor (1 + m) about the fixation point, thus adding a term mκ c to the vertical disparity field. The vertical disparity at the fovea is still zero, and the only vertical disparity derivative to be affected is κ Δ κ, which gains a term m. This causes a misestimate of surface slant about a vertical axis:  
$$\delta_{\alpha,\mathrm{est}} \approx \delta_{\alpha,\mathrm{true}} - \frac{m}{H_\Delta}.$$
(45)
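A worked example of the size of this effect, under assumed viewing parameters (these numbers are ours, not the paper's): a 2% vertical magnification of one eye's image viewed at about half a metre, where Equation 21 gives a convergence of roughly H Δ ≈ I/R 0 ≈ 0.13 rad, shifts the estimated slant by about atan(m/H Δ) ≈ 9°:

```matlab
% Sketch: predicted slant misestimate in the induced effect, Equation 45.
I  = 6.5;  R0 = 50;            % interocular distance and viewing distance, cm
Hdelta = I / R0;               % convergence angle, radians (Equation 21, Hc = 0)
m  = 0.02;                     % 2% vertical magnification of one eye's image
slantErrorDeg = atand(m / Hdelta);
fprintf('predicted slant misestimate: %.1f deg\n', slantErrorDeg);   % about 8.7 deg
```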
 
Size-ratio-based theories of the induced effect are often contrasted with the photogrammetric approach based on misestimates of gaze angle (Clement, 1992; Garding et al., 1995; Longuet-Higgins, 1982; Mayhew & Longuet-Higgins, 1982; Ogle, 1952). Size-ratio theories use local disparity derivatives to produce a direct estimate of slant. Photogrammetric theories use a set of point correspondences distributed across the whole retina. These are used to obtain an estimate of eye position, which is then used to interpret horizontal disparity. The treatment here makes clear that the mathematics underlying both theories is really the same. Suppose that in the photogrammetric approach, the point correspondences are close together in the visual field. The points project to slightly different points on the cyclopean retina and have slightly different disparities. We can express these differences as disparity gradients on the cyclopean retina, or equivalently as size ratios. From these disparity gradients we can derive eye posture, and hence the surface distance and slant. Thus, both size ratio and photogrammetric explanations of the induced effect rely, mathematically, on the fact that vertical magnification can be interpreted as a misestimate of gaze angle. This is obscured in Equation 44 because there is no explicit mention of gaze angle, but in fact, as we see by comparing Equations 32–36 and Equation 40, the reason that VSR is useful in interpreting horizontal disparity is because it acts as a proxy for gaze angle (Adams et al., 1996). 
The real difference between the theories is the scale at which they operate (Adams et al., 1996; Garding et al., 1995). Mayhew and Longuet-Higgins originally described an algorithm for fitting a unique eye posture to the correspondences across the whole retina (Mayhew, 1982; Mayhew & Longuet-Higgins, 1982). This would seem to be the best strategy for obtaining the most reliable estimate of eye position, and for that reason is the approach used in computer vision. However, as noted in the Discussion section, the brain appears to proceed locally, at least in the case of surface slant. That is, it directly estimates surface slant from local disparity derivatives, as in Equation 41, without checking that the eye postures implied by these local derivatives are globally consistent. There seems to be considerable inter-subject variation in what “local” means, ranging from as large as 30° for some subjects down to 3° for others (Kaneko & Howard, 1997a; Serrano-Pedraza et al., in press). 
Discussion
Vertical disparity has been much discussed in recent years (Adams et al., 1996; Backus & Banks, 1999; Backus et al., 1999; Banks & Backus, 1998; Banks et al., 2002; Banks et al., 2001; Berends & Erkelens, 2001; Berends et al., 2002; Bishop, 1989; Brenner et al., 2001; Clement, 1992; Cumming, Johnston, & Parker, 1991; Garding et al., 1995; Gillam, Chambers, & Lawergren, 1988; Kaneko & Howard, 1997a; Longuet-Higgins, 1981, 1982; Mayhew, 1982; Mayhew & Longuet-Higgins, 1982; Read & Cumming, 2006; Rogers & Bradshaw, 1993; Schreiber et al., 2001; Serrano-Pedraza et al., in press; Serrano-Pedraza & Read, 2009; Stenton, Frisby, & Mayhew, 1984; Stevenson & Schor, 1997). However, progress has been hampered by the lack of a clear agreed set of definitions. In the Introduction section, we identified no fewer than 4 definitions of vertical disparity: two types of optic-array disparity, and two types of retinal disparity. Individual papers are not always as clear as they could be about which definition they are using, and the different properties of the different definitions are not widely appreciated. This means that different papers may appear at first glance to contradict one another. 
In this paper, we aim to clarify the situation by identifying two definitions of retinal vertical disparity that are in common use in the literature. Vertical disparity is sometimes defined as the difference between the elevation-longitude coordinates of the two retinal images of an object, sometimes as the difference in elevation latitude. Both definitions are valid and sensible, but they have rather different properties, as summarized in Table 1. The differences between the two types of vertical disparity are most significant for objects not at the fixation distance ( Figure 8CD), and in the visual periphery. The periphery is where retinal vertical disparities tend to be largest during natural viewing, which has motivated physiologists to investigate vertical disparity tuning there (Durand et al., 2002). Psychophysically, it has been shown that the perceived depth of centrally viewed disparities (Rogers & Bradshaw, 1993) can be influenced by manipulations of “vertical” disparities in the periphery (i.e., when the field of view is large). Thus, it is particularly important to clarify the difference between the alternative definitions of vertical disparity where stimuli fall on peripheral retina. 
For objects close to the fixation point, the images fall close to the fovea in both eyes. Here, latitude and longitude definitions of vertical disparity reduce to the same quantity. In this regime, vertical disparity is much less strongly affected than horizontal disparity by small variations in depth relative to the fixation point. Where this variation is small, it can be treated as independent of surface structure. We have derived expressions giving estimates of each eye position parameter, except elevation, in terms of vertical disparity and its derivatives at the fovea. Although these are only approximations, they perform fairly well in practice ( Figure 12). These expressions are closely related to the vertical size ratios discussed in the literature (Backus & Banks, 1999; Backus et al., 1999; Banks et al., 2002; Gillam & Lawergren, 1983; Kaneko & Howard, 1996, 1997b; Koenderink & van Doorn, 1976; Liu, Stevenson, & Schor, 1994; Rogers & Bradshaw, 1993). 
Little if anything in this paper will be new to experts in vertical disparity. However, for non-cognoscenti, we hope that it may clarify some points that can be confusing. Even for experts, it may serve as a useful reference. We identify, in particular, four areas where we hope this paper makes a useful contribution. 
  1.  
    Previous derivations have often been couched in terms of head-centric disparity or have assumed that the surfaces viewed have special properties such as being oriented vertically. Our derivations are couched entirely in terms of retinal images and do not assume the viewed surface has a particular orientation. We feel this may provide a more helpful mathematical language for describing the properties of disparity encoding in early visual cortex.
  2.  
    We present analytical expressions for both elevation-longitude and elevation-latitude vertical disparities that are valid across the entire retina, for arbitrary gaze angles and cycloversion, and for non-zero vertical vergence and cyclovergence. Much previous analysis has relied on parafoveal approximations and has assumed zero vertical vergence, cycloversion, and cyclovergence.
  3.  
    We present analytical expressions for the average vertical disparity expected at each position in the visual field, up to a scale factor representing the mean convergence.
  4.  
    Explanations relating the perceptual effects of vertical disparity to disparity gradients have sometimes been contrasted with those based on explicit estimates of eye position (Garding et al., 1995; Longuet-Higgins, 1982; Mayhew & Longuet-Higgins, 1982). This paper is the first to give explicit (though approximate) expressions for 5 binocular eye position parameters in terms of retinal vertical disparity at the fovea. The way in which all 5 eye position parameters can be derived immediately from vertical disparity derivatives has not, as far as we are aware, been laid out explicitly before. Thus, this paper clarifies the underlying unity of gaze-angle and vertical-size-ratio explanations of vertical-disparity illusions such as the induced effect.
Binocular eye position is specified by 6 parameters, 5 of which we have been able to derive from the vertical disparity field around the fovea. The exception is elevation. All the other parameters have a meaning as soon as the two optic centers are defined (and a zero torsion line on the retina), whereas elevation needs an additional external reference frame to say where “zero elevation” is. Disparity is, to first order, independent of how this reference is chosen, meaning that elevation cannot be directly derived from disparity. However, in practice, the visual system obeys Donders' law, meaning that there is a unique relationship between elevation and torsion. The torsional states of both eyes can be deduced from the vertical disparity field, as laid out in Equations 33 and 36. Thus, in practice the brain could derive torsion from the gradient of vertical disparity and use this to obtain an estimate of elevation independent of oculomotor information regarding current eye position (although clearly it would rely on an association between torsion and elevation that would ultimately stem from the oculomotor system). It has already been suggested that the Listing's law relationship between torsion and elevation helps in solving the stereo correspondence problem (Schreiber et al., 2001; Tweed, 1997c). The fact that it enables elevation to be deduced from the two-dimensional disparity field may be another beneficial side effect. 
Another benefit of the framework we have laid out is that it leads to a set of predictions about the physiological range of retinal disparities. The existing physiological literature does not test such predictions. For example, Durand et al. (2002) explain that “VD is naturally weak in the central part of the visual field and increases with retinal eccentricity” but then report their results in terms of head-centric Helmholtz disparity (Figure 1A), in which naturally occurring vertical disparities are always zero, everywhere in the visual field. This makes it impossible to assess whether the results of Durand et al. are consistent with the natural distribution of retinal vertical disparities to which they drew attention in their introduction. This paper has emphasized the importance of calculating neuronal tuning as a function of retinal vertical disparity (whether elevation longitude or latitude). Our expressions for average vertical disparity as a function of position in the visual field predict the expected sign of vertical disparity preference. It is intuitively clear that in natural viewing early cortical neurons viewing the top-right visual field should receive a diet of inputs in which the left half-image is higher on the retina than the right (Figure 3), and vice versa for those viewing the top-left visual field. One would expect the tuning of neurons to reflect this biased input. This simple qualitative prediction has not yet been discussed or examined in the physiological literature. Our analysis also makes quantitative predictions. For example, consider eccentricities 5° and 15° in the direction “northeast” from the fovea. Our calculations (Equation 13) predict that the mean vertical disparity tuning of V1 neurons at 15° eccentricity should be 9 times that at an eccentricity of 5°. This too could be tested by appropriate physiological investigations. 
There are, of course, limitations to the quantitative predictions we can make from geometric considerations alone. To predict the expected range of vertical disparities at any retinal location requires a knowledge of the statistics of binocular eye movements (especially version and vergence) under natural viewing conditions. As recent studies have pointed out, such statistics are quite difficult to gather (Hibbard, 2007; Liu et al., 2008), but they are crucial if the diet of 2D disparities received by binocular neurons across the retina is to be estimated accurately. 
Conclusion
The term “vertical disparity” is common in the stereo literature, and the impression is often given that it has an established definition and familiar properties. In fact, neither of these assumptions holds. If the terms “vertical” and “horizontal” are to continue to be used in discussions of binocular disparity, and we argue here that there are reasons in favor of doing so, it is critical that the respective definitions and properties should be set out explicitly, as we have done here. 
Supplementary Materials
ApproxDisparity.m 
DiagramOfVerticalDisparity_planes.m 
DrawEyeCoords.m 
DrawEyes.m 
ExtractEyePosition.m 
Fig_DifferentiatingAtFovea.m 
Fig_DiffRetCoords.m 
Fig_DistancesRS.m 
Fig_EpipolarLine.m 
Fig_ExpectedVDisp.m 
Fig_HelmholtzEyeCoords.m 
Fig_VDispDefinition.m 
GetEyesGivenCV.m 
MarkIdentity.m 
plotvector.m 
ProjectToRetina.m 
RotationMatrix.m 
ToFitParabola.m 
Appendix A: Definitions
Subscripts
Table A1
 
Meaning of subscripts.
L: left eye
R: right eye
Δ: difference between left and right eye values, e.g., convergence angle H Δ = H R − H L
δ: half-difference between left and right eye values, e.g., half-convergence H δ = (H R − H L)/2
c: cyclopean eye (mean of left and right eye values), e.g., cyclopean gaze angle H c = (H R + H L)/2
Symbols
Table A2
 
Definition of symbols.
I: interocular distance
i: half-interocular distance, i = I/2
k, l: integer counters taking on values 1, 2, 3
M L, M R: rotation matrix for left and right eyes, respectively
M c: cyclopean rotation matrix, M c = (M R + M L)/2
M δ: half-difference rotation matrix, M δ = (M R − M L)/2
m: vectors m j are the three columns of the corresponding rotation matrix M, e.g., m c1 = [M c11 M c21 M c31]; m δ2 = [M δ12 M δ22 M δ32] (Equation A6)
H L,R,c: gaze azimuth in Helmholtz system for left, right, and cyclopean eyes
V L,R,c: gaze elevation in Helmholtz system for left, right, and cyclopean eyes
T L,R,c: gaze torsion in Helmholtz system for left, right, and cyclopean eyes
H Δ: horizontal convergence angle
V Δ: vertical vergence misalignment (non-zero values indicate a failure of fixation)
T Δ: cyclovergence
X, Y, Z: position in space in Cartesian coordinates fixed with respect to the head (Figure A1)
X̂: unit vector parallel to the X-axis
P: vector representing position in space in head-centered coordinates: P = (X, Y, Z)
U, W, S: position in space in Cartesian coordinates fixed with respect to the cyclopean gaze. The S-axis is the optic axis of the cyclopean eye (see Figure 5)
R: distance of an object from the origin. R² = X² + Y² + Z² = U² + W² + S² (see Figure 5)
R 0: distance of the fixation point from the origin (or distance to the point where the gaze rays most nearly intersect, if the eyes are misaligned so that no exact intersection occurs)
δ: fractional difference between the fixation distance, R 0, and the distance to the object under consideration, R. That is, δ = (R − R 0)/R 0
x: horizontal position on the retina in Cartesian coordinate system (Figure 2A)
y: vertical position on the retina in Cartesian coordinate system (Figure 2A)
α: azimuth-longitude coordinate for horizontal position on the retina (Figures 2B and 2C)
η: elevation-longitude coordinate for vertical position on the retina (Figures 2B and 2D)
β: azimuth-latitude or declination coordinate for horizontal position on the retina (Figures 2D and 2E)
κ: elevation-latitude or inclination coordinate for vertical position on the retina (Figures 2C and 2E)
ξ: retinal eccentricity (Equation 14)
Coordinate systems
Head-centered coordinate system (X, Y, Z) for object position in space
Figure A1 shows the right-handed head-centered coordinate system used throughout this paper. The X-axis points left, the Y-axis upward, and the Z-axis straight ahead of the observer. By definition, the nodal point of the left eye is at ( X, Y, Z) = ( i, 0, 0) and the nodal point of the right eye is at ( X, Y, Z) = (− i, 0, 0), where i represents half the interocular distance I. The position of a point in space can be described as a vector, P = ( X, Y, Z). 
Figure A1
 
Head-centered coordinate system used throughout this paper. The origin is the point midway between the two eyes. The X-axis is defined by the nodal points of the two eyes and points leftward. The orientation of the XZ plane is defined by primary position but is approximately horizontal. The Y-axis points upward and the Z-axis points in front of the observer.
Eye posture
Each eye has potentially three degrees of freedom, two to specify the gaze direction (azimuth left/right and elevation up/down) and a third to specify the rotation of the eyeball around this axis (torsion). We adopt the Helmholtz coordinate system for describing eye posture (Figure 4). We start with the eye in primary position, looking straight forward so that its optic axis is parallel to the Z-axis (Figure A1). We define the torsion here to be zero. To move from this reference state in which all three coordinates are zero to a general posture with torsion T, azimuth H, and elevation V, we start by rotating the eyeball about the optic axis by the torsion angle T. Next, we rotate the eye about a vertical axis, i.e., parallel to the Y-axis, through the gaze azimuth H. Finally, we rotate the eye about a horizontal axis, i.e., the interocular axis, through the gaze elevation V. We define these rotation angles to be anti-clockwise around the head-centered coordinate axes. This means that we define positive torsion to be clockwise when viewed from behind the head, positive gaze azimuth to be to the observer's left, and positive elevation to be downward. 
We use the subscripts L and R to indicate the left and right eyes ( Table A1). Thus, V L is the Helmholtz elevation of the left eye and V R that of the right eye. 
One advantage of Helmholtz coordinates is that it is particularly simple to see whether the eyes are correctly fixating, such that their optic axes intersect at a common fixation point. This occurs if, and only if, the Helmholtz elevations of the two eyes are identical and the optic axes are not diverging. Thus, any difference between V L and V R means that the eyes are misaligned. We refer to this as the vergence error, V R − V L. The difference in the Helmholtz gaze azimuths is the horizontal vergence angle, H R − H L. Negative values mean that the eyes are diverging. 
In the mathematical expressions we shall derive below, the vergence angles will usually occur divided by two. We therefore introduce symbols for half the vergence angles. As shown in Table A1, these are indicated with the subscript δ:  
$$H_\delta \equiv (H_R - H_L)/2,\ \text{and so on}.$$
(A1)
 
We also introduce cyclopean gaze angles, which are the means of the left and right eyes. As shown in Table A1, these are indicated with the subscript c:  
$$H_c \equiv (H_R + H_L)/2.$$
(A2)
 
Rotation matrices
Eye posture can be summarized by a rotation matrix M. For example, consider a vector that is fixed with respect to the eye: if it is initially r in head-centered coordinates when the eye is in its reference position, it moves to M r when the eye adopts the posture specified by rotation matrix M. An eye's rotation matrix M depends on the eye's elevation V, gaze azimuth H, and torsion T. As above, we use subscripts L and R to indicate the left and right eyes. For the left eye, the rotation matrix is M L = M VL M HL M TL, where  
$$M_{VL} = \begin{bmatrix}1 & 0 & 0\\ 0 & \cos V_L & -\sin V_L\\ 0 & \sin V_L & \cos V_L\end{bmatrix};\quad M_{HL} = \begin{bmatrix}\cos H_L & 0 & \sin H_L\\ 0 & 1 & 0\\ -\sin H_L & 0 & \cos H_L\end{bmatrix};\quad M_{TL} = \begin{bmatrix}\cos T_L & -\sin T_L & 0\\ \sin T_L & \cos T_L & 0\\ 0 & 0 & 1\end{bmatrix},$$
(A3)
where V L, H L, and T L are the gaze elevation, gaze azimuth, and torsion of the left eye. The ordering of the matrix multiplication, M L = M VL M HL M TL, is critical, reflecting the definition of the Helmholtz eye coordinates. Obviously, analogous expressions hold for the right eye. Once again, it will be convenient to introduce the cyclopean rotation matrix, which is defined as the mean of the left- and right-eye rotation matrices:  
M c = ( M R + M L ) / 2 ,
(A4)
and the half-difference rotation matrix:  
$$M_\delta = (M_R - M_L)/2.$$
(A5)
 
It will also be convenient to introduce vectors m that are the columns of these matrices:  
$$m_{ck} = \begin{bmatrix} M_{c1k} \\ M_{c2k} \\ M_{c3k} \end{bmatrix}, \qquad m_{\delta k} = \begin{bmatrix} M_{\delta 1k} \\ M_{\delta 2k} \\ M_{\delta 3k} \end{bmatrix}, \qquad k = 1, 2, 3,$$
(A6)
where M kl indicates the entry in the kth row and lth column of matrix M
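To make the construction concrete, here is a minimal Matlab sketch (ours, not part of the published Supplementary material; the posture values are illustrative) that builds the Helmholtz rotation matrix of Equation A3 for each eye and forms the cyclopean and half-difference matrices and their columns (Equations A4–A6).

% Helmholtz rotation matrix for one eye (Equation A3): M = M_V * M_H * M_T.
% Angles are in radians and follow the sign conventions defined above.
helmRot = @(V,H,T) ...
    [1 0 0; 0 cos(V) -sin(V); 0 sin(V) cos(V)] * ...   % elevation about X
    [cos(H) 0 sin(H); 0 1 0; -sin(H) 0 cos(H)] * ...   % azimuth about Y
    [cos(T) -sin(T) 0; sin(T) cos(T) 0; 0 0 1];        % torsion about Z

Hc = 10*pi/180; Vc = 5*pi/180; Tc = 0;     % cyclopean gaze: 10 deg left, 5 deg down
Hd = 2*pi/180;  Vd = 0;        Td = 0;     % half-vergence angles (subscript delta)

ML = helmRot(Vc-Vd, Hc-Hd, Tc-Td);         % left-eye rotation matrix
MR = helmRot(Vc+Vd, Hc+Hd, Tc+Td);         % right-eye rotation matrix
Mc = (MR + ML)/2;                          % cyclopean matrix (Equation A4)
Md = (MR - ML)/2;                          % half-difference matrix (Equation A5)
mc1 = Mc(:,1); mc2 = Mc(:,2); mc3 = Mc(:,3);   % the column vectors of Equation A6
md1 = Md(:,1); md2 = Md(:,2); md3 = Md(:,3);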
Gaze-centered coordinate system for object position in space
We use the vectors m c k to define a new coordinate system for describing an object's position in space. As well as the head-centered coordinate system ( X, Y, Z), we introduce a coordinate system ( U, W, S) centered on the direction of cyclopean gaze, as specified by the three Helmholtz angles H c, V c, and T c. Whereas Z is the object's distance from the observer measured parallel to the “straight ahead” direction, S is the object's distance parallel to the line of gaze ( Figure 5). The coordinates ( U, W, S) are defined by writing the vector P = ( X, Y, Z) as a sum of the three m c vectors:  
P = U m c 1 + W m c 2 + S m c 3 .
(A7)
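For example, under the simplifying assumption of identical eye postures (so that M c reduces to the cyclopean eye's rotation matrix), the following Matlab fragment (ours; illustrative values) converts a head-centered position into the gaze-centered coordinates of Equation A7 by solving the linear system P = U m c1 + W m c2 + S m c3.

% Gaze-centered coordinates (Equation A7): the columns of Mc are m_c1, m_c2,
% m_c3, so P = Mc*[U; W; S] and hence [U; W; S] = Mc \ P.
Hc  = 10*pi/180;                                        % cyclopean azimuth only
Mc  = [cos(Hc) 0 sin(Hc); 0 1 0; -sin(Hc) 0 cos(Hc)];   % Vc = Tc = 0 for brevity
P   = [-6; 7; 60];                                      % head-centered (X, Y, Z), cm
UWS = Mc \ P;                                           % gaze-centered (U, W, S)
S   = UWS(3);                                           % distance along cyclopean gaze
R   = norm(P);                                          % distance from the origin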
 
Retinal coordinate systems for image position on retina
The retina is at least roughly hemispherical, and treating it as perfectly hemispherical involves no loss of generality, since there is a one-to-one map between a hemisphere and a physiological retina. All the coordinate systems we shall consider are based on the vertical and horizontal retinal meridians. These are great circles on a spherical retina. They are named after their orientations when the eye is in its reference position, looking straight ahead parallel to the Z-axis in Figure A1. By definition, our retinal coordinate systems are fixed with respect to the retina, not the head, so as the eye rotates in the head, the “horizontal” and “vertical” meridians will in general no longer be horizontal or vertical in space. For this reason we shall call the angle used to specify “horizontal” location the azimuth α, and the angle used to specify “vertical” location the elevation η. Both azimuth and elevation can be defined as either latitude or longitude. This gives a total of four possible retinal coordinate systems (Figures 2B–2E). The azimuth-latitude/elevation-longitude coordinate system is the same Helmholtz system we have used to describe eye position (cf. Figure 1A). The azimuth-longitude/elevation-latitude coordinate system is the Fick system (cf. Figure 1B). One can also choose to use latitude or longitude for both directions. Such azimuth-longitude/elevation-longitude or azimuth-latitude/elevation-latitude systems have the disadvantage that the coordinates become ill-defined around the great circle at 90° to the fovea. However, this is irrelevant to stereopsis, since it is beyond the boundaries of vision. The azimuth-longitude/elevation-longitude coordinate system is very simply related to the Cartesian coordinate system, which is standard in the computer vision literature. We can imagine this as a virtual plane, perpendicular to the optic axis and at unit distance behind the nodal point (Figure 2A). To find the image of a point P, we imagine drawing a ray from point P through the nodal point N and see where this intersects the virtual plane (see Figure 3 of Read & Cumming, 2006). The ray has vector equation p = N + s(P − N), where s represents position along the ray. Points on the retina are given by the vector $p = N - M\hat{Z} + xM\hat{X} + yM\hat{Y}$, where x and y are the Cartesian coordinates on the planar retina, and the rotation matrix M describes how this plane is rotated with respect to the head. Equating these two expressions for p, we find that  
$$s(P - N) = -M\hat{Z} + xM\hat{X} + yM\hat{Y}.$$
(A8)
 
Multiplying the matrix M by the unit vectors simply picks off a column of the matrix, e.g., $M\hat{X} = m_1$. Using this plus the fact that $M\hat{X}$, $M\hat{Y}$, and $M\hat{Z}$ are orthonormal, we find that the ray intersects the retina at the Cartesian coordinates  
$$x = -\frac{m_1\cdot(P-N)}{m_3\cdot(P-N)}; \qquad y = -\frac{m_2\cdot(P-N)}{m_3\cdot(P-N)}.$$
(A9)
 
It is sometimes imagined that the use of planar retinas involves a loss of generality or is only valid near the fovea, but in fact, no loss of generality is involved, since there is a one-to-one map from the virtual planar retina to the hemispherical retina. 
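As a concrete check of this projection rule, the following Matlab sketch (ours) reproduces the worked example of Figure 3, in which the eyes are converged 30° on the midline point (0, 0, 11) and the point P = (−6, 7, 10) projects to elevation longitudes of about −30° and −38° in the left and right eyes. The interocular distance is not stated in that caption, so we infer it here from the 30° convergence; this is an assumption of the sketch.

% Project a point onto each eye's planar retina (Equations A8-A9) and convert
% to azimuth/elevation longitude, using the configuration of Figure 3.
helmRot = @(V,H,T) [1 0 0; 0 cos(V) -sin(V); 0 sin(V) cos(V)] * ...
                   [cos(H) 0 sin(H); 0 1 0; -sin(H) 0 cos(H)] * ...
                   [cos(T) -sin(T) 0; sin(T) cos(T) 0; 0 0 1];
i  = 11*tan(15*pi/180);               % half interocular distance for 30 deg vergence
NL = [ i; 0; 0];  ML = helmRot(0, -15*pi/180, 0);   % left eye turns rightward (H < 0)
NR = [-i; 0; 0];  MR = helmRot(0, +15*pi/180, 0);   % right eye turns leftward (H > 0)
P  = [-6; 7; 10];                     % the point P of Figure 3

project = @(M,N) -[M(:,1)'*(P-N); M(:,2)'*(P-N)] / (M(:,3)'*(P-N));   % Equation A9
xyL = project(ML, NL);   xyR = project(MR, NR);
etaL = atand(xyL(2));    etaR = atand(xyR(2));      % elevation longitudes, degrees
% etaL is approximately -30 and etaR approximately -38, as quoted in Figure 3.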
Each coordinate system has a natural definition of “horizontal” and “vertical” disparity associated with it: disparity is the difference between the coordinates of the two retinal images, so vertical disparity is the difference between their vertical coordinates. We therefore immediately have three different definitions of retinal vertical disparity: (1) Cartesian vertical disparity, y Δ = y R − y L; (2) elevation-longitude disparity, η Δ = η R − η L; and (3) elevation-latitude disparity, κ Δ = κ R − κ L. In Appendix B, we derive expressions for all three definitions. 
It may also be useful to collect together here for reference the relationships between gaze-centered coordinates and the corresponding retinal coordinates. The equations in Table A3 show where an object located at ( U, W, S) in gaze-centered coordinates projects to on the cyclopean retina, in different retinal coordinate systems. Table A4 gives the relationships between location on the retina in different coordinate systems. 
Table A3
 
The relationship between the quantities ( U, W, S), giving an object's location in gaze-centered coordinates (cf. Figure 5), and that object's projection onto the cyclopean retina. The projection is given in planar Cartesian coordinates ( x c, y c) and as azimuth longitude α c, elevation longitude η c, azimuth latitude β c, and elevation latitude κ c. The object's head-centered coordinates ( X, Y, Z) will depend on eye position.
For Cartesian cyclopean retinal coordinates: U ≈ −S x c;  W ≈ −S y c;  R² = U² + W² + S² = X² + Y² + Z².
For azimuth-longitude/elevation-latitude (Fick) coordinates: U ≈ −S tan α c;  W ≈ −S tan κ c sec α c;  S = R cos α c cos κ c.
For azimuth-latitude/elevation-longitude (Helmholtz) coordinates: U ≈ −S tan β c sec η c;  W ≈ −S tan η c;  S = R cos β c cos η c.
Table A4
 
Relationships between the different retinal coordinate systems shown in Figure 2.
Each line below gives the conversions into one coordinate system from each of the others (coordinates shared between two systems carry over unchanged).
Into Cartesian (x, y) (Figure 2A): from (α, η): x = tan α, y = tan η; from (α, κ): x = tan α, y = tan κ sec α; from (β, η): x = tan β sec η, y = tan η; from (β, κ): x = sin β/√(cos²κ − sin²β), y = sin κ/√(cos²β − sin²κ).
Into azimuth longitude/elevation longitude (α, η) (Figure 2B): from (x, y): α = arctan x, η = arctan y; from (α, κ): η = arctan(tan κ sec α); from (β, η): α = arctan(tan β sec η); from (β, κ): α = arcsin(sin β sec κ), η = arcsin(sin κ sec β).
Into azimuth longitude/elevation latitude (α, κ) (Fick; Figure 2C): from (x, y): α = arctan x, κ = arctan(y/√(1 + x²)); from (α, η): κ = arctan(tan η cos α); from (β, η): α = arctan(tan β sec η), κ = arcsin(sin η cos β); from (β, κ): α = arcsin(sin β sec κ).
Into azimuth latitude/elevation longitude (β, η) (Helmholtz; Figure 2D): from (x, y): β = arctan(x/√(1 + y²)), η = arctan y; from (α, η): β = arctan(tan α cos η); from (α, κ): β = arcsin(sin α cos κ), η = arctan(tan κ sec α); from (β, κ): η = arcsin(sin κ sec β).
Into azimuth latitude/elevation latitude (β, κ) (Figure 2E): from (x, y): β = arctan(x/√(1 + y²)), κ = arctan(y/√(1 + x²)); from (α, η): β = arctan(tan α cos η), κ = arctan(tan η cos α); from (α, κ): β = arcsin(sin α cos κ); from (β, η): κ = arcsin(sin η cos β).
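The conversions in Table A4 are straightforward to implement and test; the fragment below (Matlab, ours) converts a planar Cartesian retinal position into each of the four angular systems and verifies a few of the round-trip identities.

% Convert a planar retinal position (x, y) into the angular systems of Figure 2
% (radians throughout) and check some of the identities in Table A4.
x = 0.3;  y = -0.2;
alpha = atan(x);                  % azimuth longitude
eta   = atan(y);                  % elevation longitude
kappa = atan(y / sqrt(1 + x^2));  % elevation latitude
beta  = atan(x / sqrt(1 + y^2));  % azimuth latitude

assert(abs(tan(kappa)*sec(alpha) - y) < 1e-12);                        % (alpha, kappa) -> y
assert(abs(tan(beta)*sec(eta) - x) < 1e-12);                           % (beta, eta)   -> x
assert(abs(sin(beta)/sqrt(cos(kappa)^2 - sin(beta)^2) - x) < 1e-12);   % (beta, kappa) -> x
assert(abs(sin(kappa)/sqrt(cos(beta)^2 - sin(kappa)^2) - y) < 1e-12);  % (beta, kappa) -> y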
Appendix B: Derivations
Relationships between the rotation vectors
The fact that rotation matrices are orthogonal means that certain simple relationships hold between the vectors m c k and m δk defined in Equation A6. First, the inner product of any difference vector m δk with the corresponding cyclopean vector m c k is identically zero:  
$$m_{\delta k}\cdot m_{ck} = 0 \quad \text{for } k = 1, 2, 3.$$
(B1)
 
This is actually a special case of the following more general statement:  
$$m_{\delta k}\cdot m_{cl} = -m_{\delta l}\cdot m_{ck} \quad \text{for } k, l = 1, 2, 3.$$
(B2)
 
Equations B1 and B2 are exact and do not depend on any approximations at all. 
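A short numerical check (Matlab, ours, with arbitrary illustrative postures) of Equations B1 and B2:

% Equations B1-B2 hold exactly for any pair of rotation matrices.
helmRot = @(V,H,T) [1 0 0; 0 cos(V) -sin(V); 0 sin(V) cos(V)] * ...
                   [cos(H) 0 sin(H); 0 1 0; -sin(H) 0 cos(H)] * ...
                   [cos(T) -sin(T) 0; sin(T) cos(T) 0; 0 0 1];
ML = helmRot(0.3, -0.5, 0.1);   MR = helmRot(-0.2, 0.4, -0.3);   % radians
Mc = (MR + ML)/2;  Md = (MR - ML)/2;
for k = 1:3
    for l = 1:3
        % m_deltak . m_cl = -m_deltal . m_ck (Equation B2; B1 is the case k = l)
        assert(abs(Md(:,k)'*Mc(:,l) + Md(:,l)'*Mc(:,k)) < 1e-14);
    end
end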
To obtain the values of these dot products, we need to use Equation A3 to derive expressions for M c and M δ in terms of the 6 Helmholtz gaze parameters for the two eyes: H L, V L, T L, H R, V R, T R. We can then use trigonometric identities to re-express these in terms of the cyclopean (half-sum) and vergence (half-difference) equivalents: H c, V c, T c, H δ, V δ, T δ. Needless to say, this yields extremely complicated expressions. However, we now introduce the first critical approximation of this paper. We assume that differences in eye posture are small. We therefore work to first order in the horizontal vergence half-angle H δ, the vertical vergence half-error V δ, and the half-cyclovergence T δ, i.e., we replace terms like cos H δ with 1, and we neglect terms in sin²H δ, sin H δ·sin V δ, and so on. Under these approximations, the 3 m c vectors are approximately orthonormal, while the product of any two m δ vectors is second order and can be neglected, i.e.,  
$$m_{ck}\cdot m_{cl} \approx 1 \ \text{if}\ k = l \ \text{and}\ 0\ \text{otherwise}; \qquad m_{\delta k}\cdot m_{\delta l} \approx 0 \ \text{for all}\ k, l;$$
(B3)
and we obtain the following simple expressions for inner products of an m c and an m δ vector:  
$$m_{\delta 1}\cdot m_{c2} = -m_{\delta 2}\cdot m_{c1} \approx T_\delta + V_\delta\sin H_c$$
$$m_{\delta 2}\cdot m_{c3} = -m_{\delta 3}\cdot m_{c2} \approx H_\delta\sin T_c + V_\delta\cos H_c\cos T_c$$
$$m_{\delta 1}\cdot m_{c3} = -m_{\delta 3}\cdot m_{c1} \approx -H_\delta\cos T_c + V_\delta\cos H_c\sin T_c.$$
(B4)
 
Notice that if the eyes are correctly fixating ( V δ = 0) and there is no torsion ( T c = T δ = 0), then the only non-zero inner product is m δ1. m c3 ≈ − H δ
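The accuracy of Equation B4 can be checked against the exact inner products; a minimal Matlab comparison (ours, with illustrative small vergence angles) follows.

% Exact inner products versus the first-order approximations of Equation B4.
helmRot = @(V,H,T) [1 0 0; 0 cos(V) -sin(V); 0 sin(V) cos(V)] * ...
                   [cos(H) 0 sin(H); 0 1 0; -sin(H) 0 cos(H)] * ...
                   [cos(T) -sin(T) 0; sin(T) cos(T) 0; 0 0 1];
Hc = 0.3;  Vc = -0.2;  Tc = 0.05;            % cyclopean angles (radians)
Hd = 0.02; Vd = 0.005; Td = 0.003;           % half-vergence angles (small)
ML = helmRot(Vc-Vd, Hc-Hd, Tc-Td);  MR = helmRot(Vc+Vd, Hc+Hd, Tc+Td);
Mc = (MR + ML)/2;   Md = (MR - ML)/2;

exact  = [Md(:,1)'*Mc(:,2); Md(:,2)'*Mc(:,3); Md(:,1)'*Mc(:,3)];
approx = [Td + Vd*sin(Hc); ...
          Hd*sin(Tc) + Vd*cos(Hc)*cos(Tc); ...
         -Hd*cos(Tc) + Vd*cos(Hc)*sin(Tc)];          % Equation B4
disp([exact approx])      % columns agree to second order in the small angles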
Below, we shall also encounter the inner products $m_{c1}\cdot\hat{X}$, $m_{c2}\cdot\hat{X}$, and $m_{c3}\cdot\hat{X}$, where $\hat{X}$ is a unit vector along the X-axis. These are the entries in the top row of the cyclopean rotation matrix, which under the above approximation are  
$$m_{c1}\cdot\hat{X} = M_{c11} \approx \cos H_c\cos T_c; \quad m_{c2}\cdot\hat{X} = M_{c12} \approx -\sin T_c\cos H_c; \quad m_{c3}\cdot\hat{X} = M_{c13} \approx \sin H_c.$$
(B5)
 
We shall also use the following:  
$$2\,m_{\delta 1}\cdot P \approx W(T_\Delta + H_{c\sin}V_\Delta) + S(-T_{c\cos}H_\Delta + H_{c\cos}T_{c\sin}V_\Delta)$$
$$2\,m_{\delta 2}\cdot P \approx -U(T_\Delta + H_{c\sin}V_\Delta) + S(T_{c\sin}H_\Delta + H_{c\cos}T_{c\cos}V_\Delta)$$
$$2\,m_{\delta 3}\cdot P \approx U(T_{c\cos}H_\Delta - H_{c\cos}T_{c\sin}V_\Delta) - W(T_{c\sin}H_\Delta + H_{c\cos}T_{c\cos}V_\Delta),$$
(B6)
where to save space we have introduced the notation $T_{c\cos} \equiv \cos T_c$, $H_{c\sin} \equiv \sin H_c$, and so on. 
Deriving expressions for retinal disparity
Disparity in Cartesian coordinates on a planar retina
The utility of the above expressions will now become clear. Suppose that an object's position in space is represented by the vector P = ( X, Y, Z) in head-centered coordinates. Then, the object projects onto the left retina at a point given by ( x L, y L) in Cartesian coordinates, where ( Equation A9)  
$$x_L = -\frac{m_{L1}\cdot(P - N_L)}{m_{L3}\cdot(P - N_L)}; \qquad y_L = -\frac{m_{L2}\cdot(P - N_L)}{m_{L3}\cdot(P - N_L)},$$
(B7)
where $N_L$ is the vector from the origin to the nodal point of the left eye, and $m_{Lk}$ is the kth column of the left eye's rotation matrix $M_L$. For the left eye, we have $N_L = i\hat{X}$, where $\hat{X}$ is a unit vector along the X-axis and i is half the interocular distance, while for the right eye, $N_R = -i\hat{X}$. We shall also rewrite the left and right eyes' rotation vectors, $m_L$ and $m_R$, in terms of the half-sum and half-difference between the two eyes:  
$$m_L = m_c - m_\delta; \qquad m_R = m_c + m_\delta.$$
(B8)
 
The image in the left eye is then  
$$x_L = -\frac{(m_{c1} - m_{\delta 1})\cdot(P - i\hat{X})}{(m_{c3} - m_{\delta 3})\cdot(P - i\hat{X})},$$
(B9)
while the expression for the right eye is the same but with the signs of i and m δ reversed:  
$$x_R = -\frac{(m_{c1} + m_{\delta 1})\cdot(P + i\hat{X})}{(m_{c3} + m_{\delta 3})\cdot(P + i\hat{X})}.$$
(B10)
 
Thus, there are two distinct sources of retinal disparity. One of them arises from the fact that the eyes are in different locations in the head and contributes terms in i. The other arises from the fact that the eyes may point in different directions and contributes terms in m δ. We shall see these two sources emerging in all our future expressions for binocular disparity. 
We now make the approximation that both sources of disparity, i and m δ, are small. We carry out a Taylor expansion in which we retain only first-order terms of these quantities. To do this, it is helpful to introduce dummy quantities s and j, where m δj = ɛ s j and i = ɛj, and the variable ɛ is assumed to be so small that we can ignore terms in ɛ 2:  
$$x_L = \frac{-(m_{c1}-\epsilon s_1)\cdot(P-\epsilon j\hat{X})}{(m_{c3}-\epsilon s_3)\cdot(P-\epsilon j\hat{X})} \approx -\frac{m_{c1}\cdot P}{m_{c3}\cdot P}\left(1 - \epsilon j\frac{m_{c1}\cdot\hat{X}}{m_{c1}\cdot P} - \epsilon\frac{s_1\cdot P}{m_{c1}\cdot P} + \epsilon j\frac{m_{c3}\cdot\hat{X}}{m_{c3}\cdot P} + \epsilon\frac{s_3\cdot P}{m_{c3}\cdot P} + O(\epsilon^2)\right).$$
(B11)
 
Now removing the dummy variables, we have an expression for x L under the small-eye-difference approximation:  
$$x_L \approx -\frac{m_{c1}\cdot P}{m_{c3}\cdot P}\left(1 + \frac{i\,m_{c3}\cdot\hat{X} + m_{\delta 3}\cdot P}{m_{c3}\cdot P} - \frac{i\,m_{c1}\cdot\hat{X} + m_{\delta 1}\cdot P}{m_{c1}\cdot P} + O(\epsilon^2)\right).$$
(B12)
 
Again, the expression for x R is the same but with the signs of i and m δ reversed. The expressions for y are the same except with subscripts 1 replaced with 2. We can therefore derive the following expressions for the cyclopean position of the image:  
$$x_c = \frac{x_R + x_L}{2} \approx -\frac{m_{c1}\cdot P}{m_{c3}\cdot P}; \qquad y_c = \frac{y_R + y_L}{2} \approx -\frac{m_{c2}\cdot P}{m_{c3}\cdot P}$$
(B13)
while for the Cartesian disparity, we obtain  
$$x_\Delta = x_R - x_L \approx \frac{2\,m_{c1}\cdot P}{m_{c3}\cdot P}\left(\frac{i\,m_{c3}\cdot\hat{X} + m_{\delta 3}\cdot P}{m_{c3}\cdot P} - \frac{i\,m_{c1}\cdot\hat{X} + m_{\delta 1}\cdot P}{m_{c1}\cdot P}\right)$$
$$y_\Delta = y_R - y_L \approx \frac{2\,m_{c2}\cdot P}{m_{c3}\cdot P}\left(\frac{i\,m_{c3}\cdot\hat{X} + m_{\delta 3}\cdot P}{m_{c3}\cdot P} - \frac{i\,m_{c2}\cdot\hat{X} + m_{\delta 2}\cdot P}{m_{c2}\cdot P}\right).$$
(B14)
 
Expressions for $m_{cj}\cdot\hat{X}$ were given in Equation B5. Now, instead of specifying P = (X, Y, Z) in head-centered coordinates, we move to the gaze-centered coordinate system (U, W, S), in which an object's position is specified relative to the cyclopean gaze direction (Equation A7):  
P = U m c 1 + W m c 2 + S m c 3 .
(B15)
 
Now recall that the inner product of any difference vector m δj with the corresponding cyclopean vector m c j is identically zero ( Equation B1). Thus, the term m δ3. P is independent of the object's distance measured along the cyclopean gaze direction, S:  
m δ 3 . P = U m δ 3 . m c 1 + W m δ 3 . m c 2 .
(B16)
 
Using the relationships between the various m vectors ( Equations B1B3), we obtain  
$$x_c \approx -\frac{U}{S}, \qquad y_c \approx -\frac{W}{S},$$
(B17)
which is in fact obvious given the definition of the cyclopean retina and the cyclopean gaze-centered coordinate system. For the disparity, we obtain  
$$x_\Delta \approx \frac{I}{S^2}\left(U M_{c13} - S M_{c11}\right) - \frac{2}{S^2}\left[(U^2+S^2)\,m_{\delta 1}\cdot m_{c3} + UW\,m_{\delta 2}\cdot m_{c3} + SW\,m_{\delta 1}\cdot m_{c2}\right]$$
$$y_\Delta \approx \frac{I}{S^2}\left(W M_{c13} - S M_{c12}\right) - \frac{2}{S^2}\left[(W^2+S^2)\,m_{\delta 2}\cdot m_{c3} + UW\,m_{\delta 1}\cdot m_{c3} - SU\,m_{\delta 1}\cdot m_{c2}\right].$$
(B18)
 
Expressions for the vector inner products, valid under the approximation we are considering, were given in Equations B4 and B5. Substituting these, using the small angle approximation for the δ quantities, we obtain the following expressions for an object's horizontal and vertical disparities in Cartesian planar coordinates, expressed as a function of its spatial location in gaze-centered coordinates:  
$$x_\Delta \approx \frac{I}{S}\left(\frac{U}{S}H_{c\sin} - H_{c\cos}T_{c\cos}\right) + \left[\left(\frac{U^2}{S^2}+1\right)T_{c\cos} - \frac{UW}{S^2}T_{c\sin}\right]H_\Delta - \left[\left(\frac{U^2}{S^2}+1\right)H_{c\cos}T_{c\sin} + \frac{UW}{S^2}H_{c\cos}T_{c\cos} + \frac{W}{S}H_{c\sin}\right]V_\Delta - \frac{W}{S}T_\Delta$$
$$y_\Delta \approx \frac{I}{S}\left(\frac{W}{S}H_{c\sin} + T_{c\sin}H_{c\cos}\right) + \left[\frac{UW}{S^2}T_{c\cos} - \left(\frac{W^2}{S^2}+1\right)T_{c\sin}\right]H_\Delta + \left[\frac{U}{S}H_{c\sin} - \left(\frac{W^2}{S^2}+1\right)H_{c\cos}T_{c\cos} - \frac{UW}{S^2}H_{c\cos}T_{c\sin}\right]V_\Delta + \frac{U}{S}T_\Delta,$$
(B19)
where to save space we have again defined $T_{c\cos} \equiv \cos T_c$, and so on. 
Here, the disparity is expressed as a function of the object's position in space, ( U, W, S). However, this is not very useful, since the brain has no direct access to this. It is more useful to express disparities in terms of ( x c, y c), the position on the cyclopean retina or equivalently the visual direction currently under consideration, together with the distance to the object along the cyclopean gaze, S. The brain has direct access to the retinal position ( x c, y c), leaving distance S as the sole unknown, to be deduced from the disparity. Then we obtain the following expressions for an object's horizontal and vertical disparities in Cartesian planar coordinates, expressed as a function of its retinal location in Cartesian planar coordinates:  
$$x_\Delta \approx -(x_c H_{c\sin} + H_{c\cos}T_{c\cos})\frac{I}{S} + \left[(x_c^2+1)T_{c\cos} - x_c y_c T_{c\sin}\right]H_\Delta + \left[y_c H_{c\sin} - (x_c^2+1)H_{c\cos}T_{c\sin} - x_c y_c H_{c\cos}T_{c\cos}\right]V_\Delta + y_c T_\Delta$$
$$y_\Delta \approx -(y_c H_{c\sin} - T_{c\sin}H_{c\cos})\frac{I}{S} + \left[x_c y_c T_{c\cos} - (y_c^2+1)T_{c\sin}\right]H_\Delta - \left[x_c H_{c\sin} + (y_c^2+1)H_{c\cos}T_{c\cos} + x_c y_c H_{c\cos}T_{c\sin}\right]V_\Delta - x_c T_\Delta.$$
(B20)
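Equation B20 is easily checked against the exact disparity obtained by projecting a point into the two eyes directly (Equation A9); the Matlab sketch below (ours, with illustrative values) does this for a single point.

% Exact Cartesian disparity versus the first-order approximation of Equation B20.
helmRot = @(V,H,T) [1 0 0; 0 cos(V) -sin(V); 0 sin(V) cos(V)] * ...
                   [cos(H) 0 sin(H); 0 1 0; -sin(H) 0 cos(H)] * ...
                   [cos(T) -sin(T) 0; sin(T) cos(T) 0; 0 0 1];
I  = 6.4;                                            % interocular distance (cm)
Hc = 5*pi/180;  Vc = -10*pi/180;  Tc = 2*pi/180;     % cyclopean angles
HD = 3*pi/180;  VD = 0.3*pi/180;  TD = 0.2*pi/180;   % vergence angles (Delta = R - L)
ML = helmRot(Vc-VD/2, Hc-HD/2, Tc-TD/2);  NL = [ I/2; 0; 0];
MR = helmRot(Vc+VD/2, Hc+HD/2, Tc+TD/2);  NR = [-I/2; 0; 0];
P  = [5; -8; 60];                                    % head-centered position (cm)

proj = @(M,N) -[M(:,1)'*(P-N); M(:,2)'*(P-N)] / (M(:,3)'*(P-N));     % Equation A9
exactDisp = proj(MR,NR) - proj(ML,NL);               % exact (x_Delta, y_Delta)

Mc  = (MR + ML)/2;  UWS = Mc \ P;  S = UWS(3);       % gaze-centered coordinates
xc  = -UWS(1)/S;    yc  = -UWS(2)/S;                 % cyclopean image (Equation B17)
xD  = -(xc*sin(Hc) + cos(Hc)*cos(Tc))*I/S + ((xc^2+1)*cos(Tc) - xc*yc*sin(Tc))*HD ...
      + (yc*sin(Hc) - (xc^2+1)*cos(Hc)*sin(Tc) - xc*yc*cos(Hc)*cos(Tc))*VD + yc*TD;
yD  = -(yc*sin(Hc) - sin(Tc)*cos(Hc))*I/S + (xc*yc*cos(Tc) - (yc^2+1)*sin(Tc))*HD ...
      - (xc*sin(Hc) + (yc^2+1)*cos(Hc)*cos(Tc) + xc*yc*cos(Hc)*sin(Tc))*VD - xc*TD;
disp([exactDisp [xD; yD]])    % agree to first order in I/S and the vergence angles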
 
Disparity in retinal longitude
Azimuth longitude and elevation longitude on the retina, α and η, are simply related to the planar coordinates x and y:  
α = arctan ( x ) ; η = arctan ( y ) .
(B21)
 
From the approximation to x L given in Equation B12, we have  
$$\alpha_L \approx \arctan\left[-\frac{m_{c1}\cdot P}{m_{c3}\cdot P}\left(1 + \frac{i\,m_{c3}\cdot\hat{X} + m_{\delta 3}\cdot P}{m_{c3}\cdot P} - \frac{i\,m_{c1}\cdot\hat{X} + m_{\delta 1}\cdot P}{m_{c1}\cdot P}\right)\right].$$
(B22)
 
With the Taylor expansion for arctan, this becomes  
$$\alpha_L \approx \arctan\left[-\frac{m_{c1}\cdot P}{m_{c3}\cdot P}\right] - \left[\frac{i\,m_{c3}\cdot\hat{X} + m_{\delta 3}\cdot P}{m_{c3}\cdot P} - \frac{i\,m_{c1}\cdot\hat{X} + m_{\delta 1}\cdot P}{m_{c1}\cdot P}\right]\frac{(m_{c1}\cdot P)(m_{c3}\cdot P)}{(m_{c3}\cdot P)^2 + (m_{c1}\cdot P)^2}.$$
(B23)
 
As before, the analogous expression for α R is the same but with the signs of i and m δ swapped. Thus, we obtain  
$$\alpha_c \approx \arctan\left[-\frac{m_{c1}\cdot P}{m_{c3}\cdot P}\right] \approx \arctan\left(-\frac{U}{S}\right)$$
(B24)
and  
$$\alpha_\delta \approx \left[\frac{i\,m_{c3}\cdot\hat{X} + m_{\delta 3}\cdot P}{m_{c3}\cdot P} - \frac{i\,m_{c1}\cdot\hat{X} + m_{\delta 1}\cdot P}{m_{c1}\cdot P}\right]\frac{(m_{c1}\cdot P)(m_{c3}\cdot P)}{(m_{c3}\cdot P)^2 + (m_{c1}\cdot P)^2}.$$
(B25)
 
We similarly obtain the following equation for the elevation-longitude cyclopean position and disparity:  
$$\eta_c \approx \arctan\left[-\frac{m_{c2}\cdot P}{m_{c3}\cdot P}\right] \approx \arctan\left(-\frac{W}{S}\right)$$
$$\eta_\delta \approx \left[\frac{i\,m_{c3}\cdot\hat{X} + m_{\delta 3}\cdot P}{m_{c3}\cdot P} - \frac{i\,m_{c2}\cdot\hat{X} + m_{\delta 2}\cdot P}{m_{c2}\cdot P}\right]\frac{(m_{c2}\cdot P)(m_{c3}\cdot P)}{(m_{c3}\cdot P)^2 + (m_{c2}\cdot P)^2}.$$
(B26)
 
Again substituting for m, we obtain, in terms of an object's spatial location in gaze-centered coordinates:  
$$\alpha_\Delta \approx \frac{1}{S^2+U^2}\Big\{\left[U H_{c\sin} - S H_{c\cos}T_{c\cos}\right]I + \left[(S^2+U^2)T_{c\cos} - UW\,T_{c\sin}\right]H_\Delta - \left[(S^2+U^2)H_{c\cos}T_{c\sin} + UW\,H_{c\cos}T_{c\cos} + WS\,H_{c\sin}\right]V_\Delta - WS\,T_\Delta\Big\}$$
$$\eta_\Delta \approx \frac{1}{W^2+S^2}\Big\{\left[W H_{c\sin} + S T_{c\sin}H_{c\cos}\right]I + \left[UW\,T_{c\cos} - (W^2+S^2)T_{c\sin}\right]H_\Delta - \left[(W^2+S^2)H_{c\cos}T_{c\cos} + UW\,H_{c\cos}T_{c\sin} - US\,H_{c\sin}\right]V_\Delta + US\,T_\Delta\Big\}.$$
(B27)
 
We now re-express these disparities in terms of the object's retinal location in azimuth-longitude/elevation-longitude coordinates. From U ≈ − Stan α c, we have  
$$\frac{S^2}{S^2+U^2} \approx \frac{S^2}{S^2 + S^2\tan^2\alpha_c} = \cos^2\alpha_c.$$
(B28)
Similarly, $S^2/(S^2 + W^2) \approx \cos^2\eta_c$.
We therefore arrive at the following expressions for longitude disparity expressed as a function of position on the cyclopean retina in azimuth-longitude/elevation-longitude coordinates:  
$$\alpha_\Delta \approx -\frac{I}{S}\cos\alpha_c\left(H_{c\cos}T_{c\cos}\cos\alpha_c + H_{c\sin}\sin\alpha_c\right) + \left[T_{c\cos} - \sin\alpha_c\cos\alpha_c\tan\eta_c\,T_{c\sin}\right]H_\Delta - \left[H_{c\cos}T_{c\sin} + \sin\alpha_c\cos\alpha_c\tan\eta_c\,H_{c\cos}T_{c\cos} - \cos^2\alpha_c\tan\eta_c\,H_{c\sin}\right]V_\Delta + \left[\cos^2\alpha_c\tan\eta_c\right]T_\Delta$$
$$\eta_\Delta \approx \cos^2\eta_c\left[T_{c\sin}H_{c\cos} - H_{c\sin}\tan\eta_c\right]\frac{I}{S} + \left[\tan\alpha_c\sin\eta_c\cos\eta_c\,T_{c\cos} - T_{c\sin}\right]H_\Delta - \left[H_{c\cos}T_{c\cos} + \tan\alpha_c\sin\eta_c\cos\eta_c\,H_{c\cos}T_{c\sin} + \tan\alpha_c\cos^2\eta_c\,H_{c\sin}\right]V_\Delta - \left[\tan\alpha_c\cos^2\eta_c\right]T_\Delta.$$
(B29)
 
Alternatively, we may wish to express azimuth-longitude disparity as a function of retinal location in an azimuth-longitude/elevation-latitude coordinate system. Elevation latitude κ is related to azimuth longitude α and elevation longitude η as  
$$\tan\kappa = \tan\eta\,\cos\alpha.$$
(B30)
 
Thus, it is easy to replace η c in Equation B29 with κ c:  
$$\alpha_\Delta \approx -\frac{I}{S}\cos\alpha_c\left(H_{c\cos}T_{c\cos}\cos\alpha_c + H_{c\sin}\sin\alpha_c\right) + \left[T_{c\cos} - \sin\alpha_c\tan\kappa_c\,T_{c\sin}\right]H_\Delta - \left[H_{c\cos}T_{c\sin} + \sin\alpha_c\tan\kappa_c\,H_{c\cos}T_{c\cos} - \cos\alpha_c\tan\kappa_c\,H_{c\sin}\right]V_\Delta + \left[\cos\alpha_c\tan\kappa_c\right]T_\Delta.$$
(B31)
 
Similarly, we can express elevation-longitude disparity η Δ as a function of retinal location in an azimuth-latitude/elevation-longitude coordinate system ( β, η). Using tan β = tan αcos η, Equation B29 becomes  
$$\eta_\Delta \approx \cos^2\eta_c\left[T_{c\sin}H_{c\cos} - H_{c\sin}\tan\eta_c\right]\frac{I}{S} + \left[\tan\beta_c\sin\eta_c\,T_{c\cos} - T_{c\sin}\right]H_\Delta - \left[H_{c\cos}T_{c\cos} + \tan\beta_c\sin\eta_c\,H_{c\cos}T_{c\sin} + \tan\beta_c\cos\eta_c\,H_{c\sin}\right]V_\Delta - \left[\tan\beta_c\cos\eta_c\right]T_\Delta.$$
(B32)
 
Disparity in retinal latitude
Azimuth latitude and elevation latitude on the retina, β and κ, are related to the planar coordinates x and y as  
$$\beta = \arctan\left(x/\sqrt{y^2+1}\right); \qquad \kappa = \arctan\left(y/\sqrt{x^2+1}\right).$$
(B33)
 
From the approximation to x L and y L given in Equation B12, we have  
$$\beta_L \approx \arctan\left[\frac{-m_{L1}\cdot(P-N_L)}{\sqrt{\left[m_{L2}\cdot(P-N_L)\right]^2 + \left[m_{L3}\cdot(P-N_L)\right]^2}}\right].$$
(B34)
 
Again doing the Taylor expansion, we obtain the following expressions for horizontal and vertical retinal latitude disparities in terms of an object's spatial location in gaze-centered coordinates:  
$$\beta_\Delta = \frac{2\left(S\,m_{\delta 3}\cdot m_{c1} + W\,m_{\delta 2}\cdot m_{c1}\right)}{\sqrt{W^2+S^2}} + \frac{I\left(M_{c12}UW + M_{c13}US - M_{c11}(W^2+S^2)\right)}{(U^2+W^2+S^2)\sqrt{W^2+S^2}}$$
$$\kappa_\Delta = \frac{2\left(S\,m_{\delta 3}\cdot m_{c2} + U\,m_{\delta 1}\cdot m_{c2}\right)}{\sqrt{U^2+S^2}} + \frac{I\left(M_{c11}UW + M_{c13}WS - M_{c12}(U^2+S^2)\right)}{(U^2+W^2+S^2)\sqrt{U^2+S^2}}.$$
(B35)
 
Substituting for the various entries in the rotation matrix, M c, m δ, and m c, we obtain  
$$2\,m_{\delta 1}\cdot m_{c2} \approx T_\Delta + H_{c\sin}V_\Delta; \qquad 2\,m_{\delta 2}\cdot m_{c3} \approx T_{c\sin}H_\Delta + H_{c\cos}T_{c\cos}V_\Delta; \qquad 2\,m_{\delta 1}\cdot m_{c3} \approx -T_{c\cos}H_\Delta + H_{c\cos}T_{c\sin}V_\Delta$$
$$\beta_\Delta = -\frac{I\left(T_{c\sin}H_{c\cos}UW - H_{c\sin}US + T_{c\cos}H_{c\cos}(W^2+S^2)\right)}{(U^2+W^2+S^2)\sqrt{W^2+S^2}} - \frac{-S\,T_{c\cos}H_\Delta + S\,H_{c\cos}T_{c\sin}V_\Delta + W\,H_{c\sin}V_\Delta + W\,T_\Delta}{\sqrt{W^2+S^2}}$$
$$\kappa_\Delta = \frac{I\left(H_{c\cos}T_{c\cos}UW + H_{c\sin}WS + T_{c\sin}H_{c\cos}(U^2+S^2)\right)}{(U^2+W^2+S^2)\sqrt{U^2+S^2}} - \frac{S\,T_{c\sin}H_\Delta + S\,H_{c\cos}T_{c\cos}V_\Delta - U\,H_{c\sin}V_\Delta - U\,T_\Delta}{\sqrt{U^2+S^2}}.$$
(B36)
 
To express azimuth-latitude disparity as a function of position on the cyclopean retina in azimuth-latitude/elevation-longitude coordinates ( β c, η c), we use the following relationships:  
$$W = -S\tan\eta_c, \qquad U = -S\tan\beta_c\sec\eta_c.$$
(B37)
 
This yields  
β Δ = I S cos β c cos η c ( T c sin H c cos sin β c sin η c + H c sin sin β c cos η c + T c cos H c cos cos β c ) + ( T c cos cos η c ) H Δ + cos η c ( tan η c H c sin H c cos T c sin ) V Δ + ( sin η c ) T Δ .
(B38)
 
Similarly, if we express elevation-latitude disparity as a function of position on the cyclopean retina in azimuth-longitude/elevation-latitude coordinates ( α c, κ c), we obtain  
$$\kappa_\Delta = \frac{I}{S}\cos\alpha_c\cos\kappa_c\left(T_{c\cos}H_{c\cos}\sin\alpha_c\sin\kappa_c - H_{c\sin}\cos\alpha_c\sin\kappa_c + T_{c\sin}H_{c\cos}\cos\kappa_c\right) - T_{c\sin}\cos\alpha_c\,H_\Delta - \left(H_{c\cos}T_{c\cos}\cos\alpha_c + \sin\alpha_c\,H_{c\sin}\right)V_\Delta - \sin\alpha_c\,T_\Delta.$$
(B39)
 
This expression simplifies slightly if we replace S, the distance component along the cyclopean line of sight, with R, the shortest distance from the origin to the viewed point. R 2 = U 2 + W 2 + S 2, and hence  
S = R cos α c cos κ c = R cos β c cos η c .
(B40)
 
Then we have  
$$\kappa_\Delta = \frac{I}{R}\left(T_{c\cos}H_{c\cos}\sin\alpha_c\sin\kappa_c - H_{c\sin}\cos\alpha_c\sin\kappa_c + T_{c\sin}H_{c\cos}\cos\kappa_c\right) - T_{c\sin}\cos\alpha_c\,H_\Delta - \left(H_{c\cos}T_{c\cos}\cos\alpha_c + \sin\alpha_c\,H_{c\sin}\right)V_\Delta - \sin\alpha_c\,T_\Delta.$$
(B41)
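The same kind of numerical check works for the elevation-latitude disparity; the Matlab fragment below (ours, with illustrative values) compares Equation B41 with the exact κ disparity obtained from the two retinal projections.

% Exact elevation-latitude disparity versus the first-order Equation B41.
helmRot = @(V,H,T) [1 0 0; 0 cos(V) -sin(V); 0 sin(V) cos(V)] * ...
                   [cos(H) 0 sin(H); 0 1 0; -sin(H) 0 cos(H)] * ...
                   [cos(T) -sin(T) 0; sin(T) cos(T) 0; 0 0 1];
I  = 6.4;   Hc = 8*pi/180;  Vc = -5*pi/180;  Tc = 1*pi/180;
HD = 2*pi/180;  VD = 0.2*pi/180;  TD = 0.1*pi/180;     % vergence angles (Delta)
ML = helmRot(Vc-VD/2, Hc-HD/2, Tc-TD/2);  NL = [ I/2; 0; 0];
MR = helmRot(Vc+VD/2, Hc+HD/2, Tc+TD/2);  NR = [-I/2; 0; 0];
P  = [20; 25; 80];   R = norm(P);                      % cm

proj = @(M,N) -[M(:,1)'*(P-N); M(:,2)'*(P-N)] / (M(:,3)'*(P-N));    % Equation A9
toAK = @(xy) [atan(xy(1)); atan(xy(2)/sqrt(1 + xy(1)^2))];          % (alpha, kappa)
akL = toAK(proj(ML,NL));   akR = toAK(proj(MR,NR));
ac  = (akL(1) + akR(1))/2;  kc = (akL(2) + akR(2))/2;  % cyclopean retinal location
kD_exact  = akR(2) - akL(2);
kD_approx = (I/R)*(cos(Tc)*cos(Hc)*sin(ac)*sin(kc) - sin(Hc)*cos(ac)*sin(kc) ...
            + sin(Tc)*cos(Hc)*cos(kc)) - sin(Tc)*cos(ac)*HD ...
            - (cos(Hc)*cos(Tc)*cos(ac) + sin(ac)*sin(Hc))*VD - sin(ac)*TD;
disp([kD_exact kD_approx])     % agree to first order in I/R and the vergence angles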
 
Appendix C: Tables of expressions for horizontal and vertical disparities in different coordinate systems
Most general
The expressions in Tables C1 and C2 assume that the cyclovergence between the eyes, T Δ, is small. They do not assume anything about the overall cycloversion, T c. Cycloversion rotates the eyes in the head, mixing up vertical and horizontal disparities. This can occur when the head tilts over, so that the interocular axis is no longer horizontal with respect to gravity. In the extreme case of T c = 90°, the vertical and horizontal directions have actually swapped over ( yx and x → − y). One can verify from the above results that the expressions for vertical and horizontal disparities also swap over (i.e., x Δ with T c = 90° is the same as y Δ with T c = 0, after replacing y with x and x with − y), a quick “sanity check” on the results. 
Table C1
 
Expressions for horizontal disparity in different coordinate systems. These are correct to first order in interocular distance I/ S ( I/ R) and in the vergence angles H Δ, V Δ, and T Δ. They hold all over the retina and for any cyclopean gaze H c, elevation V c, or overall cycloversion T c.
Horizontal disparity Most general expressions
In planar Cartesian retinal coordinates as a function of spatial position in gaze-centered coordinates x Δ ≈ I S ( U S H c sin − H c cos T c cos ) + [ ( U 2 S 2 + 1 ) T c cos − U ⁢ W S 2 T c sin ] H Δ − [ ( U 2 S 2 + 1 ) H c cos T c sin + U ⁢ W S 2 H c cos T c cos + W S H c sin ] V Δ − W S T Δ
In planar Cartesian retinal coordinates as a function of retinal location in planar Cartesian coordinates x Δ ≈ − ( x c H c sin + H c cos T c cos ) I S + [ ( x c 2 + 1 ) T c cos − x c y c T c sin ] H Δ + [ y c H c sin − ( x c 2 + 1 ) H c cos T c sin − x c y c H c cos T c cos ] V Δ + y c T Δ
In azimuth longitude, as a function of spatial location in gaze-centered coordinates α Δ ≈ 1 S 2 + U 2 { [ U H c sin − S H c cos T c cos ] I + [ ( S 2 + U 2 ) T c cos − U ⁢ W T c sin ] H Δ − [ ( S 2 + U 2 ) H c cos T c sin + U ⁢ W H c cos T c cos + W ⁢ S H c sin ] V Δ − W ⁢ S T Δ }
In azimuth longitude, as a function of retinal location in azimuth-longitude/ elevation-longitude coordinates α Δ ≈ − I S cos α c ( H c cos T c cos cos α c + H c sin sin α c ) + [ T c cos − sin α c cos α c tan η c T c sin ] H Δ − [ H c cos T c sin + sin α c cos α c tan η c H c cos T c cos − cos 2 α c tan η c H c sin ] V Δ + cos 2 α c tan η c T Δ
In azimuth longitude, as a function of retinal location in azimuth-longitude/ elevation-latitude coordinates α Δ ≈ − I S cos α c ( H c cos T c cos cos α c + H c sin sin α c ) + ( T c cos − sin α c tan κ c T c sin ) H Δ − ( H c cos T c sin + T c cos H c cos sin α c tan κ c − H c sin cos α c tan κ c ) V Δ + ( cos α c tan κ c ) T Δ α Δ ≈ − I R sec κ c ( H c cos T c cos cos α c + H c sin sin α c ) + ( T c cos − sin α c tan κ c T c sin ) H Δ − ( H c cos T c sin + T c cos H c cos sin α c tan κ c − H c sin cos α c tan κ c ) V Δ + ( cos α c tan κ c ) T Δ
In azimuth latitude, as a function of spatial location in gaze-centered coordinates β Δ = −I(sin T c cos H c UW − sin H c US + cos T c cos H c (W² + S²)) / [(U² + W² + S²) √(W² + S²)] − (−S cos T c H Δ + S cos H c sin T c V Δ + W sin H c V Δ + W T Δ) / √(W² + S²)
In azimuth latitude, as a function of retinal location in azimuth-latitude/ elevation-longitude coordinates β Δ = − I R ( T c sin H c cos sin β c sin η c + H c sin sin β c cos η c + T c cos H c cos cos β c ) + ( T c cos cos η c ) H Δ + cos η c ( tan η c H c sin − H c cos T c sin ) V Δ + ( sin η c ) T Δ
Table C2
 
Expressions for vertical disparity in different coordinate systems. These are correct to first order in interocular distance I/ S ( I/ R) and in the vergence angles H Δ, V Δ, and T Δ. They hold all over the retina and for any cyclopean gaze H c, elevation V c, or overall cycloversion T c.
Vertical disparity Most general expressions
In planar Cartesian retinal coordinates as a function of spatial position in gaze-centered coordinates y Δ ≈ I S ( W S H c sin + T c sin H c cos ) + [ U ⁢ W S 2 T c cos − ( W 2 S 2 + 1 ) T c sin ] H Δ + [ U S H c sin − ( W 2 S 2 + 1 ) H c cos T c cos − U ⁢ W S 2 H c cos T c sin ] V Δ + U S T Δ
In planar Cartesian retinal coordinates as a function of retinal location in planar Cartesian coordinates y Δ ≈ − ( y c H c sin − T c sin H c cos ) I S + [ x c y c T c cos − ( y c 2 + 1 ) T c sin ] H Δ − [ x c H c sin + ( y c 2 + 1 ) H c cos T c cos + x c y c H c cos T c sin ] V Δ − x c T Δ
In elevation longitude, as a function of spatial location in gaze-centered coordinates η Δ ≈ 1 W 2 + S 2 { [ W H c sin + S T c sin H c cos ] I + [ − ( W 2 + S 2 ) T c sin + U ⁢ W T c cos ] H Δ − [ ( W 2 + S 2 ) H c cos T c cos + U ⁢ W H c cos T c sin − U ⁢ S H c sin ] V Δ + U ⁢ S T Δ }
In elevation longitude, as a function of retinal location in azimuth-longitude/ elevation-longitude coordinates η Δ ≈ cos 2 η c [ T c sin H c cos − H c sin tan η c ] I S + [ tan α c sin η c cos η c T c cos − T c sin ] H Δ − [ H c cos T c cos + tan α c sin η c cos η c H c cos T c sin + tan α c cos 2 η c H c sin ] V Δ − tan α c cos 2 η c T Δ
In elevation longitude, as a function of retinal location in azimuth-latitude/ elevation-longitude coordinates η Δ ≈ cos 2 η c [ T c sin H c cos − H c sin tan η c ] I S + [ tan β c sin η c T c cos − T c sin ] H Δ − [ H c cos T c cos + tan β c sin η c H c cos T c sin + tan β c cos η c H c sin ] V Δ − tan β c cos η c T Δ η Δ ≈ I R cos η c cos β c [ T c sin H c cos − H c sin tan η c ] + [ tan β c sin η c T c cos − T c sin ] H Δ − [ H c cos T c cos + tan β c sin η c H c cos T c sin + tan β c cos η c H c sin ] V Δ − tan β c cos η c T Δ
In elevation latitude, as a function of spatial location in gaze-centered coordinates κ Δ = I(cos H c cos T c UW + sin H c WS + sin T c cos H c (U² + S²)) / [(U² + W² + S²) √(U² + S²)] − (S sin T c H Δ + S cos H c cos T c V Δ − U sin H c V Δ − U T Δ) / √(U² + S²)
In elevation latitude, as a function of retinal location in azimuth-longitude/ elevation-latitude coordinates κ Δ = I S cos α c cos κ c ( T c cos H c cos sin α c sin κ c − H c sin cos α c sin κ c + T c sin H c cos cos κ c ) − T c sin H Δ cos α c − ( H c cos T c cos cos α c + sin α c H c sin ) V Δ − sin α c T Δ κ Δ = I R ( T c cos H c cos sin α c sin κ c − H c sin cos α c sin κ c + T c sin H c cos cos κ c ) − T c sin H Δ cos α c − ( H c cos T c cos cos α c + sin α c H c sin ) V Δ − sin α c T Δ
Zero overall cycloversion
Table C3
 
Expressions for horizontal disparity in different coordinate systems. These are correct to first order in interocular distance I/ S ( I/ R) and in the vergence angles H Δ, V Δ, and T Δ. They hold all over the retina and for any cyclopean gaze H c or elevation V c, provided there is no overall cycloversion, T c = 0.
Horizontal disparity With zero overall cycloversion, T c = 0
In planar Cartesian retinal coordinates as a function of spatial position in gaze-centered coordinates x Δ ≈ I S ( U S H c sin − H c cos ) + ( U 2 S 2 + 1 ) H Δ − [ U ⁢ W S 2 H c cos + W S H c sin ] V Δ − W S T Δ
In planar Cartesian retinal coordinates as a function of retinal location in planar Cartesian coordinates x Δ ≈ − ( x c H c sin + H c cos ) I S + ( x c 2 + 1 ) H Δ + ( y c H c sin − x c y c H c cos ) V Δ + y c T Δ
In azimuth longitude, as a function of spatial location in gaze-centered coordinates α Δ ≈ 1 S 2 + U 2 { [ U H c sin − S H c cos ] I + ( S 2 + U 2 ) H Δ − [ U ⁢ W H c cos + W ⁢ S H c sin ] V Δ − W ⁢ S T Δ }
In azimuth longitude, as a function of retinal location in azimuth-longitude/ elevation-longitude coordinates α Δ ≈ − I S cos α c cos ( H c − α c ) + H Δ + V Δ cos α c tan η c sin ( H c − α c ) + T Δ cos 2 α c tan η c
In azimuth longitude, as a function of retinal location in azimuth-longitude/ elevation-latitude coordinates α Δ ≈ − I S cos α c cos ( H c − α c ) + H Δ + V Δ tan κ c sin ( H c − α c ) + T Δ cos α c tan κ c α Δ ≈ − I R sec κ c cos ( H c − α c ) + H Δ + V Δ tan κ c sin ( H c − α c ) + ( cos α c tan κ c ) T Δ
In azimuth latitude, as a function of spatial location in gaze-centered coordinates β Δ = −I(−sin H c US + cos H c (W² + S²)) / [(U² + W² + S²) √(W² + S²)] − (−S H Δ + W sin H c V Δ + W T Δ) / √(W² + S²)
In azimuth latitude, as a function of retinal location in azimuth-latitude/ elevation-longitude coordinates β Δ = − I R ( H c sin sin β c cos η c + H c cos cos β c ) + H Δ T c cos cos η c + V Δ H c sin cos η c tan η c + T Δ sin η c
Table C4
 
Expressions for vertical disparity in different coordinate systems. These are correct to first order in interocular distance I and in the vergence angles H Δ, V Δ, and T Δ. They hold all over the retina and for any cyclopean gaze H c or elevation V c, provided there is no overall cycloversion, T c = 0.
Vertical disparity With zero overall cycloversion, T c = 0
In planar Cartesian retinal coordinates as a function of spatial position in gaze-centered coordinates y Δ ≈ I ⁢ W S 2 H c sin + H Δ U ⁢ W S 2 + [ U S H c sin − ( W 2 S 2 + 1 ) H c cos ] V Δ + U S T Δ
In planar Cartesian retinal coordinates as a function of retinal location in planar Cartesian coordinates y Δ ≈ − I S y c H c sin + H Δ x c y c − [ x c H c sin + ( y c 2 + 1 ) H c cos ] V Δ − x c T Δ
In elevation longitude, as a function of spatial location in gaze-centered coordinates η Δ ≈ 1 W 2 + S 2 { [ W H c sin ] I + U ⁢ W H Δ − [ ( W 2 + S 2 ) H c cos − U ⁢ S H c sin ] V Δ + U ⁢ S T Δ }
In elevation longitude, as a function of retinal location in azimuth-longitude/elevation-longitude coordinates η Δ ≈ − I S H c sin sin η c cos η c + H Δ tan α c sin η c cos η c − V Δ ( H c cos + tan α c cos 2 η c H c sin ) − tan α c cos 2 η c T Δ
In elevation longitude, as a function of retinal location in azimuth-latitude/elevation-longitude coordinates η Δ ≈ − I S H c sin sin η c cos η c + H Δ tan β c sin η c − V Δ ( H c cos + tan β c cos η c H c sin ) − tan β c cos η c T Δ η Δ ≈ − I R H c sin sin η c cos β c + H Δ tan β c sin η c − V Δ ( H c cos + tan β c cos η c H c sin ) − tan β c cos η c T Δ
In elevation latitude, as a function of spatial location in gaze-centered coordinates κ Δ = I(cos H c UW + sin H c WS) / [(U² + W² + S²) √(U² + S²)] − (S cos H c V Δ − U sin H c V Δ − U T Δ) / √(U² + S²)
In elevation latitude, as a function of retinal location in azimuth-longitude/elevation-latitude coordinates κ Δ = − I R sin κ c sin ( H c − α c ) − V Δ cos ( H c − α c ) − sin α c T Δ
Zero overall cycloversion, cyclovergence, and vertical vergence error
Table C5
 
Expressions for horizontal disparity in different coordinate systems. These are correct to first order in interocular distance I/ S ( I/ R) and in the convergence angle H Δ. They assume cycloversion, cyclovergence, and vertical vergence are all zero: T c = T Δ = V Δ = 0. They hold all over the retina and for any cyclopean gaze H c or elevation V c.
Horizontal disparity For zero torsion and vertical vergence error
In planar Cartesian retinal coordinates as a function of retinal location in planar Cartesian coordinates x Δ ≈ − ( x c H c sin + H c cos ) I S + ( x c 2 + 1 ) H Δ
In planar Cartesian retinal coordinates as a function of spatial position in gaze-centered coordinates x Δ ≈ I S 2 ( U H c sin − S H c cos ) + U 2 + S 2 S 2 H Δ
In azimuth longitude, as a function of spatial location in gaze-centered coordinates α Δ ≈ I S 2 + U 2 ( U H c sin − S H c cos ) + H Δ
In azimuth longitude, as a function of retinal location in azimuth-longitude/elevation-longitude coordinates α Δ ≈ − I S cos α c cos ( α c − H c ) + H Δ
In azimuth longitude, as a function of retinal location in azimuth-longitude/elevation-latitude coordinates (same as above since α Δ is then independent of retinal elevation)
In azimuth latitude, as a function of spatial location in gaze-centered coordinates β Δ = I(sin H c US − cos H c (W² + S²)) / [(U² + W² + S²) √(W² + S²)] + S H Δ / √(W² + S²)
In azimuth latitude, as a function of retinal location in azimuth-latitude/elevation-longitude coordinates β Δ = − I S cos β c cos η c ( H c sin sin β c cos η c + H c cos cos β c ) + cos η c H Δ
Table C6
 
Expressions for vertical disparity in different coordinate systems. These are correct to first order in interocular distance I/ S ( I/ R) and in the convergence angle H Δ. They assume cycloversion, cyclovergence, and vertical vergence are all zero: T c = T Δ = V Δ = 0. They hold all over the retina and for any cyclopean gaze H c or elevation V c.
Vertical disparity For zero torsion and vertical vergence error
In planar Cartesian retinal coordinates as a function of retinal location in planar Cartesian coordinates y Δ ≈ y c ( − H c sin I S + x c H Δ )
In planar Cartesian retinal coordinates as a function of spatial position in gaze-centered coordinates y Δ ≈ W S 2 ( I H c sin + U H Δ )
In elevation longitude, as a function of spatial location in gaze-centered coordinates η Δ ≈ W W 2 + S 2 ( I ⁢ sin H c + U H Δ )
In elevation longitude, as a function of retinal location in azimuth-longitude/elevation-longitude coordinates η Δ ≈ ⁢ sin η c cos η c ( − I S sin H c + H Δ tan α c )
In elevation longitude, as a function of retinal location in azimuth-latitude/elevation-longitude coordinates η Δ ≈ ⁢ sin η c ( − I S sin H c cos η c + H Δ tan β c )
In elevation latitude, as a function of spatial location in gaze-centered coordinates κ Δ = I W(U cos H c + S sin H c) / [(U² + W² + S²) √(U² + S²)]
In elevation latitude, as a function of retinal location in azimuth-longitude/elevation-latitude coordinates κ Δ = (I/S) sin κ c cos κ c cos α c sin(α c − H c) = (I/R) sin κ c sin(α c − H c)
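As a worked example of the simplest entry in Table C6 (zero torsion, cyclovergence, and vertical vergence error), the following short Matlab calculation (ours; the numbers are purely illustrative) evaluates κ Δ ≈ (I/R) sin κ c sin(α c − H c).

% kappa_Delta ~ (I/R) sin(kappa_c) sin(alpha_c - Hc), Table C6, last row.
I  = 0.065;              % interocular distance, m
R  = 0.5;                % distance of the viewed point from the head origin, m
Hc = 15*pi/180;          % cyclopean gaze azimuth (positive = leftward)
ac = -20*pi/180;         % azimuth longitude of the point on the cyclopean retina
kc = 10*pi/180;          % elevation latitude of the point on the cyclopean retina
kappaD = (I/R)*sin(kc)*sin(ac - Hc);   % about -0.013 rad, i.e. roughly -0.7 deg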
Acknowledgments
This research was supported by the Royal Society (University Research Fellowship UF041260 to JCAR), MRC (New Investigator Award 80154 to JCAR), the Wellcome Trust (Grant 086526/A/08/Z to AG), and the EPSRC (Neuroinformatics Doctoral Training Centre studentship to GPP). 
Commercial relationships: none. 
Corresponding author: Jenny C. A. Read. 
Email: j.c.a.read@ncl.ac.uk. 
Address: Henry Wellcome Building, Framlington Place, Newcastle upon Tyne NE2 4HH, UK. 
References
Adams, W. Frisby, J. P. Buckley, D. Garding, J. Hippisley-Cox, S. D. Porrill, J. (1996). Pooling of vertical disparities by the human visual system. Perception, 25, 165–176. [PubMed] [CrossRef] [PubMed]
Allison, R. S. Rogers, B. J. Bradshaw, M. F. (2003). Geometric and induced effects in binocular stereopsis and motion parallax. Vision Research, 43, 1879–1893. [PubMed] [CrossRef] [PubMed]
Backus, B. T. Banks, M. S. (1999). Estimator reliability and distance scaling in stereoscopic slant perception. Perception, 28, 217–242. [PubMed] [CrossRef] [PubMed]
Backus, B. T. Banks, M. S. van Ee, R. Crowell, J. A. (1999). Horizontal and vertical disparity, eye position, and stereoscopic slant perception. Vision Research, 39, 1143–1170. [PubMed] [CrossRef] [PubMed]
Banks, M. S. Backus, B. T. (1998). Extra-retinal and perspective cues cause the small range of the induced effect. Vision Research, 38, 187–194. [PubMed] [CrossRef] [PubMed]
Banks, M. S. Backus, B. T. Banks, R. S. (2002). Is vertical disparity used to determine azimuth? Vision Research, 42, 801–807. [PubMed] [CrossRef] [PubMed]
Banks, M. S. Hooge, I. T. Backus, B. T. (2001). Perceiving slant about a horizontal axis from stereopsis. Journal of Vision, 1, (2):1, 55–79, http://journalofvision.org/1/2/1/, doi:10.1167/1.2.1. [PubMed] [Article] [CrossRef] [PubMed]
Barlow, H. B. Blakemore, C. Pettigrew, J. D. (1967). The neural mechanisms of binocular depth discrimination. The Journal of Physiology, 193, 327–342. [PubMed] [Article] [CrossRef] [PubMed]
Berends, E. M. Erkelens, C. J. (2001). Strength of depth effects induced by three types of vertical disparity. Vision Research, 41, 37–45. [PubMed] [CrossRef] [PubMed]
Berends, E. M. van Ee, R. Erkelens, C. J. (2002). Vertical disparity can alter perceived direction. Perception, 31, 1323–1333. [PubMed] [CrossRef] [PubMed]
Bishop, P. O. (1989). Vertical disparity, egocentric distance and stereoscopic depth constancy: A new interpretation. Proceedings of the Royal Society of London B: Biological Sciences, 237, 445–469. [PubMed] [CrossRef]
Bishop, P. O. Kozak, W. Vakkur, G. J. (1962). Some quantitative aspects of the cat's eye: Axis and plane of reference, visual field co-ordinates and optics. The Journal of Physiology, 163, 466–502. [PubMed] [Article] [CrossRef] [PubMed]
Brenner, E. Smeets, J. B. Landy, M. S. (2001). How vertical disparities assist judgements of distance. Vision Research, 41, 3455–3465. [PubMed] [CrossRef] [PubMed]
Carpenter, R. H. (1988). Movements of the eyes. London, UK: Pion Ltd.
Clement, R. A. (1992). Gaze angle explanations of the induced effect. Perception, 21, 355–357. [PubMed] [CrossRef] [PubMed]
Cumming, B. G. (2002). An unexpected specialization for horizontal disparity in primate primary visual cortex. Nature, 418, 633–636. [PubMed] [CrossRef] [PubMed]
Cumming, B. G. Johnston, E. B. Parker, A. J. (1991). Vertical disparities and perception of three-dimensional shape. Nature, 349, 411–413. [PubMed] [CrossRef] [PubMed]
Duke, P. A. Howard, I. P. (2005). Vertical-disparity gradients are processed independently in different depth planes. Vision Research, 45, 2025–2035. [PubMed] [CrossRef] [PubMed]
Durand, J. B. Celebrini, S. Trotter, Y. (2007). Neural bases of stereopsis across visual field of the alert macaque monkey. Cerebral Cortex, 17, 1260–1273. [PubMed] [CrossRef] [PubMed]
Durand, J. B. Zhu, S. Celebrini, S. Trotter, Y. (2002). Neurons in parafoveal areas V1 and V2 encode vertical and horizontal disparities. Journal of Neurophysiology, 88, 2874–2879. [PubMed] [Article] [CrossRef] [PubMed]
Erkelens, C. J. van Ee, R. (1998). A computational model of depth perception based on head-centric disparity. Vision Research, 38, 2999–3018. [PubMed] [CrossRef] [PubMed]
Friedman, R. B. Kaye, M. G. Richards, W. (1978). Effect of vertical disparity upon stereoscopic depth. Vision Research, 18, 351–352. [PubMed] [CrossRef] [PubMed]
Frisby, J. P. Buckley, D. Grant, H. Garding, J. Horsman, J. M. Hippisley-Cox, S. D. (1999). An orientation anisotropy in the effects of scaling vertical disparities. Vision Research, 39, 481–492. [PubMed] [CrossRef] [PubMed]
Garding, J. Porrill, J. Mayhew, J. E. Frisby, J. P. (1995). Stereopsis, vertical disparity and relief transformations. Vision Research, 35, 703–722. [PubMed] [CrossRef] [PubMed]
Gillam, B. Chambers, D. Lawergren, B. (1988). The role of vertical disparity in the scaling of stereoscopic depth perception: An empirical and theoretical study. Perception & Psychophysics, 44, 473–483. [PubMed] [CrossRef] [PubMed]
Gillam, B. Lawergren, B. (1983). The induced effect, vertical disparity, and stereoscopic theory. Perception & Psychophysics, 340, 121–130. [PubMed] [CrossRef]
Gonzalez, F. Justo, M. S. Bermudez, M. A. Perez, R. (2003). Sensitivity to horizontal and vertical disparity and orientation preference in areas V1 and V2 of the monkey. Neuroreport, 14, 829–832. [PubMed] [CrossRef] [PubMed]
Gur, M. Snodderly, D. M. (1997). Visual receptive fields of neurons in primary visual cortex (V1) move in space with the eye movements of fixation. Vision Research, 37, 257–265. [PubMed] [CrossRef] [PubMed]
Hartley, R. Zisserman, A. (2000). Multiple view geometry in computer vision. Cambridge, UK: Cambridge University Press.
Helmholtz, H. v. (1925). Treatise on physiological optics. Rochester, NY: Optical Society of America.
Hibbard, P. B. (2007). A statistical model of binocular disparity. Visual Cognition, 15, 149–165. [CrossRef]
Howard, I. P. (2002). Seeing in depth, volume 1: Basic mechanisms. Ontario, Canada: I Porteous.
Howard, I. P. Allison, R. S. Zacher, J. E. (1997). The dynamics of vertical vergence. Experimental Brain Research, 116, 153–159. [PubMed] [CrossRef] [PubMed]
Howard, I. P. Rogers, B. J. (2002). Seeing in depth, volume 2: Depth perception. Ontario, Canada: I Porteous.
Kaneko, H. Howard, I. P. (1996). Relative size disparities and the perception of surface slant. Vision Research, 36, 1919–1930. [PubMed] [CrossRef] [PubMed]
Kaneko, H. Howard, I. P. (1997a). Spatial limitation of vertical-size disparity processing. Vision Research, 37, 2871–2878. [PubMed] [CrossRef]
Kaneko, H. Howard, I. P. (1997b). Spatial properties of shear disparity processing. Vision Research, 37, 315–323. [PubMed] [CrossRef]
Koenderink, J. J. van Doorn, A. J. (1976). Geometry of binocular vision and a model for stereopsis. Biological Cybernetics, 21, 29–35. [PubMed] [CrossRef] [PubMed]
Liu, L. Stevenson, S. B. Schor, C. M. (1994). A polar coordinate system for describing binocular disparity. Vision Research, 34, 1205–1222. [PubMed] [CrossRef] [PubMed]
Liu, Y. Bovik, A. C. Cormack, L. K. (2008). Disparity statistics in natural scenes. Journal of Vision, 8, (11):19, 1–14, http://journalofvision.org/8/11/19/, doi:10.1167/8.11.19. [PubMed] [Article] [CrossRef] [PubMed]
Longuet-Higgins, H. C. (1981). A computer algorithm for reconstructing a scene from two projections. Nature, 293, 133–135. [CrossRef]
Longuet-Higgins, H. C. (1982). The role of the vertical dimension in stereoscopic vision. Perception, 11, 377–386. [PubMed] [CrossRef] [PubMed]
Matthews, N. Meng, X. Xu, P. Qian, N. (2003). A physiological theory of depth perception from vertical disparity. Vision Research, 43, 85–99. [PubMed] [CrossRef] [PubMed]
Mayhew, J. E. (1982). The interpretation of stereo-disparity information: The computation of surface orientation and depth. Perception, 11, 387–403. [PubMed] [CrossRef] [PubMed]
Mayhew, J. E. Longuet-Higgins, H. C. (1982). A computational model of binocular depth perception. Nature, 297, 376–378. [PubMed] [CrossRef] [PubMed]
Minken, A. W. Van Gisbergen, J. A. (1994). A three-dimensional analysis of vergence movements at various levels of elevation. Experimental Brain Research, 101, 331–345. [PubMed] [CrossRef] [PubMed]
Mok, D. Ro, A. Cadera, W. Crawford, J. D. Vilis, T. (1992). Rotation of Listing's plane during vergence. Vision Research, 32, 2055–2064. [PubMed] [CrossRef] [PubMed]
Ogle, K. N. (1952). Space perception and vertical disparity. Journal of the Optical Society of America, 42, 145–146. [PubMed] [CrossRef] [PubMed]
Pierce, B. J. Howard, I. P. (1997). Types of size disparity and the perception of surface slant. Perception, 26, 1503–1517. [PubMed] [CrossRef] [PubMed]
Porrill, J. Mayhew, J. E. W. Frisby, J. P. Ullman, S. Richards, W. (1990). Cyclotorsion, conformal invariance and induced effects in stereoscopic vision. Image understanding 1989. –196). Norwood, NJ: Ablex Publishing.
Read, J. C. A. Cumming, B. G. (2003). Measuring V1 receptive fields despite eye movements in awake monkeys. Journal of Neurophysiology, 90, 946–960. [PubMed] [Article] [CrossRef] [PubMed]
Read, J. C. A. Cumming, B. G. (2004). Understanding the cortical specialization for horizontal disparity. Neural Computation, 16, 1983–2020. [PubMed] [Article] [CrossRef] [PubMed]
Read, J. C. A. Cumming, B. G. (2006). Does visual perception require vertical disparity detectors? Journal of Vision, 6, (12):1, 1323–1355, http://journalofvision.org/6/12/1/, doi:10.1167/6.12.1. [PubMed] [Article] [CrossRef] [PubMed]
Rogers, B. J. Bradshaw, M. F. (1993). Vertical disparities, differential perspective and binocular stereopsis. Nature, 361, 253–255. [PubMed] [CrossRef] [PubMed]
Rogers, B. J. Bradshaw, M. F. (1995). Disparity scaling and the perception of frontoparallel surfaces. Perception, 24, 155–179. [PubMed] [CrossRef] [PubMed]
Rogers, B. J. Cagenello, R. (1989). Disparity curvature and the perception of three-dimensional surfaces. Nature, 339, 135–137. [PubMed] [CrossRef] [PubMed]
Rogers, B. J. Koenderink, J. (1986). Monocular aniseikonia: A motion parallax analogue of the disparity-induced effect. Nature, 322, 62–63. [PubMed] [CrossRef] [PubMed]
Schreiber, K. Crawford, J. D. Fetter, M. Tweed, D. (2001). The motor side of depth vision. Nature, 410, 819–822. [PubMed] [CrossRef] [PubMed]
Serrano-Pedraza, I. Phillipson, G. P. Read, J. C. A. (in press). Journal of Vision.
Serrano-Pedraza, I. Read, J. C. A. (2009). Stereo vision requires an explicit encoding of vertical disparity. Journal of Vision, 9, (4):3, 1–13, http://journalofvision.org/9/4/3/, doi:10.1167/9.4.3. [PubMed] [Article] [CrossRef] [PubMed]
Somani, R. A. DeSouza, J. F. Tweed, D. Vilis, T. (1998). Visual test of Listing's law during vergence. Vision Research, 38, 911–923. [PubMed] [CrossRef] [PubMed]
Stenton, S. P. Frisby, J. P. Mayhew, J. E. (1984). Vertical disparity pooling and the induced effect. Nature, 309, 622–623. [PubMed] [CrossRef] [PubMed]
Stevenson, S. B. Schor, C. M. (1997). Human stereo matching is not restricted to epipolar lines. Vision Research, 37, 2717–2723. [PubMed] [CrossRef] [PubMed]
Tweed, D. Fetter, M. Haslwanter, T. P. Misslisch, H. Tweed, D. (1997a). Kinematic principles of three-dimensional gaze control. Three-dimensional kinematics of eye, head and limb movements. Amsterdam: Harwood Academic Publishers.
Tweed, D. (1997b). Three-dimensional model of the human eye-head saccadic system. Journal of Neurophysiology, 77, 654–666. [PubMed] [Article]
Tweed, D. (1997c). Visual-motor optimization in binocular control. Vision Research, 37, 1939–1951. [PubMed] [CrossRef]
Van Rijn, L. J. Van den Berg, A. V. (1993). Binocular eye orientation during fixations: Listing's law extended to include eye vergence. Vision Research, 33, 691–708. [PubMed] [CrossRef] [PubMed]
Westheimer, G. (1978). Vertical disparity detection: Is there an induced size effect? Investigative Ophthalmology Vision Science, 17, 545–551. [PubMed] [Article]
Williams, T. D. (1970). Vertical disparity in depth perception. American Journal of Optometry and Archives of American Academy of Optometry, 47, 339–344. [PubMed] [CrossRef] [PubMed]
Figure 1
 
Two coordinate systems for describing head-centric or optic-array disparity. Red lines are drawn from the two nodal points L, R to an object P. (A) Helmholtz coordinates. Here, we first rotate up through the elevation angle λ to get us into the plane LRP, and the azimuthal coordinate ζ rotates the lines within this plane until they point to P. The elevation is thus the same for both eyes; no physical object can have a vertical disparity in optic-array Helmholtz coordinates. (B) Fick coordinates. Here, the azimuthal rotation ζ is applied within the horizontal plane, and the elevation λ then lifts each red line up to point at P. Thus, elevation is in general different for the two lines, meaning that object P has a vertical disparity in optic-array Fick coordinates.
Figure 2
 
Different retinal coordinate systems. (A) Cartesian planar. Here, x and y refer to position on the virtual plane behind the retina; the “shadow” shows where points on the virtual plane correspond to on the retina, i.e., where a line drawn from the virtual plane to the center of the eyeball intersects the eyeball. (B) Azimuth longitude/elevation longitude. (C) Azimuth longitude/elevation latitude. (D) Azimuth latitude/elevation longitude. (E) Azimuth latitude/elevation latitude. For the (B–E) angular coordinate systems, lines of latitude/longitude are drawn at 15° intervals between ±90°. For the (A) Cartesian system, the lines of constant x and y are at intervals of 0.27 = tan15°. Lines of constant x are also lines of constant α, but lines that are equally spaced in x are not equally spaced in α. In this paper, we use the sign convention that positive x, α, β represent the left half of the retina, and positive y, η, κ represent the top. This figure was generated in Matlab by the program Fig_DiffRetCoords.m in the Supplementary material.
Figure 3
 
Two definitions of vertical retinal disparity. (AB) A point in space, P, projects to different positions I L and I R on the two retinas. (CD) The two retinas are shown superimposed, with the two half-images of P shown in red and blue for the left and right retinas, respectively. In (AC), the retinal coordinate system is azimuth longitude/elevation longitude. In (BD), it is azimuth longitude/elevation latitude. Point P and its images I L and I R are identical between (AC) and (BD); the only difference between left and right halves of the figure is the coordinate system drawn on the retinas. The eyes are converged 30° fixating a point on the midline: X = 0, Y = 0, Z = 11. The plane of gaze, the XZ plane, is shown in gray. Lines of latitude and longitude are drawn at 15° intervals. Point P is at X = −6, Y = 7, Z = 10. In elevation-longitude coordinates, the images of P fall at η L = −30°, η R = −38°, so the vertical disparity η Δ is −8°. In elevation latitude, κ L = −27°, κ R = −34°, and the vertical disparity κ Δ = −6°. This figure was generated by Fig_VDispDefinition.m in the Supplementary material.
Figure 4
 
Helmholtz coordinates for eye position (A) shown as a gimbal, after Howard (2002, Figure 9.10) and (B) shown for the cyclopean eye. The sagittal YZ plane is shown in blue, the horizontal XZ plane in pink, and the gaze plane in yellow. There are two ways of interpreting Helmholtz coordinates: (1) Starting from primary position, the eye first rotates through an angle T about an axis through the nodal point parallel to Z, then through H about an axis parallel to Y, and finally through V about an axis parallel to X. Equivalently, (2) starting from primary position, the eye first rotates downward through V, bringing the optic axis into the desired gaze plane (shown in yellow) then rotates through H about an axis orthogonal to the gaze plane, and finally through T about the optic axis. Panel B was generated by the program Fig_HelmholtzEyeCoords.m in the Supplementary material.
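As a concrete reading of the two interpretations in this caption, the sketch below (illustrative only; the sign of each elementary rotation depends on the conventions fixed in Appendix A, so individual signs may need flipping, and the example angles are arbitrary) builds the eye's rotation matrix from the three Helmholtz angles.

% Helmholtz angles as a product of elementary rotations about the head-fixed
% axes: torsion T about Z, then azimuth H about Y, then elevation V about X
% (interpretation 1 in the caption). Interpretation 2 (the same three angles
% applied in reverse order, about axes that rotate with the eye) yields the
% identical matrix. Angles in degrees.
Rz = @(T)[cosd(T) -sind(T) 0; sind(T) cosd(T) 0; 0 0 1];
Ry = @(H)[cosd(H) 0 sind(H); 0 1 0; -sind(H) 0 cosd(H)];
Rx = @(V)[1 0 0; 0 cosd(V) -sind(V); 0 sind(V) cosd(V)];
H = 15;  V = -10;  T = 2;
M = Rx(V) * Ry(H) * Rz(T);        % net eye rotation
gaze = M * [0; 0; 1];             % optic axis after the rotation (primary position is +Z)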
Figure 5
 
Different ways of measuring the distance to the object P. The two physical eyes are shown in gold; the cyclopean eye is in between them, in blue. F is the fixation point; the brown lines mark the optic axes, and the blue line marks the direction of the cyclopean gaze. Point P is marked with a red dot. It is at a distance R from the origin. Its perpendicular projection on the cyclopean gaze axis is also drawn in red (with a corner indicating the right angle); the distance of this projection from the origin is S, marked with a thick red line. This figure was generated by the program Fig_DistancesRS.m in the Supplementary material.
Figure 6
 
Definition of retinal eccentricity ξ: the eccentricity of Point E is the angle E Ĉ V, where C is the center of the eyeball and V is the center of the fovea.
Figure 7
 
Expected vertical disparity in natural viewing, as a function of position in the cyclopean retina, for (a) elevation-longitude and (b) elevation-latitude definitions of vertical disparity. Vertical disparity is measured in units of 〈 H Δ〉, the mean convergence angle. Because the vertical disparity is small over much of the retina, we have scaled the pseudocolor as indicated in the color bar, so as to concentrate most of its dynamic range on small values. White contour lines show values in 0.1 steps from −1 to 1. This figure was generated by Fig_ExpectedVDisp.m in the Supplementary material.
Figure 8
 
Vertical disparity field all over the retina, where the visual scene is a frontoparallel plane, i.e., constant head-centered coordinate Z. (A, B): Z = 60 cm; (C, D): Z = 10 m. The interocular distance was 6.4 cm, the gaze angle H_c = 15°, and the convergence angle H_Δ = 5.7°, i.e., such as to fixate the plane at Z = 60 cm. Vertical disparity is defined as the difference in (A, C) elevation longitude and (B, D) elevation latitude. Lines of azimuth longitude and (A, C) elevation longitude, (B, D) elevation latitude are marked in black at 15° intervals. The white line shows where the vertical disparity is zero. The fovea is marked with a black dot. The same pseudocolor scale is used for all four panels. Note that the elevation-longitude disparity, η_Δ, goes beyond the color scale at the edges of the retina, since it tends to infinity as |α_c| tends to 90°. This figure was generated by DiagramOfVerticalDisparity_planes.m in the Supplementary material.
Figure 9
 
Epipolar line and how it differs from the “line of possible disparities” shown below in (D). (A) How an epipolar line is calculated: it is the set of all possible points on the right retina (heavy blue curve), which could correspond to the same point in space as a given point on the left retina (red dot). (B) Epipolar line plotted on the planar retina. Blue dots show 3 possible matches in the right eye for a fixed point in the left retina (red dot). The cyclopean location or visual direction (mean of left and right retinal positions, black dots) changes as one moves along the epipolar line. (C) Possible matches for a given cyclopean position (black dot). Here, we keep the mean location constant and consider pairs of left/right retinal locations with the same mean. (D) Line of possible disparities implied by the matches in (B). These are simply the vectors linking left to right retinal positions for each match (pink lines). Together, these build up a line of possible disparities (green line). Panel A was generated by Fig_EpipolarLine.m in the Supplementary material.
Figure 10
 
The thick green line shows the line of two-dimensional disparities that are physically possible for real objects, for the given eye posture (specified by convergence H Δ and gaze azimuth H c) and the given visual direction (specified by retinal azimuth α c and elevation κ c). The green dot shows where the line terminates on the abscissa. For any given object, where its disparity falls on the green line depends on the distance to the object at this visual direction. The white circle shows one possible distance. Although, for clarity, the green line is shown as having quite a steep gradient, in reality it is very shallow close to the fovea. Thus, it is often a reasonable approximation to assume that the line is flat in the vicinity of the distance one is considering (usually the fixation distance), as indicated by the horizontal green dashed line. This is considered in more detail in the next section.
Figure 11
 
Partial differentiation on the retina. The cyclopean retina is shown colored to indicate the value of the vertical disparity field at each point. Differentiating with respect to elevation κ while holding azimuth constant means finding the rate at which vertical disparity changes as one moves up along a line of azimuth longitude, as shown by the arrow labeled ∂/∂ κ. Differentiating with respect to azimuth α, while holding elevation constant, means finding the rate of change as one moves around a line of elevation latitude. This figure was generated by Fig_DifferentiatingAtFovea.m in the Supplementary material.
Figure 12
 
Scatterplots of estimated eye position parameters against actual values, both in degrees, for 1000 different simulated eye positions. Black lines show the identity line. Some points with large errors fall outside the range of the plots, but the quoted median absolute errors are for all 1000 simulations. On each simulation run, eye position was estimated as follows. First, the viewed surface was randomly generated. Head-centered X and Y coordinates were generated randomly near the fixation point (X_F, Y_F, Z_F). Surface Z-coordinates were generated from Z_d = Σ_{ij} a_{ij} X_d^i Y_d^j, where X_d is the X-position relative to fixation, X_d = X − X_F (Y_d, Z_d similarly, all in centimeters), i and j both run from 0 to 3, and the coefficients a_{ij} are picked from a uniform random distribution between ±0.02 on each simulation run. This yielded a set of points on a randomly chosen smooth 3D surface near fixation. These points were then projected to the retinas, and the vertical disparity within 0.5° of the fovea was fitted with a parabolic surface. This simulation is Matlab program ExtractEyePosition.m in the Supplementary material.
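The surface-generation step described here is easy to sketch. The snippet below illustrates that step only (the full estimation procedure lives in the paper's ExtractEyePosition.m); the fixation point, number of points, and sampling range are placeholder values.

% Build a random smooth surface around the fixation point, as described in the
% caption: Zd = sum_ij a_ij * Xd^i * Yd^j with i,j = 0..3 and a_ij uniform in
% +/-0.02 (all coordinates in cm, relative to fixation).
XF = 2;  YF = -1;  ZF = 50;              % placeholder fixation point (cm)
N  = 500;                                % placeholder number of surface points
a  = -0.02 + 0.04*rand(4,4);             % a(i+1,j+1) multiplies Xd^i * Yd^j
Xd = -2 + 4*rand(N,1);                   % points within a few cm of fixation
Yd = -2 + 4*rand(N,1);
Zd = zeros(N,1);
for i = 0:3
    for j = 0:3
        Zd = Zd + a(i+1,j+1) .* Xd.^i .* Yd.^j;
    end
end
X = Xd + XF;  Y = Yd + YF;  Z = Zd + ZF;  % head-centered points, ready to be projected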
Figure A1
 
Head-centered coordinate system used throughout this paper. The origin is the point midway between the two eyes. The X-axis is defined by the nodal points of the two eyes and points leftward. The orientation of the XZ plane is defined by primary position but is approximately horizontal. The Y-axis points upward and the Z-axis points in front of the observer.
Table 1
 
Summary of the different properties of the two definitions of retinal vertical disparity in the absence of vertical vergence error and torsion.
Properties in the absence of vertical vergence error and torsion (T_c = T_Δ = V_Δ = 0):

Vertical disparity defined as the difference in retinal elevation longitude, η_Δ:
- Is zero for objects in the plane of gaze.
- Is zero when the eyes are in primary position, for objects at any distance, anywhere on the retina.
- Increases as the eyes converge.
- May be non-zero even for objects at infinity, if the eyes are converged.
- Is proportional to the sine of twice the elevation longitude.
- Is not necessarily zero for objects on the midsagittal plane.
- For fixation on the midline, is independent of object distance for a given convergence angle.

Vertical disparity defined as the difference in retinal elevation latitude, κ_Δ:
- Is zero for objects in the plane of gaze.
- Is zero for objects at infinity.
- Is inversely proportional to the object's distance.
- Is independent of convergence for objects at a given distance.
- May be non-zero even when the eyes are in primary position.
- Is proportional to the sine of the elevation latitude.
- Is zero for objects on the midsagittal plane.
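For reference, these properties can be read off the zero-torsion, zero-vertical-vergence approximations derived in Appendix C (Table C6). With angles in radians, and S and R the object distances defined in Figure 5,

$$\eta_\Delta \approx \tfrac{1}{2}\sin 2\eta_c\left(-\frac{I}{S}\sin H_c + H_\Delta\tan\alpha_c\right), \qquad \kappa_\Delta \approx \frac{I}{R}\,\sin\kappa_c\,\sin(\alpha_c - H_c).$$

The $\tfrac{1}{2}\sin 2\eta_c$ factor is the "sine of twice the elevation longitude" dependence of η_Δ, while the $I/R$ and $\sin\kappa_c$ factors give the inverse-distance and "sine of the elevation latitude" dependences of κ_Δ.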
Table A1
 
Meaning of subscripts.
L left eye
R right eye
Δ difference between left and right eye values, e.g., convergence angle H_Δ = H_R − H_L
δ half-difference between left and right eye values, e.g., half-convergence H_δ = (H_R − H_L)/2
c cyclopean eye (mean of left and right eye values), e.g., cyclopean gaze angle H_c = (H_R + H_L)/2
Table A2
 
Definition of symbols.
I interocular distance
i half-interocular distance, i = I/2
k, l integer counters taking on values 1, 2, 3
M L, M R rotation matrix for left and right eyes, respectively
M c cyclopean rotation matrix, M c = ( M R + M L)/2
M_δ half-difference rotation matrix, M_δ = (M_R − M_L)/2
m the vectors m_j are the three columns of the corresponding rotation matrix M, e.g., m_c1 = [M_c11 M_c21 M_c31]; m_δ2 = [M_δ12 M_δ22 M_δ32] (Equation A6)
H L,R,c gaze azimuth in Helmholtz system for left, right, and cyclopean eyes
V L,R,c gaze elevation in Helmholtz system for left, right, and cyclopean eyes
T L,R,c gaze torsion in Helmholtz system for left, right, and cyclopean eyes
H Δ horizontal convergence angle
V Δ vertical vergence misalignment (non-zero values indicate a failure of fixation)
T Δ cyclovergence
X, Y, Z position in space in Cartesian coordinates fixed with respect to the head ( Figure A1)
X̂ unit vector parallel to the X-axis
P vector representing position in space in head-centered coordinates: P = ( X, Y, Z)
U, W, S position in space in Cartesian coordinates fixed with respect to the cyclopean gaze. The S-axis is the optic axis of the cyclopean eye (see Figure 5)
R distance of an object from the origin. R² = X² + Y² + Z² = U² + W² + S² (see Figure 5)
R_0 distance of the fixation point from the origin (or distance to the point where the gaze rays most nearly intersect, if the eyes are misaligned so that no exact intersection occurs)
δ fractional difference between the fixation distance, R_0, and the distance to the object under consideration, R. That is, δ = (R − R_0)/R_0
x horizontal position on the retina in Cartesian coordinate system ( Figure 2A)
y vertical position on the retina in Cartesian coordinate system ( Figure 2A)
α azimuth-longitude coordinate for horizontal position on the retina ( Figures 2B and 2C)
η elevation-longitude coordinate for vertical position on the retina ( Figures 2B and 2D)
β azimuth-latitude or declination coordinate for horizontal position on the retina ( Figures 2D and 2E)
κ elevation-latitude or inclination coordinate for vertical position on the retina ( Figures 2C and 2E)
ξ retinal eccentricity ( Equation 14)
Table A3
 
The relationship between the quantities ( U, W, S), giving an object's location in gaze-centered coordinates (cf. Figure 5), and that object's projection onto the cyclopean retina. The projection is given in planar Cartesian coordinates ( x c, y c) and as azimuth longitude α c, elevation longitude η c, azimuth latitude β c, and elevation latitude κ c. The object's head-centered coordinates ( X, Y, Z) will depend on eye position.
In planar Cartesian coordinates: $U \approx -S x_c$, $W \approx -S y_c$, with $R^2 = U^2 + W^2 + S^2 = X^2 + Y^2 + Z^2$.
In azimuth-longitude/elevation-latitude coordinates: $U \approx -S\tan\alpha_c$, $W \approx -S\tan\kappa_c\sec\alpha_c$, $S = R\cos\alpha_c\cos\kappa_c$.
In azimuth-latitude/elevation-longitude coordinates: $U \approx -S\tan\beta_c\sec\eta_c$, $W \approx -S\tan\eta_c$, $S = R\cos\beta_c\cos\eta_c$.
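As a quick check of these relations, the snippet below (an illustrative sketch, not part of the paper's supplementary code; the numerical values of U, W, and S are arbitrary) recovers the cyclopean retinal coordinates of a point from its gaze-centered position and confirms that $S = R\cos\alpha_c\cos\kappa_c = R\cos\beta_c\cos\eta_c$.

% Recover cyclopean retinal coordinates from a gaze-centered position (U, W, S),
% using the Table A3 relations. Example values are arbitrary; angles in degrees.
U = -3; W = 2; S = 40;
R      = norm([U W S]);                  % distance from the cyclopean eye
xc     = -U/S;   yc = -W/S;              % planar Cartesian retinal coordinates
alphac = atand(-U/S);                    % azimuth longitude
etac   = atand(-W/S);                    % elevation longitude
kappac = atand(-W*cosd(alphac)/S);       % elevation latitude: W = -S tan(kappa) sec(alpha)
betac  = atand(-U*cosd(etac)/S);         % azimuth latitude:  U = -S tan(beta) sec(eta)
% Both products below should equal S (here, 40):
check1 = R*cosd(alphac)*cosd(kappac);
check2 = R*cosd(betac)*cosd(etac);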
Table A4
 
Relationships between the different retinal coordinate systems shown in Figure 2.
(Coordinates shared by the source and target systems, e.g., α = α or κ = κ, carry over unchanged.)

Planar Cartesian $(x, y)$ (Figure 2A):
  from $(\alpha, \eta)$: $x = \tan\alpha$, $y = \tan\eta$
  from $(\alpha, \kappa)$: $x = \tan\alpha$, $y = \tan\kappa\sec\alpha$
  from $(\beta, \eta)$: $x = \tan\beta\sec\eta$, $y = \tan\eta$
  from $(\beta, \kappa)$: $x = \sin\beta/\sqrt{\cos^2\kappa - \sin^2\beta}$, $y = \sin\kappa/\sqrt{\cos^2\beta - \sin^2\kappa}$

Azimuth longitude, elevation longitude $(\alpha, \eta)$ (Figure 2B):
  from $(x, y)$: $\alpha = \arctan x$, $\eta = \arctan y$
  from $(\alpha, \kappa)$: $\eta = \arctan(\tan\kappa\sec\alpha)$
  from $(\beta, \eta)$: $\alpha = \arctan(\tan\beta\sec\eta)$
  from $(\beta, \kappa)$: $\alpha = \arcsin(\sin\beta\sec\kappa)$, $\eta = \arcsin(\sin\kappa\sec\beta)$

Azimuth longitude, elevation latitude $(\alpha, \kappa)$ (Fick; Figure 2C):
  from $(x, y)$: $\alpha = \arctan x$, $\kappa = \arctan\!\left(y/\sqrt{x^2+1}\right)$
  from $(\alpha, \eta)$: $\kappa = \arctan(\tan\eta\cos\alpha)$
  from $(\beta, \eta)$: $\alpha = \arctan(\tan\beta\sec\eta)$, $\kappa = \arcsin(\sin\eta\cos\beta)$
  from $(\beta, \kappa)$: $\alpha = \arcsin(\sin\beta\sec\kappa)$

Azimuth latitude, elevation longitude $(\beta, \eta)$ (Helmholtz; Figure 2D):
  from $(x, y)$: $\beta = \arctan\!\left(x/\sqrt{y^2+1}\right)$, $\eta = \arctan y$
  from $(\alpha, \eta)$: $\beta = \arctan(\tan\alpha\cos\eta)$
  from $(\alpha, \kappa)$: $\beta = \arcsin(\sin\alpha\cos\kappa)$, $\eta = \arctan(\tan\kappa\sec\alpha)$
  from $(\beta, \kappa)$: $\eta = \arcsin(\sin\kappa\sec\beta)$

Azimuth latitude, elevation latitude $(\beta, \kappa)$ (Figure 2E):
  from $(x, y)$: $\beta = \arctan\!\left(x/\sqrt{1+y^2}\right)$, $\kappa = \arctan\!\left(y/\sqrt{1+x^2}\right)$
  from $(\alpha, \eta)$: $\beta = \arctan(\tan\alpha\cos\eta)$, $\kappa = \arctan(\tan\eta\cos\alpha)$
  from $(\alpha, \kappa)$: $\beta = \arcsin(\sin\alpha\cos\kappa)$
  from $(\beta, \eta)$: $\kappa = \arcsin(\sin\eta\cos\beta)$
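For concreteness, the following snippet (illustrative only, not from the paper's supplementary code; the starting values are arbitrary) converts a retinal location from Fick to Helmholtz coordinates using the relations above, and then converts back.

% Round trip between Fick (azimuth longitude alpha, elevation latitude kappa)
% and Helmholtz (azimuth latitude beta, elevation longitude eta) coordinates,
% using the Table A4 relations. All angles in degrees.
alpha = 20;  kappa = -12;                      % arbitrary starting point (Fick)
beta   = asind(sind(alpha)*cosd(kappa));       % beta  = arcsin(sin(alpha) cos(kappa))
eta    = atand(tand(kappa)/cosd(alpha));       % eta   = arctan(tan(kappa) sec(alpha))
alpha2 = atand(tand(beta)/cosd(eta));          % alpha = arctan(tan(beta) sec(eta))
kappa2 = asind(sind(eta)*cosd(beta));          % kappa = arcsin(sin(eta) cos(beta))
% alpha2 and kappa2 recover the original alpha and kappa (up to rounding error).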
Table C1
 
Expressions for horizontal disparity in different coordinate systems. These are correct to first order in the interocular distance, i.e., in I/S (equivalently I/R), and in the vergence angles H_Δ, V_Δ, and T_Δ. They hold all over the retina and for any cyclopean gaze H_c, elevation V_c, or overall cycloversion T_c.
Horizontal disparity: most general expressions.

In planar Cartesian retinal coordinates, as a function of spatial position in gaze-centered coordinates: $x_\Delta \approx \frac{I}{S}\left(\frac{U}{S}\sin H_c - \cos H_c\cos T_c\right) + \left[\left(\frac{U^2}{S^2}+1\right)\cos T_c - \frac{UW}{S^2}\sin T_c\right]H_\Delta - \left[\left(\frac{U^2}{S^2}+1\right)\cos H_c\sin T_c + \frac{UW}{S^2}\cos H_c\cos T_c + \frac{W}{S}\sin H_c\right]V_\Delta - \frac{W}{S}\,T_\Delta$

In planar Cartesian retinal coordinates, as a function of retinal location in planar Cartesian coordinates: $x_\Delta \approx -\left(x_c\sin H_c + \cos H_c\cos T_c\right)\frac{I}{S} + \left[(x_c^2+1)\cos T_c - x_c y_c\sin T_c\right]H_\Delta + \left[y_c\sin H_c - (x_c^2+1)\cos H_c\sin T_c - x_c y_c\cos H_c\cos T_c\right]V_\Delta + y_c\,T_\Delta$

In azimuth longitude, as a function of spatial location in gaze-centered coordinates: $\alpha_\Delta \approx \frac{1}{S^2+U^2}\left\{\left[U\sin H_c - S\cos H_c\cos T_c\right]I + \left[(S^2+U^2)\cos T_c - UW\sin T_c\right]H_\Delta - \left[(S^2+U^2)\cos H_c\sin T_c + UW\cos H_c\cos T_c + WS\sin H_c\right]V_\Delta - WS\,T_\Delta\right\}$

In azimuth longitude, as a function of retinal location in azimuth-longitude/elevation-longitude coordinates: $\alpha_\Delta \approx -\frac{I}{S}\cos\alpha_c\left(\cos H_c\cos T_c\cos\alpha_c + \sin H_c\sin\alpha_c\right) + \left[\cos T_c - \sin\alpha_c\cos\alpha_c\tan\eta_c\sin T_c\right]H_\Delta - \left[\cos H_c\sin T_c + \sin\alpha_c\cos\alpha_c\tan\eta_c\cos H_c\cos T_c - \cos^2\alpha_c\tan\eta_c\sin H_c\right]V_\Delta + \cos^2\alpha_c\tan\eta_c\,T_\Delta$

In azimuth longitude, as a function of retinal location in azimuth-longitude/elevation-latitude coordinates: $\alpha_\Delta \approx -\frac{I}{S}\cos\alpha_c\left(\cos H_c\cos T_c\cos\alpha_c + \sin H_c\sin\alpha_c\right) + \left(\cos T_c - \sin\alpha_c\tan\kappa_c\sin T_c\right)H_\Delta - \left(\cos H_c\sin T_c + \cos T_c\cos H_c\sin\alpha_c\tan\kappa_c - \sin H_c\cos\alpha_c\tan\kappa_c\right)V_\Delta + \cos\alpha_c\tan\kappa_c\,T_\Delta$; equivalently, the leading factor $\frac{I}{S}\cos\alpha_c$ may be written $\frac{I}{R}\sec\kappa_c$.

In azimuth latitude, as a function of spatial location in gaze-centered coordinates: $\beta_\Delta = -\frac{I\left[UW\sin T_c\cos H_c - US\sin H_c + (W^2+S^2)\cos T_c\cos H_c\right]}{(U^2+W^2+S^2)\sqrt{W^2+S^2}} - \frac{-S\cos T_c\,H_\Delta + S\cos H_c\sin T_c\,V_\Delta + W\sin H_c\,V_\Delta + W\,T_\Delta}{\sqrt{W^2+S^2}}$

In azimuth latitude, as a function of retinal location in azimuth-latitude/elevation-longitude coordinates: $\beta_\Delta = -\frac{I}{R}\left(\sin T_c\cos H_c\sin\beta_c\sin\eta_c + \sin H_c\sin\beta_c\cos\eta_c + \cos T_c\cos H_c\cos\beta_c\right) + \cos T_c\cos\eta_c\,H_\Delta + \cos\eta_c\left(\tan\eta_c\sin H_c - \cos H_c\sin T_c\right)V_\Delta + \sin\eta_c\,T_\Delta$
Table C2
 
Expressions for vertical disparity in different coordinate systems. These are correct to first order in the interocular distance, i.e., in I/S (equivalently I/R), and in the vergence angles H_Δ, V_Δ, and T_Δ. They hold all over the retina and for any cyclopean gaze H_c, elevation V_c, or overall cycloversion T_c.
Vertical disparity: most general expressions.

In planar Cartesian retinal coordinates, as a function of spatial position in gaze-centered coordinates: $y_\Delta \approx \frac{I}{S}\left(\frac{W}{S}\sin H_c + \sin T_c\cos H_c\right) + \left[\frac{UW}{S^2}\cos T_c - \left(\frac{W^2}{S^2}+1\right)\sin T_c\right]H_\Delta + \left[\frac{U}{S}\sin H_c - \left(\frac{W^2}{S^2}+1\right)\cos H_c\cos T_c - \frac{UW}{S^2}\cos H_c\sin T_c\right]V_\Delta + \frac{U}{S}\,T_\Delta$

In planar Cartesian retinal coordinates, as a function of retinal location in planar Cartesian coordinates: $y_\Delta \approx -\left(y_c\sin H_c - \sin T_c\cos H_c\right)\frac{I}{S} + \left[x_c y_c\cos T_c - (y_c^2+1)\sin T_c\right]H_\Delta - \left[x_c\sin H_c + (y_c^2+1)\cos H_c\cos T_c + x_c y_c\cos H_c\sin T_c\right]V_\Delta - x_c\,T_\Delta$

In elevation longitude, as a function of spatial location in gaze-centered coordinates: $\eta_\Delta \approx \frac{1}{W^2+S^2}\left\{\left[W\sin H_c + S\sin T_c\cos H_c\right]I + \left[UW\cos T_c - (W^2+S^2)\sin T_c\right]H_\Delta - \left[(W^2+S^2)\cos H_c\cos T_c + UW\cos H_c\sin T_c - US\sin H_c\right]V_\Delta + US\,T_\Delta\right\}$

In elevation longitude, as a function of retinal location in azimuth-longitude/elevation-longitude coordinates: $\eta_\Delta \approx \cos^2\eta_c\left[\sin T_c\cos H_c - \sin H_c\tan\eta_c\right]\frac{I}{S} + \left[\tan\alpha_c\sin\eta_c\cos\eta_c\cos T_c - \sin T_c\right]H_\Delta - \left[\cos H_c\cos T_c + \tan\alpha_c\sin\eta_c\cos\eta_c\cos H_c\sin T_c + \tan\alpha_c\cos^2\eta_c\sin H_c\right]V_\Delta - \tan\alpha_c\cos^2\eta_c\,T_\Delta$

In elevation longitude, as a function of retinal location in azimuth-latitude/elevation-longitude coordinates: $\eta_\Delta \approx \cos^2\eta_c\left[\sin T_c\cos H_c - \sin H_c\tan\eta_c\right]\frac{I}{S} + \left[\tan\beta_c\sin\eta_c\cos T_c - \sin T_c\right]H_\Delta - \left[\cos H_c\cos T_c + \tan\beta_c\sin\eta_c\cos H_c\sin T_c + \tan\beta_c\cos\eta_c\sin H_c\right]V_\Delta - \tan\beta_c\cos\eta_c\,T_\Delta$; equivalently, the leading factor $\frac{I}{S}\cos^2\eta_c$ may be written $\frac{I}{R}\cos\eta_c\sec\beta_c$.

In elevation latitude, as a function of spatial location in gaze-centered coordinates: $\kappa_\Delta = \frac{I\left[UW\cos H_c\cos T_c + WS\sin H_c + (U^2+S^2)\sin T_c\cos H_c\right]}{(U^2+W^2+S^2)\sqrt{U^2+S^2}} - \frac{S\sin T_c\,H_\Delta + S\cos H_c\cos T_c\,V_\Delta - U\sin H_c\,V_\Delta - U\,T_\Delta}{\sqrt{U^2+S^2}}$

In elevation latitude, as a function of retinal location in azimuth-longitude/elevation-latitude coordinates: $\kappa_\Delta = \frac{I}{S}\cos\alpha_c\cos\kappa_c\left(\cos T_c\cos H_c\sin\alpha_c\sin\kappa_c - \sin H_c\cos\alpha_c\sin\kappa_c + \sin T_c\cos H_c\cos\kappa_c\right) - \sin T_c\cos\alpha_c\,H_\Delta - \left(\cos H_c\cos T_c\cos\alpha_c + \sin\alpha_c\sin H_c\right)V_\Delta - \sin\alpha_c\,T_\Delta$; equivalently, the leading factor $\frac{I}{S}\cos\alpha_c\cos\kappa_c$ may be written $\frac{I}{R}$.
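For readers who want to evaluate these expressions numerically, the planar-Cartesian rows of Tables C1 and C2 transcribe directly into MATLAB anonymous functions. The sketch below is a convenience transcription, not part of the paper's supplementary code; the variable names (IoS for the ratio I/S, HD, VD, TD, Hc, Tc) are mine, and all angles are in radians.

% First-order horizontal and vertical disparity in planar Cartesian retinal
% coordinates, as functions of cyclopean retinal location (xc, yc), the ratio
% IoS = I/S, the vergence angles HD, VD, TD and the version angles Hc, Tc.
xDelta = @(xc,yc,IoS,HD,VD,TD,Hc,Tc) ...
    -(xc.*sin(Hc) + cos(Hc).*cos(Tc)).*IoS ...
    + ((xc.^2+1).*cos(Tc) - xc.*yc.*sin(Tc)).*HD ...
    + (yc.*sin(Hc) - (xc.^2+1).*cos(Hc).*sin(Tc) - xc.*yc.*cos(Hc).*cos(Tc)).*VD ...
    + yc.*TD;
yDelta = @(xc,yc,IoS,HD,VD,TD,Hc,Tc) ...
    -(yc.*sin(Hc) - sin(Tc).*cos(Hc)).*IoS ...
    + (xc.*yc.*cos(Tc) - (yc.^2+1).*sin(Tc)).*HD ...
    - (xc.*sin(Hc) + (yc.^2+1).*cos(Hc).*cos(Tc) + xc.*yc.*cos(Hc).*sin(Tc)).*VD ...
    - xc.*TD;

Setting Tc = 0 recovers the corresponding rows of Tables C3 and C4 below.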
Table C3
 
Expressions for horizontal disparity in different coordinate systems. These are correct to first order in the interocular distance, i.e., in I/S (equivalently I/R), and in the vergence angles H_Δ, V_Δ, and T_Δ. They hold all over the retina and for any cyclopean gaze H_c or elevation V_c, provided there is no overall cycloversion, T_c = 0.
Horizontal disparity: expressions with zero overall cycloversion, $T_c = 0$.

In planar Cartesian retinal coordinates, as a function of spatial position in gaze-centered coordinates: $x_\Delta \approx \frac{I}{S}\left(\frac{U}{S}\sin H_c - \cos H_c\right) + \left(\frac{U^2}{S^2}+1\right)H_\Delta - \left[\frac{UW}{S^2}\cos H_c + \frac{W}{S}\sin H_c\right]V_\Delta - \frac{W}{S}\,T_\Delta$

In planar Cartesian retinal coordinates, as a function of retinal location in planar Cartesian coordinates: $x_\Delta \approx -\left(x_c\sin H_c + \cos H_c\right)\frac{I}{S} + (x_c^2+1)\,H_\Delta + \left(y_c\sin H_c - x_c y_c\cos H_c\right)V_\Delta + y_c\,T_\Delta$

In azimuth longitude, as a function of spatial location in gaze-centered coordinates: $\alpha_\Delta \approx \frac{1}{S^2+U^2}\left\{\left[U\sin H_c - S\cos H_c\right]I + (S^2+U^2)\,H_\Delta - \left[UW\cos H_c + WS\sin H_c\right]V_\Delta - WS\,T_\Delta\right\}$

In azimuth longitude, as a function of retinal location in azimuth-longitude/elevation-longitude coordinates: $\alpha_\Delta \approx -\frac{I}{S}\cos\alpha_c\cos(H_c-\alpha_c) + H_\Delta + \cos\alpha_c\tan\eta_c\sin(H_c-\alpha_c)\,V_\Delta + \cos^2\alpha_c\tan\eta_c\,T_\Delta$

In azimuth longitude, as a function of retinal location in azimuth-longitude/elevation-latitude coordinates: $\alpha_\Delta \approx -\frac{I}{S}\cos\alpha_c\cos(H_c-\alpha_c) + H_\Delta + \tan\kappa_c\sin(H_c-\alpha_c)\,V_\Delta + \cos\alpha_c\tan\kappa_c\,T_\Delta$; equivalently, $\alpha_\Delta \approx -\frac{I}{R}\sec\kappa_c\cos(H_c-\alpha_c) + H_\Delta + \tan\kappa_c\sin(H_c-\alpha_c)\,V_\Delta + \cos\alpha_c\tan\kappa_c\,T_\Delta$

In azimuth latitude, as a function of spatial location in gaze-centered coordinates: $\beta_\Delta = -\frac{I\left[(W^2+S^2)\cos H_c - US\sin H_c\right]}{(U^2+W^2+S^2)\sqrt{W^2+S^2}} - \frac{-S\,H_\Delta + W\sin H_c\,V_\Delta + W\,T_\Delta}{\sqrt{W^2+S^2}}$

In azimuth latitude, as a function of retinal location in azimuth-latitude/elevation-longitude coordinates: $\beta_\Delta = -\frac{I}{R}\left(\sin H_c\sin\beta_c\cos\eta_c + \cos H_c\cos\beta_c\right) + \cos\eta_c\,H_\Delta + \sin H_c\sin\eta_c\,V_\Delta + \sin\eta_c\,T_\Delta$
Table C4
 
Expressions for vertical disparity in different coordinate systems. These are correct to first order in the interocular distance I and in the vergence angles H_Δ, V_Δ, and T_Δ. They hold all over the retina and for any cyclopean gaze H_c or elevation V_c, provided there is no overall cycloversion, T_c = 0.
Vertical disparity: expressions with zero overall cycloversion, $T_c = 0$.

In planar Cartesian retinal coordinates, as a function of spatial position in gaze-centered coordinates: $y_\Delta \approx \frac{IW}{S^2}\sin H_c + \frac{UW}{S^2}\,H_\Delta + \left[\frac{U}{S}\sin H_c - \left(\frac{W^2}{S^2}+1\right)\cos H_c\right]V_\Delta + \frac{U}{S}\,T_\Delta$

In planar Cartesian retinal coordinates, as a function of retinal location in planar Cartesian coordinates: $y_\Delta \approx -\frac{I}{S}\,y_c\sin H_c + x_c y_c\,H_\Delta - \left[x_c\sin H_c + (y_c^2+1)\cos H_c\right]V_\Delta - x_c\,T_\Delta$

In elevation longitude, as a function of spatial location in gaze-centered coordinates: $\eta_\Delta \approx \frac{1}{W^2+S^2}\left\{W\sin H_c\,I + UW\,H_\Delta - \left[(W^2+S^2)\cos H_c - US\sin H_c\right]V_\Delta + US\,T_\Delta\right\}$

In elevation longitude, as a function of retinal location in azimuth-longitude/elevation-longitude coordinates: $\eta_\Delta \approx -\frac{I}{S}\sin H_c\sin\eta_c\cos\eta_c + \tan\alpha_c\sin\eta_c\cos\eta_c\,H_\Delta - \left(\cos H_c + \tan\alpha_c\cos^2\eta_c\sin H_c\right)V_\Delta - \tan\alpha_c\cos^2\eta_c\,T_\Delta$

In elevation longitude, as a function of retinal location in azimuth-latitude/elevation-longitude coordinates: $\eta_\Delta \approx -\frac{I}{S}\sin H_c\sin\eta_c\cos\eta_c + \tan\beta_c\sin\eta_c\,H_\Delta - \left(\cos H_c + \tan\beta_c\cos\eta_c\sin H_c\right)V_\Delta - \tan\beta_c\cos\eta_c\,T_\Delta$; equivalently, the first term may be written $-\frac{I}{R}\sin H_c\sin\eta_c\sec\beta_c$.

In elevation latitude, as a function of spatial location in gaze-centered coordinates: $\kappa_\Delta = \frac{I\left(UW\cos H_c + WS\sin H_c\right)}{(U^2+W^2+S^2)\sqrt{U^2+S^2}} - \frac{S\cos H_c\,V_\Delta - U\sin H_c\,V_\Delta - U\,T_\Delta}{\sqrt{U^2+S^2}}$

In elevation latitude, as a function of retinal location in azimuth-longitude/elevation-latitude coordinates: $\kappa_\Delta = -\frac{I}{R}\sin\kappa_c\sin(H_c-\alpha_c) - \cos(H_c-\alpha_c)\,V_\Delta - \sin\alpha_c\,T_\Delta$
Table C5
 
Expressions for horizontal disparity in different coordinate systems. These are correct to first order in the interocular distance, i.e., in I/S (equivalently I/R), and in the convergence angle H_Δ. They assume cycloversion, cyclovergence, and vertical vergence are all zero: T_c = T_Δ = V_Δ = 0. They hold all over the retina and for any cyclopean gaze H_c or elevation V_c.
Horizontal disparity: expressions for zero torsion and zero vertical vergence error.

In planar Cartesian retinal coordinates, as a function of retinal location in planar Cartesian coordinates: $x_\Delta \approx -\left(x_c\sin H_c + \cos H_c\right)\frac{I}{S} + (x_c^2+1)\,H_\Delta$

In planar Cartesian retinal coordinates, as a function of spatial position in gaze-centered coordinates: $x_\Delta \approx \frac{I}{S^2}\left(U\sin H_c - S\cos H_c\right) + \frac{U^2+S^2}{S^2}\,H_\Delta$

In azimuth longitude, as a function of spatial location in gaze-centered coordinates: $\alpha_\Delta \approx \frac{I}{S^2+U^2}\left(U\sin H_c - S\cos H_c\right) + H_\Delta$

In azimuth longitude, as a function of retinal location in azimuth-longitude/elevation-longitude coordinates: $\alpha_\Delta \approx -\frac{I}{S}\cos\alpha_c\cos(\alpha_c-H_c) + H_\Delta$

In azimuth longitude, as a function of retinal location in azimuth-longitude/elevation-latitude coordinates: same as above, since $\alpha_\Delta$ is then independent of retinal elevation.

In azimuth latitude, as a function of spatial location in gaze-centered coordinates: $\beta_\Delta = \frac{I\left[US\sin H_c - (W^2+S^2)\cos H_c\right]}{(U^2+W^2+S^2)\sqrt{W^2+S^2}} + \frac{S\,H_\Delta}{\sqrt{W^2+S^2}}$

In azimuth latitude, as a function of retinal location in azimuth-latitude/elevation-longitude coordinates: $\beta_\Delta = -\frac{I}{S}\cos\beta_c\cos\eta_c\left(\sin H_c\sin\beta_c\cos\eta_c + \cos H_c\cos\beta_c\right) + \cos\eta_c\,H_\Delta$
Table C6
 
Expressions for vertical disparity in different coordinate systems. These are correct to first order in the interocular distance, i.e., in I/S (equivalently I/R), and in the convergence angle H_Δ. They assume cycloversion, cyclovergence, and vertical vergence are all zero: T_c = T_Δ = V_Δ = 0. They hold all over the retina and for any cyclopean gaze H_c or elevation V_c.
Vertical disparity: expressions for zero torsion and zero vertical vergence error.

In planar Cartesian retinal coordinates, as a function of retinal location in planar Cartesian coordinates: $y_\Delta \approx y_c\left(-\frac{I}{S}\sin H_c + x_c\,H_\Delta\right)$

In planar Cartesian retinal coordinates, as a function of spatial position in gaze-centered coordinates: $y_\Delta \approx \frac{W}{S^2}\left(I\sin H_c + U\,H_\Delta\right)$

In elevation longitude, as a function of spatial location in gaze-centered coordinates: $\eta_\Delta \approx \frac{W}{W^2+S^2}\left(I\sin H_c + U\,H_\Delta\right)$

In elevation longitude, as a function of retinal location in azimuth-longitude/elevation-longitude coordinates: $\eta_\Delta \approx \sin\eta_c\cos\eta_c\left(-\frac{I}{S}\sin H_c + \tan\alpha_c\,H_\Delta\right)$

In elevation longitude, as a function of retinal location in azimuth-latitude/elevation-longitude coordinates: $\eta_\Delta \approx \sin\eta_c\left(-\frac{I}{S}\sin H_c\cos\eta_c + \tan\beta_c\,H_\Delta\right)$

In elevation latitude, as a function of spatial location in gaze-centered coordinates: $\kappa_\Delta = \frac{IW\left(U\cos H_c + S\sin H_c\right)}{(U^2+W^2+S^2)\sqrt{U^2+S^2}}$

In elevation latitude, as a function of retinal location in azimuth-longitude/elevation-latitude coordinates: $\kappa_\Delta = \frac{I}{S}\sin\kappa_c\cos\kappa_c\cos\alpha_c\sin(\alpha_c-H_c) = \frac{I}{R}\sin\kappa_c\sin(\alpha_c-H_c)$
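To get a feel for the magnitudes these expressions predict, consider an illustrative case (the numbers here are chosen for illustration, not taken from the paper): an interocular distance $I = 6.4$ cm, an object at distance $R = 50$ cm, elevation latitude $\kappa_c = 15°$, and $\alpha_c - H_c = 15°$. The last row of the table then gives, in radians,

$$\kappa_\Delta \approx \frac{6.4}{50}\,\sin 15°\,\sin 15° \approx 0.128 \times 0.259 \times 0.259 \approx 0.0086\ \text{rad} \approx 0.5°,$$

i.e., elevation-latitude vertical disparities of roughly half a degree this far from the plane of regard and the gaze direction, shrinking in proportion to $I/R$ as the object recedes.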