To reach for an object, one needs to know its egocentric distance (absolute depth). It remains an unresolved issue which signals are required by the brain to calculate this absolute depth information. We devised a geometric model of binocular 3D eye orientation and investigated the signals necessary to uniquely determine the depth of a non-foveated object accounting for naturalistic variations of eye and head orientations. Our model shows that, in the presence of noisy internal estimates of the ocular vergence angle, horizontal and vertical retinal disparities alone are insufficient to calculate the unique depth of a point-like target. Instead the brain must account for the 3D orientations of the eye and head. We tested the model in a behavioral experiment that involved reaches to targets in depth. Our analysis showed that a target with the same retinal disparity produced different estimates of reach depth that varied consistently with different eye and head orientations. The experimental results showed that subjects accurately account for this extraretinal information when they reach. In summary, when estimating the distance of point-like targets, all available signals about the object's location as well as body configuration are combined to provide accurate information about the object's distance.

*combined* eye and head movements, because head orientation influences 3D eye-in-head orientation (Crawford & Vilis, 1991) and thus the geometry of retinal projection (Misslisch, Tweed, & Hess, 2001). With the head upright, 3D eye rotations are behaviorally constrained to two dimensions, confining the eye rotation axis to a plane in space known as Listing's plane (Haslwanter, 1995; Hepp, 1990; Tweed, 1997a). Vergence causes the Listing's planes of the two eyes to rotate outward like saloon doors as a function of vergence angle (Mok, Ro, Cadera, Crawford, & Vilis, 1992; Van Rijn & Van den Berg, 1993). The additional influence of head orientation relative to gravity comes—for static head orientations—from the static vestibulo-ocular reflex (sVOR; Bockisch & Haslwanter, 2001; Haslwanter, Straumann, Hess, & Henn, 1992). The sVOR is responsible for the ocular counter-roll during head roll (head rotations around the anterior–posterior axis) and also causes Listing's plane to tilt forward or backward as a function of head pitch angle. Therefore, 3D eye orientation for a specific cyclopean gaze direction depends not only on vergence but also on head pitch and roll angles. In particular, modulations of Listing's plane change the torsional state of both eyes and thus alter the location onto which a visual stimulus is projected (Schreiber, Crawford, Fetter, & Tweed, 2001). It is well established that these various modulations of binocular eye position alter retinal disparity and binocular correspondence (Schreiber, Tweed, & Schor, 2006; Tweed, 1997b), but it is not presently clear to what degree these various states are accounted for in calculating absolute depth.

*same* binocular retinal position (2D retinal position and horizontal and vertical retinal disparities) when paired with certain combinations of 3D eye and head orientations. We further show that when the vergence angle is noisy or inaccurate (Brenner & van Damme, 1998; Collewijn & Erkelens, 1990; Foley, 1980; Harwerth, Smith, & Siderov, 1995; Viguier et al., 2001), the visual system cannot solve for depth without accounting for 3D eye and head orientations. We then validate this prediction experimentally by means of a reaching task. As a corollary, we show that in real-world situations the visual system has the capacity to use extraretinal copies of 3D eye and head orientations to decode the depth of a target from binocular retinal signals.

*T*), where *α*_{0} is the tilt angle of Listing's plane for upright head orientations, *c*_{P} is the gain for the gravity modulation of this tilt related to the pitch angle *α*_{P}, *c*_{OCR} is the gain for the static ocular counter-roll of the head roll angle *β*_{R}, *δ* is the gain for the rotation of Listing's plane due to vergence *υ*, and *E*_{H} and *E*_{V} are the horizontal and vertical version angles, respectively. The results of this analysis are summarized in Table 1.

| Subject | OCR gain *c*_{OCR} | Pitch gain *c*_{P} | Vergence gain *δ* | Pitch offset *α*_{0} | IOD |
|---|---|---|---|---|---|
| GB | 0.0815 | 0.1407 | 0.5518 | 0.8312° | 6.4 cm |
| GS | 0.1566 | 0.0741 | 0.3309 | 0.1911° | 6.3 cm |
| JC | 0.0270 | 0.0307 | 0.2917 | 0.2536° | 6.1 cm |
| KR | 0.0362 | 0.1150 | 0.2622 | 1.2554° | 6.5 cm |
| LO | 0.1482 | 0.0380 | 0.2269 | 4.9700° | 6.7 cm |

- we calculated subjects' theoretical binocular projections of a 50-cm distant target that was presented 5 deg up and to the right (on the 45 deg oblique axis) with respect to straight-ahead fixation at 1-m fixation distance, as this would be the experimental condition in the following depth estimation experiment,
- we searched for possible solutions as described in the Methods section, and
- we chose a subset of solutions that yielded different depth estimations to perform the experiment.

(*X*: lateral; *Y*: forward; *Z*: vertical axes relative to the head) for which a solution exists, i.e., a head roll angle could be found that made the retinal target projection rays intersect (black dots are identical for panels A and B).

*SD*) provides insight into vergence variability. For example, for the 80-cm fixation distance, the variability of depth estimation was 30 cm, which corresponds to more than 2° of vergence change.
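The link between fixation distance and vergence follows from simple triangulation: the vergence angle for a straight-ahead target at distance *D* is 2 · tan^{−1}(IOD / 2*D*), so a fixed amount of vergence noise maps onto a depth uncertainty that grows with distance. A short sketch illustrates this (Python; the 6.5-cm inter-ocular distance and the helper name `vergence_deg` are illustrative assumptions, not taken from the study):

```python
import math

def vergence_deg(distance_cm, iod_cm=6.5):
    """Vergence angle (deg) for a straight-ahead fixation target at
    `distance_cm`, given the inter-ocular distance `iod_cm`."""
    return 2.0 * math.degrees(math.atan((iod_cm / 2.0) / distance_cm))

# Vergence shrinks non-linearly with distance, so the same vergence
# uncertainty corresponds to a larger depth range at far fixation.
for d in (50, 80, 110):
    print(f"{d} cm -> {vergence_deg(d):.2f} deg")
```

For an IOD of 6.5 cm, fixation at 80 cm corresponds to roughly 4.7° of vergence, and fixating 30 cm nearer or farther changes vergence by well under 2° in each direction, which is why depth-from-vergence degrades rapidly beyond arm's reach.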

| Signal | Uncertainty | Depth range (cm) | Conditions |
|---|---|---|---|
| Horizontal version | 10 deg | 2.50 | Vergence = 5 deg; vertical version = const. |
| Vertical version | 10 deg | 0.18 | Vergence = 5 deg; horizontal version = const. |
| Head roll | 10 deg | 0.23 | Horizontal/vertical version = const. |
| Head pitch | 10 deg | 1.48 | Horizontal/vertical version = const. |

(*t*-test, *p* < 0.01) different from 0 for all subjects and varied between 0.6 and 3.3 (mean slope = 1.37, Figure 11A). Although the individual values seem to be far from the ideal value of 1 (see Discussion section), subjects did use extraretinal signals to modulate their depth estimate. Most subjects also showed a global underestimation of depth (Mon-Williams & Tresilian, 1999; Tresilian et al., 1999; Van Pelt & Medendorp, 2008; Viguier et al., 2001).

If the eyes fixate a point *F* = (*F*_{X}, *F*_{Y}, *F*_{Z}) (expressed in cyclopean-eye-centered, head-fixed coordinates), then the fixation lines *L*_{i}^{G} of the right and left eyes can be written as simple geometrical expressions, *L*_{R}^{G} = *e*_{R,0} + *s* · *g*_{R} and *L*_{L}^{G} = *e*_{L,0} + *u* · *g*_{L}, where *s* and *u* are parameters, *g*_{R} and *g*_{L} are the unit gaze direction vectors of the right and left eyes, respectively, and *e*_{R,0} and *e*_{L,0} are the locations of the right and left eyes in cyclopean-eye-centered, head-fixed coordinates, i.e., offset by half the inter-ocular distance to either side along the lateral axis.
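These parametric fixation lines can be sketched numerically; walking along each eye's line by the eye-to-target distance recovers the fixation point (illustrative Python with an assumed coordinate convention and a hypothetical fixation point; the helper names are ours):

```python
import math

def sub(a, b): return [x - y for x, y in zip(a, b)]
def norm(v): return math.sqrt(sum(x * x for x in v))
def unit(v): n = norm(v); return [x / n for x in v]

# Hypothetical fixation point F in cyclopean-eye-centered,
# head-fixed coordinates (X lateral, Y forward, Z vertical), in cm.
F = [5.0, 100.0, 10.0]
IOD = 6.5
e_R0 = [+IOD / 2, 0.0, 0.0]   # right-eye location (sign convention assumed)
e_L0 = [-IOD / 2, 0.0, 0.0]   # left-eye location

g_R = unit(sub(F, e_R0))      # unit gaze direction, right eye
g_L = unit(sub(F, e_L0))      # unit gaze direction, left eye

# Stepping along the right eye's line by the eye-to-target
# distance s recovers the fixation point F:
s = norm(sub(F, e_R0))
P = [e + s * g for e, g in zip(e_R0, g_R)]
print(P)  # ~ [5.0, 100.0, 10.0]
```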

The binocular retinal position *p*_{T} and retinal disparity *d*_{T} (both arbitrarily expressed in eye-centered, eye-fixed coordinates, also called retinal coordinates, using the Fick convention, i.e., as horizontal and vertical angles, where *ϑ* is the horizontal and *ϕ* is the vertical component) are associated with a potential reach target at the cyclopean-eye-centered, eye-fixed location *T*. The lines *L*_{i}^{T} defined by the projection of the target onto both retinas can then be written as parametric lines through the two eye centers.

The direction vectors *t*_{i} (*i* stands for *R* or *L*) are calculated as the cyclopean-eye-centered, head-fixed direction vector of the target (*t*_{i,PP}, standing for Primary Position). Each *t*_{i,PP} then has to be translated and rotated into the right and left eye orientations. This has to account for the 3D eye-in-head orientation and will be developed in the next section. Importantly, the two target rays *L*_{R}^{T} and *L*_{L}^{T} must intersect, which means that their spatial distance *D*_{T} must be zero (see next section). If the two target rays *L*_{R}^{T} and *L*_{L}^{T} intersect, then the target position *T* = *e*_{R,0} + *v* · *t*_{R} in head-fixed, eye-centered space can be found using Equation A4, where the parameter *v* follows from the intersection of the two lines.
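This triangulation step can be cross-checked with the textbook closest-point computation between two 3D lines; when the rays truly intersect, both closest points coincide with the target (an illustrative Python sketch of the standard formula, not necessarily the paper's exact parameterization in Equation A4):

```python
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return [x - y for x, y in zip(a, b)]

def ray_intersection(eR, dR, eL, dL):
    """Closest points of the two lines eR + v*dR and eL + u*dL.
    For intersecting rays, both returned points equal the intersection."""
    w = sub(eR, eL)
    a, b, c = dot(dR, dR), dot(dR, dL), dot(dL, dL)
    d, e = dot(dR, w), dot(dL, w)
    den = a * c - b * b          # zero only for parallel lines
    v = (b * e - c * d) / den    # parameter along the right-eye ray
    u = (a * e - b * d) / den    # parameter along the left-eye ray
    pR = [p + v * q for p, q in zip(eR, dR)]
    pL = [p + u * q for p, q in zip(eL, dL)]
    return pR, pL

# Two rays that intersect at a hypothetical target (0, 50, 5) cm:
T = [0.0, 50.0, 5.0]
eR, eL = [3.25, 0.0, 0.0], [-3.25, 0.0, 0.0]
dR, dL = sub(T, eR), sub(T, eL)
print(ray_intersection(eR, dR, eL, dL)[0])  # ~ [0.0, 50.0, 5.0]
```

With noisy signals the two rays are generally skew, and the midpoint between the two closest points is the natural depth estimate; the distance between them corresponds to *D*_{T} above.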

To transform *t*_{i,PP} into a head-fixed representation, we need to account for the binocular version of Listing's law. Listing's law (Hepp, 1990) constrains the three-dimensional eye-in-head rotation vectors to a two-dimensional plane, called Listing's plane. In addition, Listing's law is modulated by pitch and roll head orientations; these modulations are known as the gravity pitch of Listing's plane and the ocular counter-roll, respectively.

We used dual quaternions to rotate and translate *t*_{i,PP} to account for the right and left eye's 3D position and orientation. The advantage of using dual quaternions over any other formalism is that they allow us to describe eye rotation independently of rotation sequences. In addition, dual quaternions provide a simple way of calculating the skew distance between two lines in 3D space. Finally, dual quaternions provide certain mathematical and numerical advantages over possible alternatives (Aspragathos & Dimitros, 1998). However, the use of dual quaternions is an arbitrary choice and any other formalism would give the same results.

A dual quaternion consists of two quaternions, *Q* and *Q*_{0}, of which one is multiplied by a duality operator *ɛ*, i.e., *Q* + *ɛQ*_{0}, where *Q* describes the rotational component and *Q*_{0} implements the translation. A dual quaternion can also be represented as an eight-dimensional vector and encodes a screw motion, i.e., a rotation by an angle *θ* around an axis combined with a translation *d* along that axis. The line along *t*_{i,PP} and passing through the eye centers *e*_{i,0} can be represented by the dual quaternion line *L*_{i,0} = [0 *t*_{i,PP} 0 *e*_{i,0} × *t*_{i,PP}]^{T}. Using the appropriate dual quaternion *Q*_{eh,i} representing the 3D eye-in-head rotation, we can then rotate the lines *L*_{i,0} according to gaze direction and obtain the eye-centered, head-fixed target lines *L*_{i} = *Q*_{eh,i} *L*_{i,0} *Q*_{eh,i}^{C}, where the dual quaternion conjugate is defined component-wise as (*Q* + *ɛQ*_{0})^{C} = *Q*^{C} + *ɛQ*_{0}^{C}, with *Q*^{C} the quaternion conjugate.
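For the pure-rotation case, the effect of this sandwich product on a dual quaternion line can be reproduced with ordinary quaternion algebra: a rotation about the origin maps both the direction part and the moment part of the line through the same rotation. The following is a simplified Python sketch of that equivalence (our own helper names, not the paper's implementation):

```python
import math

def qmul(p, q):
    """Hamilton product of two quaternions (w, x, y, z)."""
    pw, px, py, pz = p; qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qrot(q, v):
    """Rotate 3-vector v by unit quaternion q (sandwich q v q*)."""
    w, x, y, z = qmul(qmul(q, (0.0, *v)), (q[0], -q[1], -q[2], -q[3]))
    return (x, y, z)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

# Pluecker-style line through an eye center e with direction d: (d, e x d),
# matching the direction and moment parts of the dual quaternion line above.
e = (3.25, 0.0, 0.0)
d = (0.0, 1.0, 0.0)                 # looking straight ahead (+Y forward)
moment = cross(e, d)

# A 90-degree eye rotation about the vertical (Z) axis:
half = math.radians(90.0) / 2.0
q = (math.cos(half), 0.0, 0.0, math.sin(half))

# For a pure rotation, the dual quaternion sandwich reduces to rotating
# the direction and the moment separately with the ordinary quaternion:
d_rot = qrot(q, d)
m_rot = qrot(q, moment)
```

As a check, a line through (3.25, 0, 0) looking along +*Y*, rotated 90° about *Z*, ends up pointing along −*X* while its moment (0, 0, 3.25) is unchanged, since the moment lies on the rotation axis.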

The dual quaternion *Q*_{eh,i} is composed of the binocular Listing's law (also called L2) quaternion *Q*_{L2,i} and the ocular counter-roll operator *Q*_{OCR}, i.e., *Q*_{eh,i} is the quaternion composition of *Q*_{OCR} and *Q*_{L2,i}.

The quaternion *Q*_{LP} accounted for the static vestibulo-ocular reflex (sVOR) by inducing a so-called gravity pitch (*α*_{P}) of the normal vector defining Listing's plane. To compute Listing's plane in the binocular extension (*Q*_{LP2,i}), we accounted for the ocular vergence angle *υ* (cos *υ* = *g*_{R} · *g*_{L}). Vergence then rotates the Listing's planes out "like saloon doors" (rotation *Q*_{V,i}). We hypothesize that this rotation is performed in head-fixed coordinates, but this is not known to date. This allowed us to compute the rotation quaternion *Q*_{L2,i} that brings the eyes from the primary position (*Q*_{PP,i}) into the appropriate Listing's plane.

The tilt angle of Listing's plane for upright head orientations was *α*_{0} = 5° and the gain for the gravity modulation of this tilt related to the pitch angle *α*_{P} was *c*_{P} = 0.05 (Bockisch & Haslwanter, 2001; Haslwanter et al., 1992). The gain for the static ocular counter-roll of the head roll angle *β*_{R} was *c*_{OCR} = 0.05 (Bockisch & Haslwanter, 2001; Haslwanter et al., 1992). The gain *δ*_{i} for the rotation of Listing's plane due to vergence was 1/4, with *sign*(*δ*_{i}) = +1 for the left eye and −1 for the right eye. However, different values ranging between 1/6 and 1/2 have been reported in the literature (Mok et al., 1992; Tweed, 1997b; Van Gisbergen & Minken, 1994; Van Rijn & Van den Berg, 1993).
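With these gains, the three modulations of each eye's Listing's plane can be illustrated numerically (a Python sketch under the stated parameter values; `listing_modulations` is our hypothetical helper and the example head orientations are arbitrary):

```python
# Illustrative values from the text: 5-deg baseline tilt, pitch and
# OCR gains of 0.05, vergence gain 1/4 with opposite signs per eye.
alpha0, cP, cOCR, delta = 5.0, 0.05, 0.05, 0.25

def listing_modulations(head_pitch, head_roll, vergence, eye):
    """Orientation changes of one eye's Listing's plane (all in deg).
    `eye` is 'L' or 'R'; the saloon-door sign convention follows the
    text (+ for the left eye, - for the right eye)."""
    tilt = alpha0 + cP * head_pitch     # gravity pitch of the plane
    ocr = cOCR * head_roll              # static ocular counter-roll
    sign = +1.0 if eye == 'L' else -1.0
    saloon = sign * delta * vergence    # vergence-driven rotation
    return tilt, ocr, saloon

print(listing_modulations(20.0, 10.0, 8.0, 'L'))  # (6.0, 0.5, 2.0)
```

For example, a 20° head pitch with 10° head roll and 8° of vergence tilts the left eye's plane by 6°, counter-rolls it by 0.5°, and swings it out by 2°; the right eye's plane swings the opposite way.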

The relative orientation and position of the two target lines follow from the dual quaternion product *L*_{R} *L*_{L}^{−1}. The first component of this product gives the angle *θ* between the two lines, i.e., *θ* = 2 · cos^{−1} of the first component, and the skew distance *d* of these lines can be computed from the fifth component, i.e., *d* = −2 · (fifth component)/sin(*θ*/2).
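The two quantities this product encodes, the angle and the skew distance between the target rays, can be cross-checked with plain vector algebra (an illustrative Python sketch; helper names are ours, and the degenerate parallel-line case is not handled):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def norm(v): return math.sqrt(dot(v, v))

def line_angle_and_distance(p1, d1, p2, d2):
    """Angle (deg) and skew distance between lines p1+t*d1 and p2+t*d2,
    computed with ordinary vector algebra (the same two quantities the
    dual quaternion product encodes in its components)."""
    theta = math.degrees(math.acos(dot(d1, d2) / (norm(d1) * norm(d2))))
    n = cross(d1, d2)                       # common perpendicular direction
    dist = abs(dot(sub(p2, p1), n)) / norm(n)
    return theta, dist

# Two perpendicular skew lines separated by 2 units along Z:
theta, dist = line_angle_and_distance((0, 0, 0), (1, 0, 0),
                                      (0, 0, 2), (0, 1, 0))
print(theta, dist)  # ~90 deg apart, skew distance 2.0
```

A nonzero skew distance here plays the role of *D*_{T}: retinal signals paired with the wrong 3D eye or head orientation produce rays that fail to intersect.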

To compute how a target point *T*_{C} projects onto the left and right retinas, we computed the (monocular) dual quaternion of cyclopean eye rotation *Q*_{eh,C} to position the target into a cyclopean-eye-centered, head-fixed reference frame. Then we projected this target onto both retinas to calculate the individual right and left eye retinal positions as well as the retinal disparity associated with the cyclopean retinal position. The point *T*_{C}, represented by the dual quaternion *T*_{C} = [1 0 0 0 0 *T*_{C}]^{T}, can then be transformed with *Q*_{eh,C} into the cyclopean-eye-centered, head-fixed reference frame, i.e., *T*_{H} = *Q*_{eh,C} *T*_{C} *Q*_{eh,C}^{DC}, where the dual quaternion double conjugate is *Q*^{DC} = *Q*^{C} − *ɛQ*_{0}^{C} with the quaternion conjugate *Q*^{C}. The projection of the cyclopean-eye-centered, head-fixed target *T*_{H} onto both retinas is then written *T*_{E,i} = *Q*_{eh,i}^{DC} *T*_{H} *Q*_{eh,i}. To extract the translational part from a dual quaternion, one can use the negative Hamiltonian operator *H*_(*Q*) of the quaternion *Q*.

*IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics*, 28, 135–145. [PubMed] [CrossRef]

*Vision Research*, 39, 1143–1170. [PubMed] [CrossRef] [PubMed]

*Journal of Vision*, 1, (2):1, 55–79, http://journalofvision.org/1/2/1/, doi:10.1167/1.2.1. [PubMed] [Article] [CrossRef] [PubMed]

*Science*, 285, 257–260. [PubMed] [CrossRef] [PubMed]

*Cerebral Cortex*, 13, 1009–1022. [PubMed] [Article] [CrossRef] [PubMed]

*Proceedings of the Royal Society of London B: Biological Sciences*, 237, 445–469. [PubMed] [CrossRef]

*Journal of Comparative Neurology*, 299, 421–445. [PubMed] [CrossRef] [PubMed]

*Journal of Vision*, 7, (5):4, 1–22, http://journalofvision.org/7/5/4/, doi:10.1167/7.5.4. [PubMed] [Article] [CrossRef] [PubMed]

*Vision Research*, 41, 2127–2137. [PubMed] [CrossRef] [PubMed]

*Vector and tensor analysis*. London: John Wiley and Sons.

*Vision Research*, 38, 493–498. [PubMed] [CrossRef] [PubMed]

*Nature*, 375, 232–235. [PubMed] [CrossRef] [PubMed]

*Consciousness and Cognition*, 7, 438–453. [PubMed] [CrossRef] [PubMed]

*Proceedings of the London Mathematical Society*, 4, 381–395.

*Vision Research*, 37, 1049–1069. [PubMed] [CrossRef] [PubMed]

*Journal of Neurophysiology*, 92, 10–19. [PubMed] [Article] [CrossRef] [PubMed]

*Annual Review of Neuroscience*, 24, 203–238. [PubMed] [CrossRef] [PubMed]

*Nature*, 394, 677–680. [PubMed] [CrossRef] [PubMed]

*Cerebral Cortex*, 12, 991–997. [PubMed] [Article] [CrossRef] [PubMed]

*Proceedings of the National Academy of Sciences of the United States of America*, 103, 1141–1146. [PubMed] [Article] [CrossRef] [PubMed]

*Vision Research*, 38, 2999–3018. [PubMed] [CrossRef] [PubMed]

*Psychological Review*, 87, 411–434. [PubMed] [CrossRef] [PubMed]

*Journal of Neurophysiology*, 91, 2670–2684. [PubMed] [Article] [CrossRef] [PubMed]

*Perception*, 34,

*American Journal of Psychology*, 85, 477–497. [PubMed] [CrossRef] [PubMed]

*Progress in Neurobiology*, 55, 191–224. [PubMed] [CrossRef] [PubMed]

*Current Opinion in Neurobiology*, 14, 203–211. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 48, 1488–1496. [PubMed] [CrossRef] [PubMed]

*Elements of quaternions*. Cambridge, UK: Cambridge University Press.

*Vision Research*, 35, 1755–1770. [PubMed] [CrossRef] [PubMed]

*Neuroimage*, 10, 200–208. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 35, 1727–1739. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 32, 1341–1348. [PubMed] [CrossRef] [PubMed]

*Experimental Brain Research*, 132, 179–194. [PubMed] [CrossRef] [PubMed]

*Communications in Mathematical Physics*, 132, 285–292. [CrossRef]

*International Journal of Computer Vision*, 4, 59–78. [CrossRef]

*Binocular vision and stereopsis*. Oxford, UK: Oxford University Press.

*Experimental Brain Research*, 100, 509–514. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 31, 1351–1360. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 33, 813–826. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 45, 2339–2345. [PubMed] [CrossRef] [PubMed]

*Journal of Vision*, 5, (2):2, 103–115, http://journalofvision.org/5/2/2/, doi:10.1167/5.2.2. [PubMed] [Article] [CrossRef]

*Journal of Neurophysiology*, 92, 1586–1596. [PubMed] [Article] [CrossRef] [PubMed]

*Journal of Vision*, 5, (10):5, 808–822, http://journalofvision.org/5/10/5/, doi:10.1167/5.10.5. [PubMed] [Article] [CrossRef]

*Mathematical Spectrum*, 17, 42–48.

*Perception*, 26, 1147–1158. [PubMed] [CrossRef] [PubMed]

*Nature*, 297, 376–378. [PubMed] [CrossRef] [PubMed]

*Journal of Neuroscience*, 21,

*Vision Research*, 32, 2055–2064. [PubMed] [CrossRef] [PubMed]

*Perception*, 28, 167–181. [PubMed] [CrossRef] [PubMed]

*Ergonomics*, 43, 391–404. [PubMed] [CrossRef] [PubMed]

*Experimental Brain Research*, 133, 407–413. [PubMed] [CrossRef] [PubMed]

*Zur vergleichenden physiologie des gesichtssinnes des menschen und der thiere*. Leipzig, Germany: Cnobloch.

*Journal of Neurophysiology*, 93, 1823–1826. [PubMed] [Article] [CrossRef] [PubMed]

*Archives of Ophthalmology*, 20, 604–623. [CrossRef]

*Current Opinion in Neurobiology*, 8, 509–515. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 42, 1307–1324. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 34, 1595–1604. [PubMed] [CrossRef] [PubMed]

*Perception*, 26, 599–612. [PubMed] [CrossRef] [PubMed]

*Cerebral Cortex*, 4, 314–329. [PubMed] [CrossRef] [PubMed]

*Experimental Brain Research*, 156, 212–223. [PubMed] [CrossRef] [PubMed]

*Perception & Psychophysics*, 5, 317–320. [CrossRef]

*Perception & Psychophysics*, 22, 400–407. [CrossRef]

*Perception*, 24, 155–179. [PubMed] [CrossRef] [PubMed]

*Neuron*, 33, 143–149. [PubMed] [Article] [CrossRef] [PubMed]

*Annals of the New York Academy of Sciences*, 956, 297–305. [PubMed] [CrossRef] [PubMed]

*Nature*, 410, 819–822. [PubMed] [CrossRef] [PubMed]

*Journal of Vision*, 6, (1):6, 64–74, http://journalofvision.org/6/1/6/, doi:10.1167/6.1.6. [PubMed] [Article] [CrossRef]

*Current Opinion in Neurobiology*, 10, 747–754. [PubMed] [CrossRef] [PubMed]

*An elementary treatise on quaternions*. Cambridge, UK: Cambridge University Press.

*Proceedings of the Royal Society B: Biological Sciences*, 266, 39–44. [PubMed] [Article] [CrossRef]

*Neuroscience Research*, 51, 221–229. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 37, 1939–1951. [PubMed] [CrossRef]

*Current Biology*, 12, R764–R766. [PubMed] [Article] [CrossRef] [PubMed]

*Vision Research*, 43, 307–319. [PubMed] [CrossRef] [PubMed]

*Information processing underlying gaze control*. Oxford, UK: Pergamon Press.

*Journal of Neurophysiology*, 99, 2281–2290. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 33, 691–708. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 32, 1875–1883. [PubMed] [CrossRef] [PubMed]

*Annalen der Physik*, 58, 233–253. [CrossRef]

*Perception*, 30, 115–124. [PubMed] [CrossRef] [PubMed]