Research Article  |   May 2007
Computations for geometrically accurate visually guided reaching in 3-D space
Author Affiliations
  • Gunnar Blohm
    Centre for Vision Research, York University, Toronto, Canada
    Canadian Institutes of Health Research, Group for Action and Perception,
    Centre for Systems Engineering and Applied Mechanics, Université catholique de Louvain, Louvain-la-Neuve, Belgium. http://www.inma.ucl.ac.be/~blohm; blohm@csam.ucl.ac.be
  • J. Douglas Crawford
    Centre for Vision Research, York University, Toronto, Canada
    Canadian Institutes of Health Research, Group for Action and Perception,
    Departments of Psychology, Biology and Kinesiology & Health Sciences, York University, Toronto, Canada. http://www.yorku.ca/jdc/; jdc@yorku.ca
Journal of Vision May 2007, Vol. 7(5), 4. doi: https://doi.org/10.1167/7.5.4
Abstract

A fundamental question in neuroscience is how the brain transforms visual signals into accurate three-dimensional (3-D) reach commands, but surprisingly this has never been formally modeled. Here, we developed such a model and tested its predictions experimentally in humans. Our visuomotor transformation model used visual information about current hand and desired target positions to compute the visual (gaze-centered) desired movement vector. It then transformed these eye-centered plans into shoulder-centered motor plans using extraretinal eye and head position signals accounting for the complete 3-D eye-in-head and head-on-shoulder geometry (i.e., translation and rotation). We compared actual memory-guided reaching performance to the predictions of the model. By removing extraretinal signals (i.e., eye–head rotations and the offset between the centers of rotation of the eye and head) from the model, we developed a compensation index describing how accurately the brain performs the 3-D visuomotor transformation for different head-restrained and head-unrestrained gaze positions as well as for eye and head roll. Overall, subjects did not show errors predicted when extraretinal signals were ignored. Their reaching performance was accurate and the compensation index revealed that subjects accounted for the 3-D visuomotor transformation geometry. This was also the case for the initial portion of the movement (before proprioceptive feedback) indicating that the desired reach plan is computed in a feed-forward fashion. These findings show that the visuomotor transformation for reaching implements an internal model of the complete eye-to-shoulder linkage geometry and does not only rely on feedback control mechanisms. We discuss the relevance of this model in predicting reaching behavior in several patient groups.

Introduction
Reaching out for a seen target requires a transformation of visual information into motor commands suitable for the arm. There are several stages involved in this transformation. First, the brain must construct an early internal representation of hand and target position relative to the line of sight and from binocular vision. Second, hand and target position in this early internal representation must be compared and transformed into a code that is independent of the line of sight and adapted to specify the desired reach relative to the effector. Third, the dynamic muscle contractions to move the arm have to be generated from this effector-related desired movement vector. Here, we will investigate the geometric issues related to the second stage, that is, the three-dimensional (3-D) aspects of the visuomotor transformation for reaching. 
It is thought that the posterior parietal cortex (PPC) represents the position of objects in space relative to their location on the retina (Batista, Buneo, Snyder, & Andersen, 1999; Crawford, Medendorp, & Marotta, 2004). This is called a gaze-centered representation of the visual environment and depends on the orientation of the eyes and the head (Medendorp, Goltz, & Vilis, 2005; Medendorp, Goltz, Vilis, & Crawford, 2003; Pouget, Ducom, Torri, & Bavelier, 2002). Substantial evidence has been amassed suggesting that these gaze-centered codes are spatially updated across eye movements (Henriques, Klier, Smith, Lowy, & Crawford, 1998; Khan, Pisella, Rossetti, Vighetto, & Crawford, 2005; Khan, Pisella, Vighetto, et al., 2005; Medendorp & Crawford, 2002; Medendorp, Tweed, & Crawford, 2003; Merriam, Genovese, & Colby, 2003) and play an important role in the memory of reach targets and movement planning (Batista et al., 1999; Buneo, Jarvis, Batista, & Andersen, 2002; Crawford et al., 2004). 
To generate a reach, it is necessary for the brain to compare the current hand and target positions within the same frame of reference ( Figure 1A) to compute the desired movement vector. Although many studies address this topic, not much is known about how gaze-centered parietal signals are transformed into shoulder-centered frontal signals or how the comparison between hand and target position occurs. If this comparison was done in shoulder-centered coordinates, the gaze-centered parietal signals would first have to be transformed into shoulder-centered coordinates by combining them with current eye and head positions (Henriques et al., 1998; Snyder, 2000; Soechting, Tillery, & Flanders, 1991). However, it has recently been shown that hand position signals—even in the absence of vision—are represented in gaze-centered coordinates within PPC (Figure 1B; Buneo et al., 2002). This has led to the suggestion that the hand-target comparison takes place in gaze-centered coordinates at the level of PPC (Andersen & Buneo, 2002; Batista et al., 1999). 
Figure 1
 
The visuomotor transformation problem. (A) The gaze-centered visual input has to be transformed into a shoulder-centered reach plan taking into account the complete body geometry, including eye and head orientation. (B) Schematic representation of the human brain areas involved in the visuomotor transformation process. VC: visual cortex; PPC: posterior parietal cortex; CS: central sulcus; PCS: precentral sulcus; S1: primary somatosensory area for arm movements (proprioception); M1: primary motor cortex area for arm movements; PMv: ventral premotor cortex; PMd: dorsal premotor cortex.
However, even if both the hand and target are represented and compared in gaze-centered coordinates, this gaze-centered code must still be transformed into a shoulder-centered plan for the desired reaching command in 3-D space to be accurate (Crawford et al., 2004). The projection of visual targets onto the retina depends not only on the direction of gaze, but also on behavioral constraints such as Listing's law (Hepp, 1990; von Helmholtz, 1867/1924; Westheimer, 1957) and Donders' law (Crawford, Martinez-Trujillo, & Klier, 2003; Donders, 1848; Tweed, 1997), which establish the complete 3-D orientation of the eyes and head for a given viewing direction (Tweed, 1997). These orientations determine both the spatial pattern of retinal stimulation for a given visual target and the geometric relationship between this retinal input and the desired hand movement vector required to reach toward the target. This is further complicated by the fact that the centers of rotation of the eye, head, and shoulder do not align and shift relative to each other with each head rotation (Henriques & Crawford, 2002; Henriques, Medendorp, Gielen, & Crawford, 2003). 
In an approach based on the geometry of translations, a “direct transformation model” that does not take eye–head rotations into account would render the remaining sensorimotor transformations somewhat trivial because movement vectors in eye-, head-, or shoulder-centered coordinates would all be equivalent (Buneo et al., 2002). However, in the real world where these bodies rotate as well as translate, the representation must be different in each of these frames in order to specify an invariant position in space (Crawford et al., 2004). To date, no one has modeled the implications of this for the neural control of hand movements, so it is not known how quantitatively important these transformations are for ordinary reach movements. 
An implicit or explicit shoulder-centered representation of the motor plan is eventually needed because the shoulder is the insertion point of the arm and the arm movement direction must be specified relative to the spatial position and orientation of the shoulder. Such a reach plan is possibly encoded in the premotor (PM) cortex ( Figure 1B). This hypothesis is consistent with findings that show a representation in PM of wrist movement direction in space independent of wrist orientation (Kakei, Hoffman, & Strick, 2001, 2003; Scott, 2001), a gradual transformation of shoulder-centered to muscle related coordinates between PM and the motor cortex (Crammond & Kalaska, 1996; Graziano, Hu, & Gross, 1997; Kalaska, Scott, Cisek, & Sergio, 1997), and an independence of many visual receptive fields with respect to eye position in PM (Fogassi et al., 1996). 
In the field of robotics, it is commonplace to model the 3-D transformations required to guide an effector by a sensory input. Surprisingly, the visuomotor transformation for reaching from gaze to shoulder-centered coordinates in human or nonhuman primates has never been modeled in its whole complexity. Comparable transformations have been modeled for the much simpler oculomotor system with considerable success for behavioral and physiological predictions (Crawford & Guitton, 1997; Crawford, Henriques, & Vilis, 2000; Dieterich, Glasauer, & Brandt, 2003; Glasauer, 2003; Glasauer, Dieterich, & Brandt, 1999; Klier, Wang, & Crawford, 2001; Quaia & Optican, 1998; Quaia, Optican, & Goldberg, 1998; Tweed, Haslwanter, & Fetter, 1998), but no such models exist in the arm movement control literature. Without these foundations, many of the assumptions and hypotheses held in current concepts of motor planning and control cannot be formally tested. In addition, none of the previous studies have examined the implications of the visuomotor transformation for reach depth, that is, the radial distance from the body. 
In natural reaching behavior, feed-forward computations of the desired movement are combined with feedback control during the motor execution stage (Desmurget & Grafton, 2000). Here, we propose a theoretical framework describing the early, feed-forward 3-D visuomotor transformation for reach planning before feedback control is implemented. We describe the nonlinear (Pouget & Sejnowski, 1997) geometrical computations that the brain has to perform in order to convert gaze-centered reach plans into shoulder-centered motor vectors. This model accounts for the spherical shape of the retina and the complete 3-D properties of eye and head rotations (including their interaction). We performed a reaching experiment to test the model's principal predictions, that is, that eye orientation, head orientation, eye–head–shoulder linkage, and related nonlinearities are taken into account by the brain. This first 3-D model of the early visuomotor transformation for reaching is potentially the starting point for more complex and physiologically realistic models and may also be useful to study the impaired system in patients. 
Methods
General mathematical approach
We modeled the early 3-D visuomotor transformation for reach movements with respect to different 3-D eye and head configurations. We assumed that both the reach target and the current hand position are coded in gaze-centered coordinates (Buneo et al., 2002; Vetter, Goodbody, & Wolpert, 1999). Therefore, we first projected the actual hand position and the desired reaching target position onto the spherical retina, and we compared these in retinal coordinates to compute the gaze-centered “motor error command” for the reach plan (Figure 2). Based on current knowledge from psychophysical studies, we used a single 3-D (cyclopean; e.g., Khokhotva, Ono, & Mapp, 2005; Ono & Barbeito, 1982) retinal “motor error” command. This 3-D retinal error was computed from a two-dimensional (2-D) angular difference between target and hand positions and includes a third component, that is, the radial distance between hand and target to specify depth. Including the radial distance into the retinal motor error allowed us to analyze the depth component of the visuomotor transformation for arm movements. 
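As an illustration of this first step, the following minimal Python sketch assembles such a 3-D retinal motor error from hand and target positions expressed in eye-centered Cartesian coordinates. The coordinate convention (x right, y forward along the line of sight, z up) and the function names are assumptions for illustration, not the authors' implementation.

import numpy as np

def retinal_coordinates(p_eye):
    """Angular eccentricity (azimuth, elevation, deg) and radial distance of a
    point given in eye-centered Cartesian coordinates (assumed convention:
    x right, y forward along the line of sight, z up)."""
    x, y, z = p_eye
    azimuth = np.degrees(np.arctan2(x, y))                  # horizontal eccentricity
    elevation = np.degrees(np.arctan2(z, np.hypot(x, y)))   # vertical eccentricity
    distance = np.linalg.norm(p_eye)                        # radial (depth) component
    return np.array([azimuth, elevation, distance])

def retinal_motor_error(target_eye, hand_eye):
    """Gaze-centered desired movement vector: 2-D angular difference between
    target and hand plus the radial-distance (depth) difference."""
    return retinal_coordinates(target_eye) - retinal_coordinates(hand_eye)

# Example: target 30 cm ahead and 10 cm left, hand 40 cm ahead and 15 cm right.
err = retinal_motor_error(np.array([-0.10, 0.30, 0.0]),
                          np.array([0.15, 0.40, 0.0]))
print(err)   # [d_azimuth (deg), d_elevation (deg), d_distance (m)]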
Figure 2
 
The visuomotor transformation model. The retinal image of the reaching target and the hand is translated and rotated to construct the motor plan. Although rotations need extraretinal information of current 3-D eye and head posture, translations can be performed using learned (constant) lengths of body segments. Green solid arrows: the full model; dotted red arrows: the model ignoring the offset of rotation centers; dashed red arrows: the model ignoring eye–head rotation signals.
The second, much more complex step consisted of using the gaze-centered reach plan to calculate the motor plan in shoulder-centered coordinates. Figure 2 shows schematically how we computed the visuomotor transformation. The model transforms the retinal desired movement vector into a shoulder-centered motor plan, using extraretinal signals of eye and head orientation and an internal model of the eye–head–shoulder linkage geometry. This transformation consisted of stepwise rotations and translations of the movement vector along the body joints ( Figure 2, green arrows). For example, the retinal desired movement vector was first rotated by the inverse of eye-in-head position to provide an eye-centered, head-fixed motor plan ( Figure 2). (Note that “centered” indicates the location of the origin of the reference frame, whereas “fixed” describes the orientation of the axes.) This motor plan was then translated to the head-on-neck attachment point. 
The offset between the centers of rotation of the eyes, head, and shoulder corresponds to the translational part of the reference frame transformation. The translational component, however, only has an effect if rotations are taken into account; for example, when the head rotates, the eyes translate in space. If such rotations did not occur, movement vectors would be invariant to translation, but this is not how our anatomy works. These operations were repeated until a shoulder-centered, shoulder-fixed motor plan was obtained (Figure 2; we simply called this a shoulder-centered motor plan). Because we assume that both hand and target locations are initially coded in gaze-centered coordinates, transforming the desired movement vector is equivalent to transforming the hand and target locations separately. 
In order to develop a quantitative index of compensation for extraretinal signals (i.e., to analyze to what extent the brain accounts for extraretinal signals), we also computed predictions when eye and head rotation signals were not used (i.e., rotation angles were set to 0°, Figure 2, red dashed arrows; predictions used in Figures 4, 5, 6, 7, and 10) or when the offset between the centers of eye and head rotation was ignored ( Figure 2, red dotted arrows; predictions used in Figure 9). This allowed us to create compensation indices ranging from zero (no compensation) to one (full compensation). The output of the visuomotor transformation computation was a hand movement vector in Cartesian shoulder-centered coordinates. We used these predictions to evaluate the importance of the different components and to assess to what extent extraretinal information was used in the 3-D visuomotor transformation. In the Discussion section, we will consider the possible significance of these variations in testing clinical populations. 
The model was implemented using Clifford's dual quaternion formalism for a unified mathematical description of translations and rotations (Clifford, 1873; Study, 1891). Dual quaternions are usually used in robotic control (Angeles, 1998; Perez & McCarthy, 2004) and image processing (Juttler, 1994) because of their advantageous mathematical (Clifford, 1873; Study, 1891) and numerical properties (Aspragathos & Dimitros, 1998). More important, the use of dual quaternions enabled us to combine the benefits of quaternions for the 3-D rotational geometry with a translation operation necessary in our model. In the dual quaternion formalism, any combined translation–rotation operation $\hat{Q}_i$ can be written as a rotation of angle $\theta_i$ around the rotation axis $\vec{r}_i$ (applied at point $\vec{a}_i$) and a translation of length $d_i$ along $\vec{r}_i$:

$\hat{Q}_i = \hat{Q}(\theta_i, \vec{r}_i, d_i, \vec{a}_i).$  (1)

The translation length $d_i$ corresponds to the offset of the centers of rotation between eyes, head, neck, and shoulder. The detailed structure of the dual quaternion operator is described in the Appendix. These computations allowed us to build any reference frame transformation as a combination of translations and rotations of body segments. A chain of serial transformations was performed by dual quaternion multiplication. As a result, we computed the complete linkage geometry as the quaternion product of the individual translations and rotations for each body link $i$:

$\hat{Q} = \prod_i \hat{Q}_i.$  (2)
It should be pointed out that our model describes the early aspects of the reach plan, that is, the visuomotor transformation that is performed to convert gaze-centered desired movement vectors into shoulder-centered motor plans. A later motor execution step is required to translate the shoulder-centered motor plan into the actual muscle-based arm movement commands dealing with the kinematic and dynamic properties of the arm and this has been described elsewhere (Todorov, 2000). 
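To make Equations 1 and 2 concrete, here is a minimal numerical sketch of the dual quaternion bookkeeping: a rigid transformation is stored as a rotation quaternion plus a "dual" part encoding the translation, and serial transformations are chained by dual quaternion multiplication. This is a generic implementation of the formalism rather than the authors' code; the example link (a 30° rotation plus a 10-cm offset) is arbitrary.

import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def qrotate(q, v):
    """Rotate 3-D vector v by unit quaternion q."""
    return qmul(qmul(q, np.r_[0.0, v]), qconj(q))[1:]

def dq_from_rot_trans(q, t):
    """Dual quaternion (real, dual) for 'rotate by q, then translate by t'."""
    return q, 0.5 * qmul(np.r_[0.0, t], q)

def dq_mul(A, B):
    """Compose rigid transformations: A after B (B is applied first)."""
    return qmul(A[0], B[0]), qmul(A[0], B[1]) + qmul(A[1], B[0])

def dq_apply(DQ, p):
    """Apply the rigid transformation encoded by the dual quaternion DQ to point p."""
    q, d = DQ
    t = 2.0 * qmul(d, qconj(q))[1:]        # recover the translation vector
    return qrotate(q, p) + t

# Example link: rotate 30 deg about the vertical (z) axis, then translate 10 cm up,
# chained after a pure 20-cm forward translation.
half = np.radians(30.0) / 2.0
q_link = np.array([np.cos(half), 0.0, 0.0, np.sin(half)])
link = dq_from_rot_trans(q_link, np.array([0.0, 0.0, 0.10]))
chain = dq_mul(link, dq_from_rot_trans(np.array([1.0, 0.0, 0.0, 0.0]),
                                       np.array([0.0, 0.20, 0.0])))
print(dq_apply(chain, np.array([0.0, 0.30, 0.0])))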
Model details
Eye-in-head position followed Listing's law (Hepp, 1990), that is, the rotation vector $\vec{r}$ of the eye with respect to the head lies in a plane (Listing's plane), as described elsewhere (Tweed, 1997). In the dual quaternion formulation, this is

Listing's law: $\hat{Q}_L = [Q_L Q_{PP},\ 0],$  (3)

where $Q_L$ is the Listing's law quaternion and $Q_{PP}$ is a quaternion defining the primary position of the eyes in the orbit. These quaternions are (Tweed, 1997)

$Q_L = [0,\ \vec{g}]\, Q_{LP},$  (4)

and

$Q_{PP} = Q_{LP}^{-1}\, [0\ 0\ 1\ 0]^T,$  (5)

where $\vec{g}$ is the direction of gaze in space and $Q_{LP}$ is a quaternion that describes the gravity tilt of Listing's plane (see below). We also implemented the static vestibulo-ocular reflex (VOR) specifying that the eyes counterroll slightly for head rolls (rotation of the head toward the shoulder) and that the normal vector of Listing's plane may tilt with head tilt (rotation of the head around the ear-to-ear axis; Bockisch & Haslwanter, 2001; Haslwanter, Straumann, Hess, & Henn, 1992). The latter is called the gravity pitch of Listing's plane.
Gravity pitch: $Q_{LP} = [0\ 0\ \cos\alpha\ \sin\alpha],$  (6)

with $\alpha = \alpha_0 + c_p \beta_p,$  (7)

Ocular counterroll: $Q_{OCR} = [\cos\beta\ 0\ \sin\beta\ 0],$  (8)

with $\beta = c_{OCR} \beta_r.$  (9)

The dual quaternion representation for the ocular counterroll is

$\hat{Q}_{OCR} = [Q_{OCR},\ 0].$  (10)
In our simulations, the tilt angle of Listing's plane for the head-upright position was $\alpha_0$ = 5°, and the gain for the gravity modulation of this tilt by the pitch angle $\beta_p$ was $c_p$ = 0.05. The gain relating ocular counterroll to the head-roll angle $\beta_r$ was $c_{OCR}$ = 0.05 (Bockisch & Haslwanter, 2001; Haslwanter et al., 1992). Listing's law is often believed to simplify the visual projections because it constrains the space of possible sensory inputs. However, this only applies to the commonly considered, much simplified version of Listing's law that does not take vergence or the static vestibulo-ocular reflex (sVOR) into account. 
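For illustration, the sketch below realizes Listing's law numerically as the shortest-path rotation taking the (gravity-pitched) primary direction to the current gaze direction, with an added ocular counterroll for head roll using the gains of Equations 7 and 9. It is one standard construction consistent with the description above, not a transcription of the dual quaternion Equations 3 to 10; the axis conventions and signs are assumptions.

import numpy as np

ALPHA0, C_PITCH, C_OCR = 5.0, 0.05, 0.05   # gains and tilt from the text (deg, unitless)

def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def axis_angle(axis, angle_deg):
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    half = np.radians(angle_deg) / 2.0
    return np.r_[np.cos(half), np.sin(half) * axis]

def listing_eye_orientation(gaze, head_pitch_deg=0.0, head_roll_deg=0.0):
    """Eye-in-head quaternion obeying Listing's law plus the sVOR terms.
    Assumed convention: x right, y forward, z up; angles in degrees."""
    gaze = np.asarray(gaze, float) / np.linalg.norm(gaze)
    alpha = ALPHA0 + C_PITCH * head_pitch_deg        # Eq. 7: gravity pitch of Listing's plane
    primary = np.array([0.0, np.cos(np.radians(alpha)), -np.sin(np.radians(alpha))])
    axis = np.cross(primary, gaze)                   # rotation axis lies in Listing's plane
    if np.linalg.norm(axis) < 1e-12:
        q_listing = np.array([1.0, 0.0, 0.0, 0.0])   # gaze already at the primary direction
    else:
        angle = np.degrees(np.arctan2(np.linalg.norm(axis), np.dot(primary, gaze)))
        q_listing = axis_angle(axis, angle)
    beta = C_OCR * head_roll_deg                     # Eq. 9: ocular counterroll
    q_ocr = axis_angle(gaze, -beta)                  # small torsion about the line of sight
    return qmul(q_ocr, q_listing)                    # Listing rotation first, then the counterroll

# Example: oblique up-right gaze with a 30 deg head roll.
print(listing_eye_orientation([0.4, 0.8, 0.4], head_roll_deg=30.0))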
For head-unrestrained movements, the stereotyped contribution of the head movement to the final gaze position depends on the axis of rotation, a behavior called Donders' law (Crawford et al., 2003; Tweed, 1997). Donders' law can be expressed in the dual quaternion formalism as follows (Tweed, 1997):

Donders' law: $\hat{Q}_D = [Q_D,\ 0],$  (11)

with $Q_D = \vec{y} + [(1 - \vec{y}\cdot\vec{y})^{1/2}\ 0\ 0\ 0]^T,$  (12)

where the dot stands for the vector dot product and where

$\vec{y} = [0\ \ \gamma_V x_1\ \ \gamma_T x_1 x_3\ \ \gamma_H x_3],$  (13)

with $\vec{x} = [0,\ \vec{g}]\,[0\ 0\ 1\ 0]^T.$  (14)

For the horizontal (H), torsional (T), and vertical (V) gains of the head contribution, we used the following values (Glenn & Vilis, 1992):

$(\gamma_H, \gamma_T, \gamma_V) = (0.7,\ 0.1,\ 0.3).$  (15)
Retinal stimulus positions were presented in degrees of angular eccentricity with respect to the fovea and also incorporated a depth component (radial distance). The retinal hand and target positions (that form the desired movement vector) were then transformed into a shoulder-centered frame of reference, as described above. We used standard measurements for the linkage geometry (joint-to-joint lengths): a shoulder length of 20 cm, a neck length of 10 cm, and a head-to-eye translation length of 15 cm vertically and 17 cm forward. Please refer to the Appendix and Supplementary Methods for details about the dual quaternion formalism. 
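The following simplified sketch illustrates the stepwise rotate-and-translate logic with segment offsets loosely based on the lengths above, mapping a point from eye-centered to shoulder-centered coordinates with plain rotation matrices instead of dual quaternions. The offsets, axis conventions, and names are assumptions for illustration, not the exact linkage model.

import numpy as np

# Assumed offsets (m) and axis convention (x right, y forward, z up).
EYE_FROM_NECK = np.array([0.00, 0.17, 0.15])        # eye center relative to the head's rotation point
NECK_FROM_SHOULDER = np.array([-0.20, 0.00, 0.10])  # head rotation point relative to the right shoulder

def rot_z(deg):
    """Rotation about the vertical axis (horizontal eye or head rotation)."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def eye_to_shoulder(p_eye, R_eye_in_head, R_head_on_trunk):
    """Stepwise rotate-and-translate a point from eye-centered, eye-fixed
    coordinates to shoulder-centered, trunk-fixed coordinates."""
    p_head = R_eye_in_head @ p_eye + EYE_FROM_NECK   # eye frame -> head frame (at the neck joint)
    p_trunk = R_head_on_trunk @ p_head               # head frame -> trunk-fixed axes
    return p_trunk + NECK_FROM_SHOULDER              # re-center on the shoulder

# Example: the same eye-centered target location maps to different shoulder-centered
# locations under different eye orientations (cf. Figure 3B).
target_eye = np.array([0.0, 0.40, 0.0])              # 40 cm along the line of sight
for eye_angle in (0.0, 35.0):
    print(eye_angle, eye_to_shoulder(target_eye, rot_z(eye_angle), rot_z(0.0)))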
Experimental procedures
To test to which extent extraretinal signals are taken into account by the brain, we asked seven human subjects to reach out in complete darkness to a remembered target position while fixating a small light-emitting diode (LED) with different head postures. Seven fixation LEDs were arranged at 0°, 25°, 35°, and 45° eccentricity on the oblique directions (45° from horizontal) of the first and second quadrant of the visual field with respect to straight-ahead viewing, that is, at upper-rightward and upper-leftward positions. Five potential reaching targets were vertically aligned (6.25 cm vertical offset between each) 25 cm left of straight-ahead and the initial hand position was 25 cm to the right. Therefore, reach targets were located such that accurate movements were confined in the frontal plane. At the beginning of each trial, we asked subjects to place their hand at the initial position by touching an LED. A finger-mounted LED was illuminated to allow them to align their index finger to the initial position LED. Next, one of the fixation LEDs was randomly illuminated and subjects were asked to maintain fixation on this LED until the end of the trial. Once the eyes were aligned with the fixation LED, one of the reaching targets was briefly flashed (200 ms) and we asked our subjects to reach out in darkness to the memorized position of the target as soon as the flash appeared. The finger-mounted LED was extinguished at the same time as the flashed reaching target. Thus, both the finger and the reach target LEDs were briefly presented together so that subjects could entirely rely on the visual (gaze-centered) desired movement vector. Meanwhile, the computer activated a motor that physically removed the extinguished target in order to prevent tactile feedback about the subject's performance. Subjects reached from the right to the left using their right hand. 
The experiment consisted of four different conditions. In the first condition, subjects reached with a head-restrained oblique gaze position. Their head was restrained in an upright position by a bite-bar. In the second condition, the subject's head was rolled to various positions: 30° counterclockwise (CCW), 30° clockwise (CW), 15° CCW, and 15° CW by rotating the bite-bar apparatus, and only the straight-ahead fixation target (0°) was presented. In the third (control) condition, the head was fixed upright and only the straight-ahead fixation target was presented. The fourth condition was similar to the first condition but the head was unrestrained and subjects could move it freely. We tested both head-restrained and head-unrestrained conditions because eye-in-space orientation (which determines the retinal projection of the targets) obeys different constraints in both conditions and because we could test different head contributions to the visuomotor transformation. In this first study, we were only interested in the main properties resulting from changes in eye and head positions. Therefore, we did not change the neck and shoulder geometry in our experiment. The 3-D position of one eye was recorded using a scleral search coil (200 Hz; Skalar, Netherlands) and the 3-D orientation and position of the head and the position of the right index finger were recorded using an Optotrak motion analysis system (200 Hz; NDI, Waterloo, Ontario, Canada). 
Data analysis
In order to compare the model predictions to the behavioral results, we computed the predicted 3-D compensation from the model. The model used the measured eye and head positions from the experiments. The predicted 3-D compensation was the absolute amount of compensation needed, as predicted by the model (compared to the predictions when rotations of eye and head were ignored, i.e., set to 0°) using the measured eye and head orientations. Thus, the more extraretinal information used, the higher the 3-D compensation. We then compared the predicted 3-D compensation to the observed compensation from the behavioral experiment. The observed 3-D compensation was the dot product between the observed compensation vector and the predicted compensation unit vector. In order to ensure that this procedure was correct, we verified that there were no significant correlations between the movement errors and the gaze and/or head-roll angle in any condition (see Supplementary Figure 1). Similarly, we computed the remaining (orthogonal) error as the cross product between the observed compensation vector and the predicted compensation unit vector. 
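A sketch of this compensation measure: the observed compensation vector is projected onto the unit vector of the model-predicted compensation (dot product), and the residual perpendicular to it is kept separately (cross product). The example vectors and the per-trial index are hypothetical illustrations.

import numpy as np

def compensation_components(observed, predicted):
    """Split the observed compensation vector into the component parallel to the
    model-predicted compensation (the 'observed 3-D compensation') and the
    remaining orthogonal error. Both inputs are 3-D vectors in shoulder coordinates."""
    unit_pred = predicted / np.linalg.norm(predicted)
    parallel = np.dot(observed, unit_pred)                       # observed 3-D compensation
    orthogonal = np.linalg.norm(np.cross(observed, unit_pred))   # magnitude of the residual error
    return parallel, orthogonal

# Across trials, the regression slope of the parallel component against the
# magnitude of the predicted compensation gives the compensation index (0 to 1).
obs = np.array([0.05, 0.18, 0.02])     # hypothetical observed compensation (m)
pred = np.array([0.04, 0.20, 0.00])    # hypothetical predicted compensation (m)
par, orth = compensation_components(obs, pred)
print(par, orth, par / np.linalg.norm(pred))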
The predicted initial movement direction in Figure 10 was computed as the angle between the motor vector predicted by the model and the motor vector predicted when eye and head rotations were ignored. For the observed initial movement direction, we calculated the angle between the measured direction of the initial component of the reaching movement and the movement vector as predicted by the model without eye–head rotations. The sign of this angle reflects the direction of the observed initial movement relative to the predicted movement direction. The initial component of the reach movement was defined as the vector that connected the hand position at hand movement onset with the hand position 100 ms after hand movement onset. This ensured the absence of any proprioceptive feedback (Calton, Dickinson, & Snyder, 2002). The onset of the hand movement was detected by an absolute hand velocity threshold of 10 cm/s. 
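A sketch of the onset detection and initial-direction computation described above (10 cm/s speed threshold, 100-ms initial segment, signed angle relative to the no-rotation prediction). The 200-Hz sampling rate comes from the Methods; the function names and the sign convention are assumptions.

import numpy as np

FS = 200.0          # sampling rate of the hand recordings (Hz)
V_THRESH = 0.10     # movement-onset threshold: 10 cm/s in m/s

def movement_onset(hand_pos):
    """Index of the first sample whose absolute hand speed exceeds the threshold.
    hand_pos: (n_samples, 3) array of fingertip positions in meters."""
    speed = np.linalg.norm(np.diff(hand_pos, axis=0), axis=1) * FS
    return int(np.argmax(speed > V_THRESH))

def initial_direction_angle(hand_pos, v_no_rotation, normal):
    """Signed angle (deg) between the first 100 ms of the reach and the movement
    vector predicted when eye-head rotations are ignored. 'normal' defines the
    axis about which the sign of the angle is measured."""
    i0 = movement_onset(hand_pos)
    i1 = i0 + int(0.100 * FS)               # 100 ms after onset: before proprioceptive feedback
    v_init = hand_pos[i1] - hand_pos[i0]
    cosang = np.dot(v_init, v_no_rotation) / (np.linalg.norm(v_init) * np.linalg.norm(v_no_rotation))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return np.sign(np.dot(np.cross(v_no_rotation, v_init), normal)) * angle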
For the analysis of the reaching endpoints, we computed 95% confidence ellipsoids. To do this, we performed a standard principal component analysis (PCA) to find the eigenvectors of the mean-corrected reach errors, that is, the directions of maximal variance in decreasing order that formed the directions of our confidence ellipsoid. We then projected the reach errors onto those eigenvectors to obtain the contribution of individual reach errors in every direction of the eigenspace. After adding the previously removed mean back to the data now represented in the new eigenspace, we calculated the mean and 95% confidence limits of the reach errors along each eigenvector and obtained the location, orientation, and size of the 95% confidence ellipsoid. 
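A sketch of this ellipsoid fit: PCA on the mean-corrected reach errors gives the ellipsoid axes, and the 95% limits of the errors projected onto each eigenvector give its extent. The percentile-based limits and the output format are one plausible reading of the procedure, not the authors' exact code.

import numpy as np

def confidence_ellipsoid(errors, level=0.95):
    """Fit a confidence ellipsoid to (n_trials, 3) reach errors. Returns the
    ellipsoid center, its axes (eigenvectors, decreasing variance), and the
    confidence half-lengths along each axis."""
    center = errors.mean(axis=0)
    centered = errors - center                       # mean-corrected errors
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]                # directions of decreasing variance
    eigvecs = eigvecs[:, order]
    scores = centered @ eigvecs                      # errors projected onto the eigenvectors
    lo = np.percentile(scores, 100 * (1 - level) / 2, axis=0)
    hi = np.percentile(scores, 100 * (1 + level) / 2, axis=0)
    return center, eigvecs, (hi - lo) / 2.0

# Example with synthetic errors (meters).
rng = np.random.default_rng(0)
err = rng.normal(scale=[0.03, 0.01, 0.02], size=(200, 3))
print(confidence_ellipsoid(err)[2])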
Results
Model predictions
To gain better insight into the effects and nonlinearities involved in the early visuomotor transformation for reaching, we will describe the different components of the visuomotor transformation separately before making detailed predictions. First, it is usually assumed that gaze angle is the result of adding eye and head orientation angles. Although this is correct for targets at infinite distance, this is not the case for closer targets. As Figure 3A illustrates, the gaze angle for a fixation cross in space is different for different head positions, that is, eye + head ≠ constant. 
Figure 3
 
Geometry. (A) Head rotation translates the eyes in space, which results in different visual and gaze angles, that is, gaze ≠ eye + head angle. Solid orange target lines from the Head = 30° condition are translated into the Head = 0° condition for visualization. (B) Effect of eye rotation on the visuomotor transformation. The same retinal vector corresponds to different spatial motor plans when the eyes change their position. (C) Retinal projection nonlinearity. The same spatial movement vector translated in oblique directions on the frontoparallel plane in space (e.g., the same spatial movement executed from different initial hand positions). Solid cross: retinal horizontal/vertical; dotted crosses: retinal obliques, 90° visual range. (D) Effect of eye-in-space torsion. Central rotated arrows show the effect of CW (dark green) to CCW (light green) head-roll positions (15° steps). The four 45° projection vectors correspond to the same movement vector but viewed under different eye positions, that is, 45° up-right (ur), up-left (ul), down-right (dr), and down-left (dl) positions.
Therefore, to accurately reach out for a target viewed under different head orientations, the brain must take the translation between the centers of rotation into account. Second, eye (and head) rotations must be accounted for. This can easily be seen in Figure 3B, where the same retinal desired movement vector corresponds to two very different movements in space for different gaze angles. Note that these first two effects are intimately related to Listing's law because different head positions modulate Listing's law (through the static VOR) and different eye positions result in different amounts of “false torsion” around the line of sight, particularly in tertiary (oblique) positions. 
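A small numerical illustration of the first point (cf. Figure 3A): because a head rotation translates the eye in space, the gaze angle required to fixate a near target changes with head position, so eye + head is not constant. The 10-cm eye offset and 50-cm target distance are arbitrary example values.

import numpy as np

def gaze_angle(head_deg, eye_offset=0.10, target=np.array([0.0, 0.50])):
    """Horizontal gaze angle (deg, from straight ahead) needed to fixate a target
    ahead of the head rotation center when the eye sits eye_offset in front of
    that center. 2-D top view: x right, y forward; head_deg > 0 = leftward rotation."""
    h = np.radians(head_deg)
    eye_in_space = np.array([-eye_offset * np.sin(h), eye_offset * np.cos(h)])
    to_target = target - eye_in_space
    return np.degrees(np.arctan2(to_target[0], to_target[1]))

for head in (0.0, 15.0, 30.0):
    g = gaze_angle(head)
    print(f"head = {head:5.1f} deg  gaze = {g:6.2f} deg  eye-in-head = {g - head:7.2f} deg")
# The gaze angle (eye + head) needed to fixate the same near target changes with
# head position, i.e., eye + head is not constant for near targets.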
Third, the projection of a spatial desired movement vector onto the spherical retina is inherently nonlinear. Figure 3C shows how the same movement vector translated in the frontoparallel plane produces very different retinal images, both changing the length and tilt of the retinal vector. Finally, “false torsion” due to Listing's law and the sVOR modulation of Listing's law change the tilt of the retinal desired movement vector. This is illustrated in Figure 3D for different head-roll angles (center part of panel D) and for different eccentric eye positions (outer parts of panel D). Note that in our model, ocular torsion relative to the head can be up to 8°. This is much smaller than the range of head-roll angles but nevertheless results in a significant effect on the visuomotor transformation. 
The effect of the projection geometry has been analyzed previously (Crawford & Guitton, 1997; Henriques & Crawford, 2000). Here, we focus on the effect of eye and head rotations as well as the offset between the eye and head rotation centers. (Note, however, that the projection geometry will interact with the extraretinal signal effect.) We investigated the use of the 3-D eye and head positions separately in our head-restrained and head-unrestrained experimental conditions respectively. To specifically consider torsion/roll, we also tested subjects in a head-roll condition. 
Figure 4 shows predictions of the model with (green) and without (red) accounting for eye–head rotations for three typical situations. The model without rotation (red) uses a fixed mapping from hand motor error in eye coordinates to hand motor error in shoulder coordinates, that is, as if the eye and head orientations were at their straight-ahead “primary” positions. By definition, this produces the same reaching motor commands as the full model (green arrow) for a straight-ahead gaze direction and an upright head position ( Figure 4A). However, in any other viewing condition, for example, when gaze is offset from the primary position and/or the head moves, the model without rotations produces large reaching errors. This is because the same arm movement plan is predicted for the same (gaze-centered) motor error, irrespective of the actual eye–head position. Therefore, the spherical projection of the visual environment onto the retina gives rise to a 3-D tilt and scaling of the desired reaching vector during the direct gaze-centered to shoulder-centered transformation. 
Figure 4
 
Model predictions for visually guided reaching. In the retinal representation (right column), actual hand and target (disc) positions are compared to compute the retinal (gaze-centered) motor error (solid grey arrow). Solid lines represent the space horizontal and vertical axes. The left column shows the visual environment and the actual body configuration (gaze position: black dotted line). Predictions made by the model ignoring eye–head rotations (dotted red arrow) and the full model (green arrow) are shown. (A) Primary eye and head positions. (B) 35° oblique gaze with upright head position (the model without rotations predicts a depth component of reaching, i.e., as if the target was behind its actual location). (C) 20° rightward head-roll with straight-ahead eye-in-head position.
The importance of taking extraretinal eye–head position signals into account can be observed in Figures 4B and 4C for reaching movements toward peripheral targets viewed in oblique gaze positions and CW head roll around the straight-ahead axis, respectively. For example, for a 50-cm desired reaching movement (as shown in Figure 4), the model produced almost 22 cm of error—mainly in depth—in the case of a 35° oblique gaze angle (around 27° up and 27° left) if rotations were ignored (red). For a 20° CW head roll, the error in the reaching plan expected when ignoring rotations for the same 50 cm movement was 17 cm ( Figure 4C). This illustrates the importance of taking 3-D eye and head rotations into account in the visuomotor transformation for reaching. 
A more complete set of predictions across a number of oblique gaze shifts and head rolls can be found in Figure 5. The model without rotations produced similar errors for head-restrained (Figure 5A) and head-unrestrained (Figure 5B) oblique gaze shifts with contributions from all three spatial directions. Errors in reach distance (depth) arise because a desired gaze-centered movement vector is rotated as if the eyes and head were in primary position (when rotations are ignored in the model). As a consequence, the resulting shoulder-centered motor plan has a depth error. 
Figure 5
 
Detailed model predictions for oblique gaze and head roll. Reaching errors as predicted by the model when ignoring extraretinal rotation signals (solid lines are components along each axis, bold lines are the total error) and the full model (dotted green lines) in each direction. (A) Oblique gaze condition with head-restrained straight-ahead. (B) The same condition but with the head moving freely. (C) Straight-ahead gaze with different head-roll angles. The components in all directions and the total error are shown ( x axis: left-to-right; y axis: back-to-front; z axis: down-to-up).
In the head-roll condition (rotation around the straight-ahead y axis; Figure 5C), only horizontal and vertical errors were predicted. The simulations of our model without rotations also point out the nonlinearity of the errors—errors that the brain must compensate for in order to optimize the reach plan. 
Accounting for eye and head rotation
We compared our model's predictions with the natural behavior of seven human subjects by conducting a series of reaching experiments toward targets viewed in the visual periphery. We began by investigating whether the brain does indeed account for rotations of the eye and head. Figure 6 shows the typical reaching performance of a subject in the four experimental conditions: control (panel A), 45° oblique gaze (rightwards and up) with head restrained (panel B), 45° oblique gaze head unrestrained (panel C), and head roll (30° CCW; panel D). 
Figure 6
 
Typical examples of human reaching performance (Subject 4). Left panels show the view from behind the subject; right panels are side views. Blue dots represent 10 different trials in each panel (2 trials to each target; dots are separated by 15 ms). Blue discs depict targets; red arrows are the movement vectors predicted by the model when ignoring rotations. (A) Straight-ahead gaze and head-restrained upright control. (B) First quadrant 45° fixation condition with head-restrained upright. (C) The same fixation condition but subjects could move their head freely. (D) Straight-ahead fixation with the head rolled 30° CCW.
The reaching movements predicted by the model when ignoring eye–head rotations toward the five potential targets are shown by the red arrows in the figure. These errors were—obviously—not observed in human behavior ( Figure 6, blue trajectories). Despite a slight consistent leftward overshoot of the targets as well as a small systematic downward offset of reaching endpoints in this particular subject, reaching performance was reasonably accurate. In other words, subjects compensated for the 3-D nonlinear linkage structure of the body to achieve correct reaching movements. This is in accord with everyday experience, but we normally have visual and proprioceptive feedback about our movements (see below). 
To further quantify the experimental data, Figure 7 compares the 3-D compensation (see Methods section) observed in subjects' reaching to the one predicted by the model (with and without taking eye–head rotations into account). The underlying rationale here is to quantitatively illustrate the importance of using extraretinal eye–head position signals and to investigate whether the brain sometimes uses an approximation of the complete 3-D reference frame transformation. If this was the case, we would expect the observed behavior to be intermediate between the model with and without rotation. 
Figure 7
 
Predicted versus observed 3-D compensation in the eye–head geometry. Gray dots: data pooled for all subjects. Red and green lines: model predictions for the full model (green, perfect compensation, slope = 1) and the model ignoring eye–head rotations (red, no compensation, slope = 0). Black dotted line: mean squares fit to data. 3-D compensation for the head-restrained straight-ahead condition (A), the head-unrestrained condition (B), and the head-roll condition (C).
By definition, not taking rotations into account produces no compensation for the 3-D components of the visuomotor transformation (red line = slope of 0). In other words, this is the behavior that would result from using a gaze-centered movement plan to control a shoulder-centered effector without further comparison with eye and head orientation. In contrast, our complete model accounts for those nonlinear components (green line = slope of 1). Thus, the slope of regression lines on actual data provides an index of compensation ranging from zero to one. The regression for the head-restrained condition (Figure 7A) provided a slope of 0.961 (R = .960, N = 490, p < .001; subject variability: slope = 0.92–1.03). In the case of the head-unrestrained condition (Figure 7B), the regression slope was 0.958 (R = .981, N = 420, p < .001; subject variability: slope = 0.93–1.02). For the head-roll condition (Figure 7C), we obtained a slope of 0.972 (R = .917, N = 350, p < .001; subject variability: slope = 0.91–1.07). All the slopes between the predicted and observed values were indistinguishable from 1 (t test, p > .05) and significantly different from zero (t test, p < .001) for all reaching conditions. This suggests that, as far as our data can show, the brain does not use an approximation of the complete 3-D eye–head–shoulder geometry but instead accounts for the actual linkage configuration of the body. 
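A sketch of this slope-based compensation index: regress observed on predicted compensation and test the slope against 0 (no compensation) and 1 (full compensation). SciPy is assumed, and the t tests on the slope follow the standard formula.

import numpy as np
from scipy import stats

def compensation_slope(predicted, observed):
    """Least-squares slope of observed vs. predicted 3-D compensation, with
    t tests of the slope against 0 (no compensation) and 1 (full compensation)."""
    res = stats.linregress(predicted, observed)
    df = len(predicted) - 2
    p_vs_0 = 2 * stats.t.sf(abs(res.slope / res.stderr), df)
    p_vs_1 = 2 * stats.t.sf(abs((res.slope - 1.0) / res.stderr), df)
    return res.slope, res.rvalue, p_vs_0, p_vs_1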
Explaining the variability of reach
It is possible that the reach errors observed in the behavioral data reflect influences of task conditions. These in turn might reveal which reference frames were involved in movement planning. We performed two-way ANOVAs (Bonferroni post hoc test) on the reach errors of each subject. The two independent factors for the head-restrained/head-unrestrained and the head-roll conditions were reach target position and fixation position and reach target position and head-roll angle, respectively. Across all three spatial directions, there was no significant effect of these factors on reach errors ( p > .10 in all comparisons) for all but one subject (Subject 2) who showed a significant influence of fixation position in the head-restrained condition (but not in the head-unrestrained or head-roll condition), F(6, 45) = 4.45, p < .05. This was essentially due to the 45° fixation position data, where errors were significantly different from reach errors during the straight-ahead fixation position for this subject. 
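For reference, one way to run such a per-subject two-way ANOVA with statsmodels; the data frame and its column names ('error', 'target', 'fixation') are hypothetical, and the Bonferroni post hoc step would follow as pairwise comparisons with a corrected alpha.

import statsmodels.api as sm
from statsmodels.formula.api import ols

def reach_error_anova(df):
    """Two-way ANOVA of reach error with target position and fixation position
    (or head-roll angle) as factors. df is a pandas DataFrame with columns
    'error', 'target', and 'fixation'."""
    model = ols("error ~ C(target) * C(fixation)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)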
To consolidate this detailed examination of the reaching endpoints, we fit 95% confidence ellipsoids onto the reaching errors (merging data for all reach targets) separately for each subject and each fixation or head-roll condition (see Methods section). The major axis of those ellipsoids was mostly oriented along the x axis (i.e., horizontally), but we did not observe any consistent modulation of the ellipsoid size, location, or orientation with fixation position or head-roll angle (see also Supplementary Figure 1). These analyses confirm that the brain accounts for the complete 3-D linkage geometry of the eye–head–shoulder system in computing the visuomotor transformation for reaching. 
Our model uses extraretinal eye and head position signals in order to transform the visual movement vector into the motor vector in shoulder-centered coordinates. Because it is believed that extraretinal signals are noisy (Gellman & Fletcher, 1992; Li & Matin, 1992), we investigated whether modifying our model to include noise might be able to explain some of the variability observed in the data. An extensive noise analysis is beyond the scope of this paper; however, to illustrate the principle we chose to analyze the effect of noisy eye position signals on the variability of the predicted reach and compare this variability to the data (Figure 8). 
Figure 8
 
Reaching variability explained by noise in extraretinal signals. We plot the SD of the reaching endpoint for the head-restrained condition as a function of gaze angle (similar to Figure 5) and for all three spatial directions separately (x axis: left–right; y axis: backward–forward; z axis: down–up). (A) Reaching variability as simulated by our model if extraretinal eye position is prone to signal-dependent noise (see text for details). For each eye position, we used N = 1,000 reaches. (B) Observed reaching variability across all seven subjects.
We tried several hypothetical versions of a noisy visuomotor transformation (data not shown). Here, we only report the approach that proved most useful in modeling our data. For the simulations, we introduced two noise sources that were (1) a constant Gaussian noise level and (2) an eye-position-dependent noise where the noise level was scaled with the square root of the eye position angle independently for all three directions of rotation. We summed these two noise sources and then simulated the head-restrained experimental condition for the central reach target viewed under different eye positions (same oblique eye positions as in the experiment). For each eye position, we simulated N = 1,000 noisy visuomotor transformations and calculated the standard deviation of the predicted reach endpoint for all three spatial dimensions ( Figure 8A). The closest match with the data ( Figure 8B) was found under the assumption that initial hand position was not subject to this visuomotor transformation noise, perhaps due to a later comparison between hand and target position in shoulder-centered coordinates (see Discussion section). 
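A sketch of this noise model: a constant Gaussian term is summed with an eye-position-dependent term that scales with the square root of the eye-position angle, and the noisy eye position is fed to the visuomotor transformation many times to obtain the endpoint SD. The noise magnitudes and the 'transform' placeholder are assumptions, not the fitted values.

import numpy as np

SIGMA_CONST = 0.5     # constant noise level (deg), assumed magnitude
SIGMA_SCALE = 0.3     # gain of the sqrt(eye-position) dependent noise (deg), assumed

def noisy_eye_position(eye_pos_deg, rng):
    """Add constant plus eye-position-dependent Gaussian noise to a 3-D eye
    position signal (horizontal, vertical, torsional components in degrees)."""
    eye_pos_deg = np.asarray(eye_pos_deg, float)
    sd = SIGMA_CONST + SIGMA_SCALE * np.sqrt(np.abs(eye_pos_deg))
    return eye_pos_deg + rng.normal(0.0, sd)

def endpoint_sd(transform, target_eye, eye_pos_deg, n=1000, seed=0):
    """SD of simulated reach endpoints when the visuomotor transformation
    'transform(target_eye, eye_pos_deg)' is fed noisy eye position signals."""
    rng = np.random.default_rng(seed)
    endpoints = np.array([transform(target_eye, noisy_eye_position(eye_pos_deg, rng))
                          for _ in range(n)])
    return endpoints.std(axis=0)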
The simulation data from Figure 8A predict that the depth component (Y) should show the least reaching endpoint variability. In addition, the oblique eye positions should result in an asymmetry of the vertical reach error variability. Finally, there should be the least variability at central fixations because of the eye-position-dependent noise component. The trend in our data ( Figure 8B) confirms all three predictions. Thus, our analysis indicates that extraretinal signals in the brain are noisy and as a result part of the observed movement variability can be attributed to a noisy early visuomotor transformation. 
Accounting for the offset of rotation centers
In addition to accounting for the rotations of body segments, a strength of the framework we present here is that it allowed us to analyze whether the visuomotor transformation takes the complete 3-D translation between the centers of eye and head rotation into account ( Figure 2, dotted red arrow). This is different from previous studies (Henriques & Crawford, 2002; Henriques et al., 2003) in that we investigated reaching movements to peripherally viewed targets (compared to pointing to mostly foveated targets) and also varied all three dimensions of head movements (previously, only horizontal rotations were tested). To do so, we simulated the errors that were predicted when the offset between the centers of rotations was ignored in the gaze to shoulder-centered transformation of the reach target (head roll: Figure 9A; head-unrestrained gaze position: Figure 9C). 
Figure 9
 
Predicted versus observed errors when ignoring the translation between rotation centers. (A) Predictions of errors for the head-roll condition as a function of head-roll angle if translation between eye and head rotation centers is ignored. (B) Observed error parallel to the predicted error is represented as a function of the error predicted when the offset of rotation centers is ignored. Data from all subjects pooled into 2-cm bins (mean ± SE). Translation included: green; ignored: red. Dotted black lines and gray area: mean and SD of the remaining (perpendicular ⊥) error. (C) Error predictions for the head-unrestrained gaze condition as a function of gaze angle and using Donders' strategy for the head contribution to the gaze shift. (D) Same representation for the head-unrestrained condition.
We then plotted the magnitude of the observed reaching error parallel to the predicted error as a function of the predicted error's magnitude. This is shown in Figure 9B for the head-roll and in Figure 9D for the head-unrestrained gaze condition. In the head-roll condition, there was no significant correlation between the observed errors and the errors predicted by ignoring translation ( Figure 9B, dashed regression line: slope = 0.136, R = .063, p = .278, remaining error = 6.28 ± 2.89 cm). Similarly, the regression analysis did not show any significant relationship between the predicted and observed error in the head-unrestrained condition ( Figure 9D, dashed regression line: slope = 0.200, R = .074, p = .312, remaining error = 5.14 ± 3.68 cm). The observed reach errors were not significantly correlated with the errors predicted by the model when translation was ignored. Consequently, as suggested by Henriques and Crawford (2002) and Henriques et al. (2003), this demonstrates that the visuomotor transformation takes the offset of the centers of rotation of different body segments into account. 
Feed-forward versus feedback visuomotor transformation
We have shown that the movement endpoints were geometrically accurate, even without visual feedback. However, it is still unclear whether this resulted from a feed-forward visuomotor transformation because it is possible that the reaching movement could initially follow a model ignoring eye–head rotations and then use online proprioceptive feedback to correct for the errors in motor planning. This is physiologically plausible because PPC appears to be involved in online feedback guidance of reaching (Desmurget et al., 1999; Grea et al., 2002; Pisella et al., 2000) and receives proprioceptive feedback from the hand (Buneo et al., 2002). If this were true, we would expect that the initial movement direction would follow the reach trajectory predicted when ignoring rotation. 
To test this hypothesis, we computed ( Figure 10) the angle of the initial movement trajectory with respect to the movement direction when extraretinal eye–head signals are not used. This initial movement direction was computed before any proprioceptive feedback could reach the motor command (for details, see Methods section). It is apparent that despite the motor noise in movement direction, the initial movement direction did account for eye–head rotations (green line = slope of 1) and did not follow the prediction that extraretinal signals were ignored (red line = slope of 0). Here again, the regression line (dashed black line, slope = 0.929, R = 0.524, N = 1260, p < .001) was indistinguishable from a slope of 1 and significantly different from zero ( t test, p < .001). Thus, the internal model of the early visuomotor transformation accounts for both the rotations and translations of the eye–head–shoulder linkage when planning a hand movement. 
Figure 10
 
Initial movement direction presented in the same manner as in Figure 6, but plotted as relative directions. The measured initial movement angle relative to the model ignoring rotations was plotted as a function of the movement angle predicted by the full model (see Methods section).
Discussion
We describe a quantitative geometrical model for the visuomotor conversion between a gaze-centered representation of the hand/reach target and a shoulder-centered desired reach plan. To demonstrate the potential and the necessity of such a model, we compared its predictions to simplified versions that ignore eye–head rotation or the translation between rotation centers. Our behavioral experiment showed that human subjects did not use such simplifications, but rather used a 3-D accurate visuomotor transformation. Their reaching trajectories accounted for nonlinearities resulting from the spherical projections of the targets onto the retina, from the different spatial locations of the rotation centers of the eyes and head, as well as from different eye and head rotations, and they did so without visual or proprioceptive feedback control. This model provides the mathematical foundation for studying pathologies that involve reaching impairments and for developing more complex physiological models. 
Accounting for the complete 3-D body geometry
The behavioral examples in Figure 6 reveal the basic experimental results. Although the memory-guided arm movement trajectories showed certain errors such as noise, constant biases, and the “retinal exaggeration effect” (Bock, 1986; Enright, 1995; Henriques et al., 1998), their accuracy was not affected by eye and head orientation. This means that the brain successfully transformed the gaze-centered desired movement vector into a shoulder-centered reach plan (Figure 7). These results confirm previous, more fragmentary results from visuomotor transformation and spatial updating experiments indicating that the brain completely accounts for the current geometrical state of the eye–head configuration (Crawford & Guitton, 1997; Henriques & Crawford, 2002; Henriques et al., 1998, 2003; Medendorp & Crawford, 2002; Soechting et al., 1991). Compared to these previous studies, we analyzed goal-directed reaching, which potentially involves different mechanisms than the alignment of the finger with the line of sight, as is the case for pointing (Henriques & Crawford, 2002; Henriques et al., 1998, 2003; Medendorp & Crawford, 2002), or the control of eye movements (Crawford & Guitton, 1997). We also investigated the role of all degrees of freedom of 3-D eye and head movements, instead of the one-dimensional restrictions used in those previous investigations. In addition, our study directly addresses the geometry of the visuomotor reference frame transformation problem without using remapping/updating. Overall, we were unable to find evidence for the use of any approximations of the complete 3-D body geometry. This was true not only for the rotational states but also for the translations between rotation centers (Figure 9). However, we draw these conclusions from a failure to disprove our null hypothesis and can therefore not exclude a small, undetected effect. 
To what extent (if any) are different extraretinal signals (Lewis, Gaymard, & Tamargo, 1998) accounted for? One might imagine different weightings for signals coding eye and head rotations, for example, due to the difference in accuracy between efference copy and proprioception. In addition, it was also unclear whether the translational component of the rotation axes among eye, head, and shoulder is accounted for, as is the case for the VOR (Medendorp, Van Gisbergen, Van Pelt, & Gielen, 2000). Here, comparing our head-restrained and head-unrestrained data, we showed that both eye and head position information was used accurately, that is, the correlation coefficients were the same for both conditions. This suggests that efference copy and proprioception provide equally accurate position signals to the visuomotor transformation pathway. Note, however, that eye and head position information might have different precision. A more precise head position signal might explain why the scatter of data in Figure 7 is larger for the head-restrained than for the head-unrestrained condition. Moreover, the similarities in reaching performance between the head-restrained and head-unrestrained conditions together with Figure 9 suggest that the translational component is correctly represented in the internal model. 
Depth component of reaching
Our model allowed us to analyze the depth component of reaching. Our implementation of gaze-centered distance assumes that depth encoding (and decoding) in the brain is accurate. However, as mentioned before, errors in depth can arise if one ignores eye and/or head rotations. The reaching endpoints in our behavioral experiment revealed no such depth errors, suggesting therefore that not only is object distance encoded correctly, but also that it is accurately transformed. In the brain, depth coding might be performed with respect to the cyclopean eye (Khokhotva et al., 2005; Ono & Barbeito, 1982). It has been proposed that angular eccentricity and radial distance information are computed separately and are recombined to form a single representation of the 3-D target (or hand) location in space in gaze-centered coordinates (Genovesio & Ferraina, 2004; Gnadt & Mays, 1995). We hypothesize that this provides the basic input to compute a 3-D desired movement vector (used in our model), which is then transformed from gaze to shoulder-centered coordinates. 
With regard to the encoding of depth, our model is based on the assumption that the brain possesses correct 3-D gaze-centered hand and target position information. As mentioned before, the 2-D cyclopean direction can be constructed by merging both retinal images (Ding & Sperling, 2006; Khokhotva et al., 2005; Ono & Barbeito, 1982). How the brain extracts depth information from retinal and extraretinal signals remains controversial, however. Binocular retinal signals alone (Mayhew & Longuet-Higgins, 1982) as well as in combination with ocular vergence (Collewijn & Erkelens, 1990; Mon-Williams, Tresilian, & Roberts, 2000; Richard & Miller, 1969; Ritter, 1977; Viguier, Clement, & Trotter, 2001) and/or accommodation and retinal blur (Mon-Williams & Tresilian, 1999, 2000) have been proposed to contribute to this calculation. However, none of these studies have considered the real eye–head geometry and spherical projection properties. In a recent theoretical study presented in abstract form, we suggested that early visual areas involved in the internal construction of the gaze-centered 3-D target/hand position require full 3-D eye and head position signals and vergence in order to uniquely infer depth from binocular retinal information (Blohm & Crawford, 2006). 
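To illustrate why vergence carries metric depth information, consider the textbook geometric relation for a target fixated on the midsagittal line (given here purely for illustration; the numerical values are arbitrary examples and not taken from the model). With interocular distance a and vergence angle γ, the fixation distance is
\[
D = \frac{a/2}{\tan(\gamma/2)} \approx \frac{a}{\gamma} \quad (\gamma \text{ in radians}),
\]
so for a = 6.5 cm and γ ≈ 3.7°, D ≈ 1 m. For targets off the midline, or when the eyes and head are rotated, this simple relation no longer holds, which is why the full 3-D eye–head geometry enters the depth computation.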
Reference frames in reaching
Unlike McIntyre, Stratta, and Lacquaniti (1997, 1998), we did not find any evidence for an influence of a specific reference frame on the reach endpoints. This might be due to differences in the experimental design between our study and McIntyre's publications. For example, McIntyre et al. (1997) used dim lighting and thus allowed for movement feedback and the use of external, world-fixed reference frames. In addition, in their study, reach targets within a block of trials were confined to a 2.5-cm volume (the same order of magnitude as the pointing error in their study), which suggests the possibility that subjects simply recalled previous motor plans instead of performing the complete visuomotor transformation in each trial. The reference frame analysis in McIntyre et al. (1998) was based on local errors within this 2.5-cm target volume and did not reveal any particular reference frame for pointing errors. In addition, this latter study did not test different eye–head orientations. They tested subjects in a dim lighting and a complete darkness condition, but even in complete darkness the errors seemed to arise from the actual movement dynamics, that is, from the inertial anisotropy of the arm, and not from a gaze- to shoulder-centered reference frame transformation. 
Although we did not find a modulation of reaching accuracy for different eye and head positions ( Figure 6), we do report that extraretinal signals modulate reaching precision ( Figure 8). A simple signal-dependent noise model—similar to the noise properties of real neurons—was able to reproduce the main characteristics of the reaching endpoint variability across different eye positions. We can draw several conclusions from this observation. First, the extraretinal signals used by the brain to perform the visuomotor transformation seem to be prone to noise. Second, we could best reproduce the reaching variability pattern by assuming that initial hand position was not affected by noise. This suggests that the comparison between hand and target position could be performed within different parallel representations (e.g., gaze centered and shoulder centered). Because initial hand position can also be derived from proprioception, this position might not be affected by eye-position-dependent noise, but rather by a constant noise level related to proprioception (because initial hand position was constant in our experiment). This idea is consistent with findings reported elsewhere by Khan et al. (in press). Finally, the fact that we were able to explain part of the reaching endpoint variability contradicts the current general belief in the field that movement variability exclusively results from the motor control stage. Instead we show here that at least part (i.e., the signal-dependent part) of the noise results from a noisy visuomotor transformation. As to the constant noise component, this likely results from at least two sources, that is, motor execution noise and initial hand position noise. 
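As an illustration of the kind of simulation this argument rests on, the following sketch (a simplified 2-D version with hypothetical parameter values, not the simulation used for Figure 8) adds signal-dependent noise to the eye-position signal before that signal is used to rotate the retinal movement vector, and shows how endpoint variability then grows with gaze eccentricity:

```python
# Illustrative sketch (assumptions, not the authors' simulation code): add
# signal-dependent Gaussian noise to the extraretinal eye-position signal used
# in a simplified 2-D version of the transformation. Initial hand position is
# taken as noise-free, as suggested in the text.
import numpy as np

def reach_endpoints(gaze_deg, n=1000, k=0.05, base_deg=0.5, target_ecc_deg=10.0,
                    viewing_dist=0.30, rng=None):
    """Simulate endpoints when the eye-position signal is noisy.

    Noise SD grows with the magnitude of the eye-position signal (k * |gaze|),
    on top of a constant baseline (base_deg). All parameter values are
    hypothetical and chosen only for illustration.
    """
    rng = rng or np.random.default_rng(0)
    gaze = np.radians(gaze_deg)
    noisy_gaze = gaze + np.radians(base_deg + k * abs(gaze_deg)) * rng.standard_normal(n)
    retinal_dir = np.radians(target_ecc_deg)          # target direction relative to gaze
    # The brain rotates the retinal vector back to space using the *noisy* signal.
    planned_dir = retinal_dir + noisy_gaze
    return viewing_dist * np.column_stack([np.sin(planned_dir), np.cos(planned_dir)])

for g in (0, 15, 30, 45):
    sd = reach_endpoints(g).std(axis=0) * 100.0       # endpoint SD in cm
    print(f"gaze {g:2d} deg: endpoint SD (x, y) = {sd.round(2)} cm")
```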
Theoretical considerations
In the model ( Figure 2), we compare hand and target positions in gaze-centered coordinates (Buneo et al., 2002). Theoretically, the only important constraint for this target-hand comparison is the use of a common frame of reference (Shadmehr & Wise, 2005). Whether the vector of motor error is computed in gaze-centered (Buneo et al., 2002), head-centered (Duhamel, Bremmer, BenHamed, & Graf, 1997), or shoulder-centered (Kalaska et al., 1997) coordinates, the same amount of information is required for the 3-D visuomotor transformation. Therefore, in real 3-D space, the complexity of the visuomotor transformation cannot be reduced by the choice of the reference frame for this comparison. In addition, the comparison could theoretically be done at multiple redundant stages, which could serve distinct purposes in different eye–hand coordination tasks. 
A crucial issue for motor planning is that, due to the nonlinear properties of the linkage geometry (Pouget & Sejnowski, 1997), a desired retinal movement vector cannot be treated independently of its point of attachment. That is, the neural representation depends not only on the desired 3-D vector (e.g., movement direction and amplitude) but also on the location of that vector. Even 2-D representations of a motor plan in different reference frames are affected by 3-D geometry (Crawford & Guitton, 1997) and are therefore not sufficient to describe the complete early visuomotor transformation. Our model takes all of these nonlinear issues into account. 
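A small numerical example (ours, with arbitrary values) makes the point-of-attachment issue concrete: the same 10° retinal displacement corresponds to different spatial displacements on a frontoparallel plane depending on the retinal eccentricity at which it starts.

```python
# Toy example (ours, not from the paper's implementation) of why a retinal
# movement vector cannot be treated independently of where it starts on the
# retina: the same 10-degree retinal displacement corresponds to different
# spatial displacements on a frontoparallel plane, depending on eccentricity.
import numpy as np

viewing_distance_cm = 50.0          # hypothetical distance to the plane
retinal_step_deg = 10.0             # identical retinal movement vector

for eccentricity_deg in (0.0, 20.0, 40.0):
    start = viewing_distance_cm * np.tan(np.radians(eccentricity_deg))
    end = viewing_distance_cm * np.tan(np.radians(eccentricity_deg + retinal_step_deg))
    print(f"start at {eccentricity_deg:4.1f} deg: spatial displacement = {end - start:5.2f} cm")
```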
Previous studies have shown that gaze-centered movement plans are accurately updated across eye movements because they take into account the rotational (Henriques et al., 1998; Khan, Pisella, Rossetti, et al., 2005; Khan, Pisella, Vighetto et al., 2005; Medendorp, Goltz, et al., 2003) and translational components (Li & Angelaki, 2005) of body geometry. But this by itself does not produce an accurate shoulder-centered motor plan. Even if correctly updated, gaze-centered movement plans nevertheless still have to undergo a complete 3-D visuomotor transformation to convert them into shoulder-centered motor commands. Although the brain may acquire an internal geometrical representation of the distances between the body joints through learning (Davidson & Wolpert, 2003; Shadmehr, 2004; Shadmehr & Wise, 2005), the current eye and head positions have to be deduced from extraretinal signals (Lewis et al., 1998). 
Our model used dual quaternions to describe the complete 3-D visuomotor transformation for reaching. However, it is unlikely that the brain actually makes use of the same formalism. We have recently shown that this visuomotor transformation can be computed with great accuracy in a feed-forward neural network that employs a distributed mechanism (Blohm, Keith, & Crawford, 2006, 2007). However, the underlying computations must remain the same to provide the same results. Another limitation of our model is that at its current stage we only implement feed-forward operations and do not consider feedback control mechanisms. However, this is only a first step in modeling the visuomotor transformation for reaching; incorporating feedback loops will be an important future step. 
Feed-forward versus feedback control
Our analysis of the initial movement direction (Figure 10) supports the existence of a feed-forward visuomotor transformation for reach planning. This does not, of course, exclude the presence of proprioceptive feedback mechanisms later in the movement (Pelisson, Prablanc, Goodale, & Jeannerod, 1986; Sabes, 2000), but it demonstrates that the early visuomotor transformation for reaching is geometrically correct (note that in our experiment, subjects did not have any visual or tactile feedback about their performance). Also, deviations from the perfect trajectory might of course arise from other factors (such as the stiffness and anisotropy of the arm or cognitive aspects) that are independent of the visuomotor transformation geometry considered in this study. A geometrically accurate feed-forward visuomotor transformation is advantageous even in the presence of online movement adjustments because it reduces the computational requirements of the feedback control loop. 
In our experiment, subjects could not use visual feedback of the hand during the reach, which would be unusual in a more natural context. So the question arises: what is the need for such a complex transformation if visual or proprioceptive feedback could normally be used to correct the errors of an imperfect transformation? First, there may be situations where visual feedback is not possible, or not optimal, for example, when the image of the hand stimulates the peripheral retina and attention is entirely engaged on a foveated target. In this situation, a complete 3-D transformation model frees the visual/oculomotor system to move on to other tasks instead of being tied to continuously comparing hand and target position. Second, visual and proprioceptive feedback are slow compared to fast arm movements; by the time a rapid ballistic movement like a punch is over, it is too late to correct it. Third, even for slow movements, a nonoptimal transformation would constantly require corrections on top of other corrections, with only asymptotic convergence toward accuracy. Of course, visual feedback is needed in reach movements, especially when unexpected changes or obstacles occur (Hamilton & Wolpert, 2002; Sabes & Jordan, 1997; Schindler et al., 2004), and we believe that a completely optimal system uses both an optimal transformation and visual/proprioceptive feedback. 
In order to investigate how different feedback loops are used in the dynamic control of a reach, one needs to build an integrated model that incorporates our geometrical transformations into the visual feedback control loop and combines this with models that describe the generation of reach trajectories from shoulder-centered desired movement vectors and that implement lower level proprioceptive/efference copy feedback control (Todorov, 2000). Testing such an integrated model under different geometrical configurations of the body in a dynamic environment should provide valuable insight into the relative contributions of these different control loops. 
Neurophysiology and clinical relevance
Brain regions involved in the visuomotor transformation should be found in the visuomotor pathway between the parietal and frontal lobe ( Figure 1B). The PPC appears to encode early motor commands in mostly gaze-centered coordinates (Batista et al., 1999; Crawford et al., 2004) whereas PM displays neural activity consistent with our notion of shoulder-centered coordinates (Caminiti, Johnson, Galli, Ferraina, & Burnod, 1991; Crammond & Kalaska, 1996; Fogassi et al., 1996; Graziano et al., 1997; Kakei et al., 2001, 2003; Kalaska et al., 1997; Scott, Gribble, Graham, & Cabel, 2001). Furthermore, the parietal–frontal neural network has all the necessary signals (including 3-D visual inputs), and they are known to be modified by eye (Batista et al., 1999; Battaglia-Mayer, Caminiti, Lacquaniti, & Zago, 2003; Boussaoud, Jouffrais, & Bremmer, 1998; Mushiake, Tanatsugu, & Tanji, 1997) and head (Brotchie, Andersen, Snyder, & Goodman, 1995) orientation. One way the 3-D visuomotor transformation could take place is through eye and head position “gain fields” (Salinas & Abbott, 1995; Zipser & Andersen, 1988) that adjust the relative contributions of individual units so that the gaze-centered motor error command encoded in the PPC population is scaled and rotated (Pouget & Sejnowski, 1997; Salinas & Abbott, 2001) to provide the appropriate shoulder-centered motor error in the PM population. Alternatively, hand and target positions could be transformed independently into shoulder-centered coordinates and then combined at the level of PM, although this is not what the recent literature suggests (Buneo et al., 2002). A combination of both mechanisms with multiple distributed comparisons of hand and target positions at different levels of the visuomotor transformation could also be a possibility and has been reported (Khan et al., in press). 
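The following toy sketch (purely illustrative; it is not a model of PPC or PM, and all unit counts, tuning widths, and gain slopes are hypothetical) shows the computational principle behind such gain fields: multiplicative eye-position modulation of retinotopically tuned units allows a simple linear readout to approximately recover the target location in a non-retinal frame.

```python
# Schematic "gain field" sketch, offered only as an illustration of the general
# idea in the cited literature: Gaussian retinotopic tuning is multiplicatively
# modulated by eye position, and a least-squares linear readout can then
# approximately decode target position in space.
import numpy as np

rng = np.random.default_rng(0)
n_units = 200
pref_retinal = rng.uniform(-70, 70, n_units)      # preferred retinal direction (deg)
gain_slope = rng.uniform(-0.02, 0.02, n_units)    # eye-position gain per degree

def population_response(retinal_deg, eye_deg, sigma=15.0):
    tuning = np.exp(-0.5 * ((retinal_deg - pref_retinal) / sigma) ** 2)
    gain = np.clip(1.0 + gain_slope * eye_deg, 0.0, None)   # eye-position gain field
    return tuning * gain

# Fit a linear readout of target-in-space from the gain-modulated population.
X, y = [], []
for _ in range(2000):
    eye = rng.uniform(-30, 30)
    target_space = rng.uniform(-30, 30)
    X.append(population_response(target_space - eye, eye))
    y.append(target_space)
weights, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)

eye, target_space = 20.0, 5.0
decoded = population_response(target_space - eye, eye) @ weights
print(f"decoded target in space: {decoded:.1f} deg (true: {target_space:.1f} deg)")
```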
The visuomotor transformation could be coded implicitly through a distributed population mechanism (Pouget & Sejnowski, 1997; Salinas & Abbott, 2001), as has been proposed for 3-D eye movements (Smith & Crawford, 2005). This could explain the large networks that were found to be active during reaching movements (but not saccades) in functional magnetic resonance imaging studies (Connolly, Andersen, & Goodale, 2003; Diedrichsen, Hashambhoy, Rane, & Shadmehr, 2005; Medendorp et al., 2005; Medendorp, Goltz, et al., 2003). Indeed, the role of many of those activated areas remains unknown. If they are indeed involved in the computation of the complete 3-D visuomotor transformation, we would expect neurons in these areas to show modulation for a series of signals related to the encoding of hand/target direction and distance. 
Our model may aid in explaining reaching impairments that have been shown in neurologically damaged patients with lesions to the PPC (Battaglia-Mayer & Caminiti, 2002; Khan, Pisella, Rossetti, et al., 2005; Khan, Pisella, Vighetto, et al., 2005; Perenin & Vighetto, 1988). In these studies, patients made systematic reaching errors toward targets presented in the hemi-field contralateral to the damage. Khan, Pisella, Rossetti, et al. (2005) and Khan, Pisella, Vighetto, et al. (2005) showed that the errors were due to a damaged visuomotor transformation process and were unrelated to the visual memory of the targets; that is, these patients could accurately update memorized target positions through saccades, even if the target was presented in the contralesional hemi-field. Our model predicts that the combination of retinal target position and extraretinal eye–head orientation signals in the visuomotor transformation process is impaired in these cases where the reach target is represented in the damaged hemisphere; hence, we expect reaching errors to be biased toward a version of the model that ignores extraretinal eye–head positions. 
More global error patterns would occur in patients with cerebellar damage (Battaglia et al., 2006; Haggard, Jenner, & Wing, 1994; Ranalli & Sharpe, 1986), which would prevent extraretinal signals about eye–head orientation from reaching the visuomotor transformation pathway in the parietal cortex. Similar global error patterns are expected in degenerative diseases involving the PPC; for example, Alzheimer's disease is known to affect the visuomotor transformation (Ghilardi et al., 1999, 2000; Tippett & Sergio, 2006). Finally, we predict that very young subjects (Konczak, 2004; Konczak & Dichgans, 1997) and especially patients with global or motor learning disabilities (McCloskey, 2004) will show some characteristics predicted by a version of the model without translation and/or rotation. In these cases, the brain might not (yet) have learned the visuomotor transformation for reaching in its whole complexity. Note that the errors simulated here when ignoring extraretinal signals represented the lack of several rotational transformations; in these various patient populations there may be a lack of certain specific components, and such errors can be predicted by simulating specific forms of our model. These cases of brain damage and developmental disorders provide an interesting field of application for our model. 
Table 1 summarizes our predictions of the possible effects or deficits of several neuropathologies on the early visuomotor transformations described by our model. In most of these cases, we do not expect simple effects—like completely missing parameters—but rather that the precision or accuracy of certain variables will be degraded, for example, the head position signal might be erroneous or very noisy in the case of damage to the vestibular system. 
Table 1
 
Predicted consequences of certain pathologies on the visuomotor transformation.
Pathology: Deficit/predicted effect
Damage to vestibular system: Head orientation signals missing or incorrect
Strabismus: Inaccurate eye position efference copy
Cerebellar patients: Degraded efference copy signal for eye and/or head position
PPC damage: Position-dependent visuomotor transformations affected (hemi-field effects if unilateral damage)
Alzheimer's disease or other degenerative diseases involving PPC: Increased noise in various parts of the visuomotor transformation
Motor learning disorders: Poorly calibrated visuomotor transformation
Finally, recent research has attempted to directly use signals from neural recordings in the parietal cortex to drive a prosthetic arm in people with paralysis or who have brain damage (Chapin, 2004; Musallam, Corneil, Greger, Scherberger, & Andersen, 2004; Schwartz, 2004). In the case of a gaze-centered encoding of hand and target position in PPC, our data show that in order to compute an accurate “prosthesis-centered” motor command (i.e., a shoulder-centered motor plan), it is necessary to account for current eye and head position. Therefore, a control algorithm for neural prosthetic devices needs to implement a model of the early visuomotor transformation that includes the complete eye–head geometry and will need to receive eye/head signals either intrinsically, from the recorded brain area itself (Brotchie et al., 1995; Snyder, Grieve, Brotchie, & Andersen, 1998), or extrinsically, that is, from eye position recordings. 
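A minimal sketch of this point (ours; the axis convention, rotation angles, and the 20-cm command are arbitrary example values, and a real controller would additionally need the translational offsets when transforming absolute hand and target positions) shows how a gaze-centered movement command changes once eye and head rotations are applied:

```python
# Illustrative only: rotate a gaze-centered movement vector through hypothetical
# eye-in-head and head-on-shoulder orientations. Axes: x = right, y = up,
# z = forward; roty() is a rotation about the vertical axis (horizontal gaze).
import numpy as np

def roty(deg):
    a = np.radians(deg)
    return np.array([[ np.cos(a), 0.0, np.sin(a)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

def shoulder_command(gaze_centered_vec, R_eye_in_head, R_head_on_shoulder):
    # Movement *vectors* are only rotated; the eye/head offsets matter when
    # absolute positions (hand, target) are transformed, as in the full model.
    return R_head_on_shoulder @ (R_eye_in_head @ gaze_centered_vec)

v_gaze = np.array([0.20, 0.0, 0.0])   # 20 cm "rightward on the retina"
print("eyes/head straight:      ", shoulder_command(v_gaze, roty(0),  roty(0)).round(3))
print("eye 30 deg, head 20 deg: ", shoulder_command(v_gaze, roty(30), roty(20)).round(3))
# With eccentric eye and head orientation, the rightward gaze-centered command
# acquires a depth (z) component in the shoulder frame; ignoring the eye/head
# signals would therefore send the wrong command to the prosthesis.
```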
Conclusions
In summary, we have developed a model that describes the complete 3-D properties of the early visuomotor transformation for reaching. Extraretinal eye and head position signals are used in an internal model of the body geometry to compute an accurate reach plan. The behavioral reaching experiment confirmed that the brain uses extraretinal eye and head rotation signals, as well as an internal model of the linkage structure, in a geometrically accurate manner. The experimental data supported a feed-forward computation of the desired reach vector that was evident even for the initial component of the reach trajectory, before feedback was available. We make suggestions about how and where this could be implemented in the brain. This paper provides a framework for future work uncovering the neurophysiology of this complex 3-D reference frame transformation that our brain constantly performs in everyday life. 
Supplementary Materials
Figure S1. Comparison between detailed model predictions and data. The figure shows the same representation as Figure 4 in the article, but one panel is shown for each error component along all three cardinal directions. Gray dots represent the experimental data. Black squares and lines show the mean and SD for each gaze or head-roll angle. A–C. Oblique gaze condition with head fixed upright. D–F. The same condition but with the head unrestrained. G–I. Straight-ahead gaze with different head-roll angles. 
Appendix A
A dual quaternion \(\hat{Q}\) can be written as the sum of two quaternions, Q and Q_0, of which one is multiplied by a duality operator ε, that is, \(\hat{Q} = Q + \varepsilon Q_0\), where Q describes the rotational component and Q_0 implements the translation operation. A dual quaternion can also be represented as an eight-dimensional vector, that is,
\[
\hat{Q} = \begin{bmatrix} Q \\ Q_0 \end{bmatrix}. \tag{A1}
\]
For a rotation by θ around the axis \(\vec{r}\) applied at \(\vec{a}\) and a translation d along \(\vec{r}\), the dual quaternion components are
\[
Q = \begin{bmatrix} \cos\frac{\theta}{2} \\ \vec{r}\,\sin\frac{\theta}{2} \end{bmatrix}, \tag{A2}
\]
and
\[
Q_0 = \begin{bmatrix} \frac{d}{2}\,\sin\frac{\theta}{2} \\ \frac{d}{2}\,\vec{r}\,\cos\frac{\theta}{2} + \left(\vec{r}\times\vec{a}\right)\sin\frac{\theta}{2} \end{bmatrix}. \tag{A3}
\]
A chain of translations and rotations can be expressed as the dual quaternion product
\[
\hat{Q} = \prod_i \hat{Q}_i. \tag{A4}
\]
Dual quaternion multiplication has the following property:
\[
\hat{A}\hat{B} = (A + \varepsilon A_0)(B + \varepsilon B_0) = AB + \varepsilon\,(A_0 B + A B_0), \tag{A5}
\]
where XY denotes simple quaternion multiplication (ε² = 0). Simple quaternion algebra for rotations has been described elsewhere (Haslwanter, 1995; Tait, 1890; see Supplementary Methods). A point in space \(\vec{p}\), represented by the dual quaternion \(\hat{P} = [1\;0\;0\;0\;0\;\vec{p}^{\,T}]^T\), can then be transformed with \(\hat{Q}\) into another frame of reference, that is, \(\hat{P}' = \hat{Q}\,\hat{P}\,\hat{Q}^c\), where the dual quaternion conjugate is \(\hat{Q}^c = Q^c - \varepsilon Q_0^c\) with the quaternion conjugate \(Q^c\). Using this formalism, we can describe the complete 3-D linkage between the eyes and the shoulder. 
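For concreteness, the operations above can be written down in a few lines of code. The following sketch (ours, not the implementation used in this study) stores a quaternion as [w, x, y, z] and a dual quaternion as a (real, dual) pair; the constructor uses a rotation-about-an-axis-through-the-origin-then-translation form, an equivalent alternative to the screw parameters of Equations A2 and A3, and all numerical values in the example are hypothetical.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    aw, av = a[0], a[1:]
    bw, bv = b[0], b[1:]
    return np.concatenate(([aw * bw - av @ bv], aw * bv + bw * av + np.cross(av, bv)))

def qconj(q):
    """Quaternion conjugate."""
    return np.concatenate(([q[0]], -q[1:]))

def dq_mul(A, B):
    """Dual quaternion product (Equation A5); a dual quaternion is a (real, dual) pair."""
    Ar, Ad = A
    Br, Bd = B
    return (qmul(Ar, Br), qmul(Ar, Bd) + qmul(Ad, Br))

def dq_rot_then_trans(axis, angle, translation):
    """Dual quaternion for a rotation about a unit axis through the origin,
    followed by a translation (equivalent alternative to the screw form above)."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    qr = np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))
    t = np.concatenate(([0.0], np.asarray(translation, float)))
    return (qr, 0.5 * qmul(t, qr))

def dq_transform_point(Q, p):
    """Transform a point p via P' = Q P Q^c, with Q^c = Q^c - eps * Q_0^c."""
    P = (np.array([1.0, 0.0, 0.0, 0.0]), np.concatenate(([0.0], np.asarray(p, float))))
    Qc = (qconj(Q[0]), -qconj(Q[1]))
    _, dual = dq_mul(dq_mul(Q, P), Qc)
    return dual[1:]

# Example with hypothetical numbers: a 30 deg rotation about the vertical axis
# combined with a 10 cm offset, applied to a point 30 cm straight ahead.
step = dq_rot_then_trans(axis=[0.0, 1.0, 0.0], angle=np.radians(30.0), translation=[0.0, 0.0, 0.10])
print(dq_transform_point(step, [0.0, 0.0, 0.30]).round(3))
```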
Clifford's dual quaternion formalism is only one of a number of formalisms describing translations and rotations in a unified framework. Other alternative methods include Euler rotations and the homogeneous coordinates formalism with the Denavit–Hartenberg convention used in robotic control. The choice of the actual mathematical framework is not crucial, and the same results can be obtained using these alternative methods. However, the major advantage of dual quaternions is that they are based on the quaternion description of rotations. Using quaternions rather than, say, Euler rotations, we avoid mathematical problems such as the sequence dependence of rotations around different axes or the loss of degrees of freedom when some rotation axes align, a phenomenon known as gimbal lock. We would, however, like to emphasize that we do not claim that the brain actually uses dual quaternions. 
Acknowledgments
We would like to thank Dr. X. Yan and S. Sun for technical support and Drs. A. Z. Khan and D. Y. P. Henriques for helpful comments on the manuscript. This work was supported by the Canadian Institutes of Health Research (CIHR). GB is supported by a Marie Curie International fellowship within the 6th European Community Framework Program, FSR (UCL, Belgium), and CIHR (Canada). JDC holds a Canada Research Chair. 
Commercial relationships: none. 
Corresponding author: J. Douglas Crawford. 
Email: jdc@yorku.ca. 
Address: York University-Centre for Vision Research, 4700 Keele Street, Toronto, Ontario, Canada M3J 1P3. 
References
Andersen, R. A. Buneo, C. A. (2002). Intentional maps in posterior parietal cortex. Annual Review of Neuroscience, 25, 189–220. [PubMed] [CrossRef] [PubMed]
Angeles, J. (1998). The application of dual algebra to kinematic analysis. In J. Angeles & E. Zakhariev (Eds.), Computational methods in mechanisms (Vol. 161, pp. 3–31). Heidelberg: Springer-Verlag.
Aspragathos, N. A. Dimitros, J. K. (1998). A comparative study of three methods for robot kinematics. IEEE Transactions on Systems, Man, and Cybernetics Part B, 28, 135–145. [CrossRef]
Batista, A. P. Buneo, C. A. Snyder, L. H. Andersen, R. A. (1999). Reach plans in eye-centered coordinates. Science, 285, 257–260. [PubMed] [CrossRef] [PubMed]
Battaglia, F. Quartarone, A. Ghilardi, M. F. Dattola, R. Bagnato, S. Rizzo, V. (2006). Unilateral cerebellar stroke disrupts movement preparation and motor imagery. Clinical Neurophysiology, 117, 1009–1016. [PubMed] [CrossRef] [PubMed]
Battaglia-Mayer, A. Caminiti, R. (2002). Optic ataxia as a result of the breakdown of the global tuning fields of parietal neurones. Brain, 125, 225–237. [PubMed] [Article] [CrossRef] [PubMed]
Battaglia-Mayer, A. Caminiti, R. Lacquaniti, F. Zago, M. (2003). Multiple levels of representation of reaching in the parieto-frontal network. Cerebral Cortex, 13, 1009–1022. [PubMed] [Article] [CrossRef] [PubMed]
Blohm, G. Crawford, J. D. (2006). Egocentric distance estimation requires eye–head position signals [Abstract]. Journal of Vision, 6, (6):734, [CrossRef]
Blohm, G. Keith, G. P. Crawford, J. D. (2006). A possible neural basis of the 3D reference frame transformation for reaching.
Blohm, G. Keith, G. P. Crawford, J. D. (2007). The 3-D visuomotor transformation of reaching depth in a neural network model.
Bock, O. (1986). Contribution of retinal versus extraretinal signals towards visual localization in goal-directed movements. Experimental Brain Research, 64, 476–482. [PubMed] [CrossRef] [PubMed]
Bockisch, C. J. Haslwanter, T. (2001). Three-dimensional eye position during static roll and pitch in humans. Vision Research, 41, 2127–2137. [PubMed] [CrossRef] [PubMed]
Boussaoud, D. Jouffrais, C. Bremmer, F. (1998). Eye position effects on the neuronal activity of dorsal premotor cortex in the macaque monkey. Journal of Neurophysiology, 80, 1132–1150. [PubMed] [Article] [PubMed]
Brotchie, P. R. Andersen, R. A. Snyder, L. H. Goodman, S. J. (1995). Head position signals used by parietal neurons to encode locations of visual stimuli. Nature, 375, 232–235. [PubMed] [CrossRef] [PubMed]
Buneo, C. A. Jarvis, M. R. Batista, A. P. Andersen, R. A. (2002). Direct visuomotor transformations for reaching. Nature, 416, 632–636. [PubMed] [CrossRef] [PubMed]
Calton, J. L. Dickinson, A. R. Snyder, L. H. (2002). Non-spatial, motor-specific activation in posterior parietal cortex. Nature Neuroscience, 5, 580–588. [PubMed] [CrossRef] [PubMed]
Caminiti, R. Johnson, P. B. Galli, C. Ferraina, S. Burnod, Y. (1991). Making arm movements within different parts of space: The premotor and motor cortical representation of a coordinate system for reaching to visual targets. Journal of Neuroscience, 11, 1182–1197. [PubMed] [Article] [PubMed]
Chapin, J. K. (2004). Using multi-neuron population recordings for neural prosthetics. Nature Neuroscience, 7, 452–455. [PubMed] [Article] [CrossRef] [PubMed]
Clifford, W. (1873). Preliminary sketch of bi-quaternions. Proceedings of the London Mathematical Society, 4, 381–395.
Collewijn, H. Erkelens, C. J. (1990). Binocular eye movements and the perception of depth. Reviews of Oculomotor Research, 4, 213–261. [PubMed] [PubMed]
Connolly, J. D. Andersen, R. A. Goodale, M. A. (2003). FMRI evidence for a ‘parietal reach region’ in the human brain. Experimental Brain Research, 153, 140–145. [PubMed] [CrossRef] [PubMed]
Crammond, D. J. Kalaska, J. F. (1996). Differential relation of discharge in primary motor cortex and premotor cortex to movements versus actively maintained postures during a reaching task. Experimental Brain Research, 108, 45–61. [PubMed] [CrossRef] [PubMed]
Crawford, J. D. Guitton, D. (1997). Visual–motor transformations required for accurate and kinematically correct saccades. Journal of Neurophysiology, 78, 1447–1467. [PubMed] [Article] [PubMed]
Crawford, J. D. Henriques, D. Y. Vilis, T. (2000). Curvature of visual space under vertical eye rotation: Implications for spatial vision and visuomotor control. Journal of Neuroscience, 20, 2360–2368. [PubMed] [Article] [PubMed]
Crawford, J. D. Martinez-Trujillo, J. C. Klier, E. M. (2003). Neural control of three-dimensional eye and head movements. Current Opinion in Neurobiology, 13, 655–662. [PubMed] [CrossRef] [PubMed]
Crawford, J. D. Medendorp, W. P. Marotta, J. J. (2004). Spatial transformations for eye–hand coordination. Journal of Neurophysiology, 92, 10–19. [PubMed] [Article] [CrossRef] [PubMed]
Davidson, P. R. Wolpert, D. M. (2003). Motor learning and prediction in a variable environment. Current Opinion in Neurobiology, 13, 232–237. [PubMed] [CrossRef] [PubMed]
Desmurget, M. Epstein, C. M. Turner, R. S. Prablanc, C. Alexander, G. E. Grafton, S. T. (1999). Role of the posterior parietal cortex in updating reaching movements to a visual target. Nature Neuroscience, 2, 563–567. [PubMed] [Article] [CrossRef] [PubMed]
Desmurget, M. Grafton, S. (2000). Forward modeling allows feedback control for fast reaching movements. Trends in Cognitive Science, 4, 423–431. [PubMed] [CrossRef]
Diedrichsen, J. Hashambhoy, Y. Rane, T. Shadmehr, R. (2005). Neural correlates of reach errors. Journal of Neuroscience, 25, 9919–9931. [PubMed] [Article] [CrossRef] [PubMed]
Dieterich, M. Glasauer, S. Brandt, T. (2003). Mathematical model predicts clinical ocular motor syndromes. Annals of the New York Academy of Sciences, 1004, 142–157. [PubMed] [CrossRef] [PubMed]
Ding, J. Sperling, G. (2006). A gain-control theory of binocular combination. Proceedings of the National Academy of Sciences of the United States of America, 103, 1141–1146. [PubMed] [Article] [CrossRef] [PubMed]
Donders, F. C. (1848). Beiträge zur Lehre von den Bewegungen des menschlichen Auges. Holländische Beiträge Anat. Physiol. Wiss., 1, 104–145.
Duhamel, J. R. Bremmer, F. BenHamed, S. Graf, W. (1997). Spatial invariance of visual receptive fields in parietal cortex neurons. Nature, 389, 845–848. [PubMed] [CrossRef] [PubMed]
Enright, J. T. (1995). The non-visual impact of eye orientation on eye–hand coordination. Vision Research, 35, 1611–1618. [PubMed] [CrossRef] [PubMed]
Fogassi, L. Gallese, V. Fadiga, L. Luppino, G. Matelli, M. Rizzolatti, G. (1996). Coding of peripersonal space in inferior premotor cortex (area F4). Journal of Neurophysiology, 76, 141–157. [PubMed] [PubMed]
Gellman, R. S. Fletcher, W. A. (1992). Eye position signals in human saccadic processing. Experimental Brain Research, 89, 425–434. [PubMed] [CrossRef] [PubMed]
Genovesio, A. Ferraina, S. (2004). Integration of retinal disparity and fixation-distance related signals toward an egocentric coding of distance in the posterior parietal cortex of primates. Journal of Neurophysiology, 91, 2670–2684. [PubMed] [Article] [CrossRef] [PubMed]
Ghilardi, M. F. Alberoni, M. Marelli, S. Rossi, M. Franceschi, M. Ghez, C. (1999). Impaired movement control in Alzheimer's disease. Neuroscience Letters, 260, 45–48. [PubMed] [CrossRef] [PubMed]
Ghilardi, M. F. Alberoni, M. Rossi, M. Franceschi, M. Mariani, C. Fazio, F. (2000). Visual feedback has differential effects on reaching movements in Parkinson's and Alzheimer's disease. Brain Research, 876, 112–123. [PubMed] [CrossRef] [PubMed]
Glasauer, S. (2003). Cerebellar contribution to saccades and gaze holding: A modeling approach. Annals of the New York Academy of Sciences, 1004, 206–219. [PubMed] [CrossRef] [PubMed]
Glasauer, S. Dieterich, M. Brandt, T. (1999). Simulation of pathological ocular counter-roll and skew-torsion by a 3-D mathematical model. Neuroreport, 10, 1843–1848. [PubMed] [CrossRef] [PubMed]
Glenn, B. Vilis, T. (1992). Violations of Listing's law after large eye and head gaze shifts. Journal of Neurophysiology, 68, 309–318. [PubMed] [PubMed]
Gnadt, J. W. Mays, L. E. (1995). Neurons in monkey parietal area LIP are tuned for eye-movement parameters in three-dimensional space. Journal of Neurophysiology, 73, 280–297. [PubMed] [PubMed]
Graziano, M. S. Hu, X. T. Gross, C. G. (1997). Visuospatial properties of ventral premotor cortex. Journal of Neurophysiology, 77, 2268–2292. [PubMed] [Article] [PubMed]
Grea, H. Pisella, L. Rossetti, Y. Desmurget, M. Tilikete, C. Grafton, S. (2002). A lesion of the posterior parietal cortex disrupts on-line adjustments during aiming movements. Neuropsychologia, 40, 2471–2480. [PubMed] [CrossRef] [PubMed]
Haggard, P. Jenner, J. Wing, A. (1994). Coordination of aimed movements in a case of unilateral cerebellar damage. Neuropsychologia, 32, 827–846. [PubMed] [CrossRef] [PubMed]
Hamilton, A. F. Wolpert, D. M. (2002). Controlling the statistics of action: Obstacle avoidance. Journal of Neurophysiology, 87, 2434–2440. [PubMed] [Article] [PubMed]
Haslwanter, T. (1995). Mathematics of three-dimensional eye rotations. Vision Research, 35, 1727–1739. [PubMed] [CrossRef] [PubMed]
Haslwanter, T. Straumann, D. Hess, B. J. Henn, V. (1992). Static roll and pitch in the monkey: Shift and rotation of Listing's plane. Vision Research, 32, 1341–1348. [PubMed] [CrossRef] [PubMed]
Henriques, D. Y. Crawford, J. D. (2000). Direction-dependent distortions of retinocentric space in the visuomotor transformation for pointing. Experimental Brain Research, 132, 179–194. [PubMed] [CrossRef] [PubMed]
Henriques, D. Y. Crawford, J. D. (2002). Role of eye, head, and shoulder geometry in the planning of accurate arm movements. Journal of Neurophysiology, 87, 1677–1685. [PubMed] [Article] [PubMed]
Henriques, D. Y. Klier, E. M. Smith, M. A. Lowy, D. Crawford, J. D. (1998). Gaze-centered remapping of remembered visual space in an open-loop pointing task. Journal of Neuroscience, 18, 1583–1594. [PubMed] [Article] [PubMed]
Henriques, D. Y. Medendorp, W. P. Gielen, C. C. Crawford, J. D. (2003). Geometric computations underlying eye–hand coordination: Orientations of the two eyes and the head. Experimental Brain Research, 152, 70–78. [PubMed] [CrossRef] [PubMed]
Hepp, K. (1990). On Listing's law. Communications on Mathematical Physics, 132, 285–292. [CrossRef]
Juttler, B. (1994). Visualization of moving objects using dual quaternion curves. Comp Graph, 18, 315–326. [CrossRef]
Kakei, S. Hoffman, D. S. Strick, P. L. (2001). Direction of action is represented in the ventral premotor cortex. Nature Neuroscience, 4, 1020–1025. [PubMed] [Article] [CrossRef] [PubMed]
Kakei, S. Hoffman, D. S. Strick, P. L. (2003). Sensorimotor transformations in cortical motor areas. Neuroscience Research, 46, 1–10. [PubMed] [CrossRef] [PubMed]
Kalaska, J. F. Scott, S. H. Cisek, P. Sergio, L. E. (1997). Cortical control of reaching movements. Current Opinion in Neurobiology, 7, 849–859. [PubMed] [CrossRef] [PubMed]
Khan, A. Z. Pisella, L. (in press). Journal of Vision.
Khan, A. Z. Pisella, L. Rossetti, Y. Vighetto, A. Crawford, J. D. (2005). Impairment of gaze-centered updating of reach targets in bilateral parietal–occipital damaged patients. Cerebral Cortex, 15, 1547–1560. [PubMed] [Article] [CrossRef] [PubMed]
Khan, A. Z. Pisella, L. Vighetto, A. Cotton, F. Luaute, J. Boisson, D. (2005). Optic ataxia errors depend on remapped, not viewed, target location. Nature Neuroscience, 8, 418–420. [PubMed] [PubMed]
Khokhotva, M. Ono, H. Mapp, A. P. (2005). The cyclopean eye is relevant for predicting visual direction. Vision Research, 45, 2339–2345. [PubMed] [CrossRef] [PubMed]
Klier, E. M. Wang, H. Crawford, J. D. (2001). The superior colliculus encodes gaze commands in retinal coordinates. Nature Neuroscience, 4, 627–632. [PubMed] [Article] [CrossRef] [PubMed]
Konczak, J. (2004). Neural development and sensorimotor control. Lund University Cognitive Studies, 117, 11–14.
Konczak, J. Dichgans, J. (1997). The development of hand trajectory formation and joint kinematics during reaching in infancy. In M. Fetter, T. Haslwanter, H. Misslisch, & D. Tweed (Eds.), Three-dimensional kinematics of eye, head, and limb movements (pp. 313–318). Amsterdam: Harwood Academic Publishers.
Lewis, R. F. Gaymard, B. M. Tamargo, R. J. (1998). Efference copy provides the eye position information required for visually guided reaching. Journal of Neurophysiology, 80, 1605–1608. [PubMed] [Article] [PubMed]
Li, N. Angelaki, D. E. (2005). Updating visual space during motion in depth. Neuron, 48, 149–158. [PubMed] [Article] [CrossRef] [PubMed]
Li, W. Matin, L. (1992). Visual direction is corrected by a hybrid extraretinal eye position signal. Annals of the New York Academy of Sciences, 656, 865–867. [PubMed] [CrossRef] [PubMed]
Mayhew, J. E. Longuet-Higgins, H. C. (1982). A computational model of binocular depth perception. Nature, 297, 376–378. [PubMed] [CrossRef] [PubMed]
McCloskey, M. (2004). Spatial representations and multiple-visual-systems hypotheses: Evidence from a developmental deficit in visual location and orientation processing. Cortex, 40, 677–694. [PubMed] [CrossRef] [PubMed]
McIntyre, J. Stratta, F. Lacquaniti, F. (1997). Viewer-centered frame of reference for pointing to memorized targets in three-dimensional space. Journal of Neurophysiology, 78, 1601–1618. [PubMed] [Article] [PubMed]
McIntyre, J. Stratta, F. Lacquaniti, F. (1998). Short-term memory for reaching to visual targets: Psychophysical evidence for body-centered reference frames. Journal of Neuroscience, 18, 8423–8435. [PubMed] [Article] [PubMed]
Medendorp, W. P. Crawford, J. D. (2002). Visuospatial updating of reaching targets in near and far space. Neuroreport, 13, 633–636. [PubMed] [CrossRef] [PubMed]
Medendorp, W. P. Goltz, H. C. Vilis, T. (2005). Remapping the remembered target location for anti-saccades in human posterior parietal cortex. Journal of Neurophysiology, 94, 734–740. [PubMed] [Article] [CrossRef] [PubMed]
Medendorp, W. P. Goltz, H. C. Vilis, T. Crawford, J. D. (2003). Gaze-centered updating of visual space in human parietal cortex. Journal of Neuroscience, 23, 6209–6214. [PubMed] [Article] [PubMed]
Medendorp, W. P. Tweed, D. B. Crawford, J. D. (2003). Motion parallax is computed in the updating of human spatial memory. Journal of Neuroscience, 23, 8135–8142. [PubMed] [Article] [PubMed]
Medendorp, W. P. Van Gisbergen, J. A. Van Pelt, S. Gielen, C. C. (2000). Context compensation in the vestibuloocular reflex during active head rotations. Journal of Neurophysiology, 84, 2904–2917. [PubMed] [Article] [PubMed]
Merriam, E. P. Genovese, C. R. Colby, C. L. (2003). Spatial updating in human parietal cortex. Neuron, 39, 361–373. [PubMed] [Article] [CrossRef] [PubMed]
Mon-Williams, M. Tresilian, J. R. (1999). Some recent studies on the extraretinal contribution to distance perception. Perception, 28, 167–181. [PubMed] [CrossRef] [PubMed]
Mon-Williams, M. Tresilian, J. R. (2000). Ordinal depth information from accommodation? Ergonomics, 43, 391–404. [PubMed] [CrossRef] [PubMed]
Mon-Williams, M. Tresilian, J. R. Roberts, A. (2000). Vergence provides veridical depth perception from horizontal retinal image disparities. Experimental Brain Research, 133, 407–413. [PubMed] [CrossRef] [PubMed]
Musallam, S. Corneil, B. D. Greger, B. Scherberger, H. Andersen, R. A. (2004). Cognitive control signals for neural prosthetics. Science, 305, 258–262. [PubMed] [CrossRef] [PubMed]
Mushiake, H. Tanatsugu, Y. Tanji, J. (1997). Neuronal activity in the ventral part of premotor cortex during target-reach movement is modulated by direction of gaze. Journal of Neurophysiology, 78, 567–571. [PubMed] [Article] [PubMed]
Ono, H. Barbeito, R. (1982). The cyclopean eye vs the sighting-dominant eye as the center of visual direction. Perception and Psychophysics, 32, 201–210. [PubMed] [CrossRef] [PubMed]
Pelisson, D. Prablanc, C. Goodale, M. A. Jeannerod, M. (1986). Visual control of reaching movements without vision of the limb: II Evidence of fast unconscious processes correcting the trajectory of the hand to the final position of a double-step stimulus. Experimental Brain Research, 62, 303–311. [PubMed] [CrossRef] [PubMed]
Perenin, M. T. Vighetto, A. (1988). Optic ataxia: A specific disruption in visuomotor mechanisms: I Different aspects of the deficit in reaching for objects. Brain, 111, 643–674. [PubMed] [CrossRef] [PubMed]
Perez, A. McCarthy, J. M. (2004). Dual quaternion synthesis of constrained robotic systems. Journal of Mechanical Design, 126, 425–435. [CrossRef]
Pisella, L. Grea, H. Tilikete, C. Vighetto, A. Desmurget, M. Rode, G. (2000). An ‘automatic pilot’ for the hand in human posterior parietal cortex: Toward reinterpreting optic ataxia. Nature Neuroscience, 3, 729–736. [PubMed] [Article] [CrossRef] [PubMed]
Pouget, A. Ducom, J. C. Torri, J. Bavelier, D. (2002). Multisensory spatial representations in eye-centered coordinates for reaching. Cognition, 83, B1–B11. [PubMed] [CrossRef] [PubMed]
Pouget, A. Sejnowski, T. J. (1997). Spatial transformations in the parietal cortex using basis functions. Journal of Cognitive Neuroscience, 9, 222–237. [CrossRef] [PubMed]
Quaia, C. Optican, L. M. (1998). Commutative saccadic generator is sufficient to control a 3-D ocular plant with pulleys. Journal of Neurophysiology, 79, 3197–3215. [PubMed] [Article] [PubMed]
Quaia, C. Optican, L. M. Goldberg, M. E. (1998). The maintenance of spatial accuracy by the perisaccadic remapping of visual receptive fields. Neural Networks, 11, 1229–1240. [PubMed] [CrossRef] [PubMed]
Ranalli, P. J. Sharpe, J. A. (1986). Contrapulsion of saccades and ipsilateral ataxia: A unilateral disorder of the rostral cerebellum. Annals of Neurology, 20, 311–316. [PubMed] [CrossRef] [PubMed]
Richard, W. Miller, J. F. (1969). Convergence as a cue to depth. Perception & Psychophysics, 5, 317–320. [CrossRef]
Ritter, M. (1977). Effect of disparity and viewing distance on perceived depth. Perception & Psychophysics, 22, 400–407. [CrossRef]
Sabes, P. N. (2000). The planning and control of reaching movements. Current Opinion in Neurobiology, 10, 740–746. [PubMed] [CrossRef] [PubMed]
Sabes, P. N. Jordan, M. I. (1997). Obstacle avoidance and a perturbation sensitivity model for motor planning. Journal of Neuroscience, 17, 7119–7128. [PubMed] [Article] [PubMed]
Salinas, E. Abbott, L. F. (1995). Transfer of coded information from sensory to motor networks. Journal of Neuroscience, 15, 6461–6474. [PubMed] [Article] [PubMed]
Salinas, E. Abbott, L. F. (2001). Coordinate transformations in the visual system: How to generate gain fields and what to compute with them. Progress in Brain Research, 130, 175–190. [PubMed] [PubMed]
Schindler, I. Rice, N. J. McIntosh, R. D. Rossetti, Y. Vighetto, A. Milner, A. D. (2004). Automatic avoidance of obstacles is a dorsal stream function: Evidence from optic ataxia. Nature Neuroscience, 7, 779–784. [PubMed] [CrossRef] [PubMed]
Schwartz, A. B. (2004). Cortical neural prosthetics. Annual Review of Neuroscience, 27, 487–507. [PubMed] [CrossRef] [PubMed]
Scott, S. H. (2001). Vision to action: New insights from a flip of the wrist. Nature Neuroscience, 4, 969–970. [PubMed] [Article] [CrossRef] [PubMed]
Scott, S. H. Gribble, P. L. Graham, K. M. Cabel, D. W. (2001). Dissociation between hand motion and population vectors from neural activity in motor cortex. Nature, 413, 161–165. [PubMed] [CrossRef] [PubMed]
Shadmehr, R. (2004). Generalization as a behavioral window to the neural mechanisms of learning internal models. Human Movement Science, 23, 543–568. [PubMed] [CrossRef] [PubMed]
Shadmehr, R. Wise, S. P. (2005). The computational neurobiology of reaching and pointing. Cambridge, MA: MIT Press.
Smith, M. A. Crawford, J. D. (2005). Distributed population mechanism for the 3-D oculomotor reference frame transformation. Journal of Neurophysiology, 93, 1742–1761. [PubMed] [Article] [CrossRef] [PubMed]
Snyder, L. H. (2000). Coordinate transformations for eye and arm movements in the brain. Current Opinion in Neurobiology, 10, 747–754. [PubMed] [CrossRef] [PubMed]
Snyder, L. H. Grieve, K. L. Brotchie, P. Andersen, R. A. (1998). Separate body- and world-referenced representations of visual space in parietal cortex. Nature, 394, 887–891. [PubMed] [CrossRef] [PubMed]
Soechting, J. F. Tillery, S. I. H. Flanders, M. (1991). Transformation from head- to shoulder-centered representation of target direction in arm movements. Journal of Cognitive Neuroscience, 2, 32–43. [CrossRef]
Study, E. (1891). Von den Bewegungen und Umlegungen. Mathematische Annalen, 39, 441–566. [CrossRef]
Tait, P. G. (1890). An elementary treatise on quaternions. Cambridge, UK: Cambridge University Press.
Tippett, W. J. Sergio, L. E. (2006). Visuomotor integration is impaired in early stage Alzheimer's disease. Brain Research, 1102, 92–102. [PubMed] [CrossRef] [PubMed]
Todorov, E. (2000). Direct cortical control of muscle activation in voluntary arm movements: A model. Nature Neuroscience, 3, 391–398. [PubMed] [Article] [CrossRef] [PubMed]
Tweed, D. (1997). Three-dimensional model of the human eye–head saccadic system. Journal of Neurophysiology, 77, 654–666. [PubMed] [Article] [PubMed]
Tweed, D. Haslwanter, T. Fetter, M. (1998). Optimizing gaze control in three dimensions. Science, 281, 1363–1366. [PubMed] [CrossRef] [PubMed]
Vetter, P. Goodbody, S. J. Wolpert, D. M. (1999). Evidence for an eye-centered spherical representation of the visuomotor map. Journal of Neurophysiology, 81, 935–939. [PubMed] [Article] [PubMed]
Viguier, A. Clement, G. Trotter, Y. (2001). Distance perception within near visual space. Perception, 30, 115–124. [PubMed] [CrossRef] [PubMed]
von Helmholtz, H. (1924). Handbuch der physiologischen Optik [Treatise on physiological optics]. New York: The Optical Society of America. (Original work published 1867)
Westheimer, G. (1957). Kinematics of the eye. Journal of the Optical Society of America, 47, 967–974. [PubMed] [CrossRef] [PubMed]
Zipser, D. Andersen, R. A. (1988). A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature, 331, 679–684. [PubMed] [CrossRef] [PubMed]
Figure 1
 
The visuomotor transformation problem. (A) The gaze-centered visual input has to be transformed into a shoulder-centered reach plan taking into account the complete body geometry, including eye and head orientation. (B) Schematic representation of the human brain areas involved in the visuomotor transformation process. VC: visual cortex; PPC: posterior parietal cortex; CS: central sulcus; PCS: precentral sulcus; S1: primary somatosensory area for arm movements (proprioception); M1: primary motor cortex area for arm movements; PMv: ventral premotor cortex; PMd: dorsal premotor cortex.
Figure 2
 
The visuomotor transformation model. The retinal image of the reaching target and the hand is translated and rotated to construct the motor plan. Although rotations need extraretinal information of current 3-D eye and head posture, translations can be performed using learned (constant) lengths of body segments. Green solid arrows: the full model; dotted red arrows: the model ignoring the offset of rotation centers; dashed red arrows: the model ignoring eye–head rotation signals.
Figure 3
 
Geometry. (A) Head rotation translates the eyes in space, which results in different visual and gaze angles, that is, gaze ≠ eye + head angle. Solid orange target lines from the Head = 30° condition are translated into the Head = 0° condition for visualization. (B) Effect of eye rotation on the visuomotor transformation. The same retinal vector corresponds to different spatial motor plans when the eyes change their position. (C) Retinal projection nonlinearity. The same spatial movement vector is translated in oblique directions on the frontoparallel plane in space (e.g., the same spatial movement executed from different initial hand positions). Solid cross: retinal horizontal/vertical; dotted crosses: retinal obliques, 90° visual range. (D) Effect of eye-in-space torsion. Central rotated arrows show the effect of CW (dark green) to CCW (light green) head-roll positions (15° steps). The four 45° projection vectors correspond to the same movement vector but viewed under different eye positions, that is, 45° up-right (ur), up-left (ul), down-right (dr), and down-left (dl) positions.
Figure 4
 
Model predictions for visually guided reaching. In the retinal representation (right column), actual hand and target (disc) positions are compared to compute the retinal (gaze-centered) motor error (solid grey arrow). Solid lines represent the space horizontal and vertical axes. The left column shows the visual environment and the actual body configuration (gaze position: black dotted line). Predictions made by the model ignoring eye–head rotations (dotted red arrow) and the full model (green arrow) are shown. (A) Primary eye and head positions. (B) 35° oblique gaze with upright head position (the model without rotations predicts a depth component of reaching, i.e., as if the target was behind its actual location). (C) 20° rightward head-roll with straight-ahead eye-in-head position.
Figure 5
 
Detailed model predictions for oblique gaze and head roll. Reaching errors as predicted by the model when ignoring extraretinal rotation signals (solid lines are components along each axis, bold lines are the total error) and the full model (dotted green lines) in each direction. (A) Oblique gaze condition with head-restrained straight-ahead. (B) The same condition but with the head moving freely. (C) Straight-ahead gaze with different head-roll angles. The components in all directions and the total error are shown ( x axis: left-to-right; y axis: back-to-front; z axis: down-to-up).
Figure 6. Typical examples of human reaching performance (Subject 4). Left panels show the view from behind the subject; right panels are side views. Blue dots represent 10 different trials in each panel (2 trials to each target; dots are separated by 15 ms). Blue discs depict targets; red arrows are the movement vectors predicted by the model when ignoring rotations. (A) Straight-ahead gaze with the head restrained upright (control). (B) First-quadrant 45° fixation condition with the head restrained upright. (C) The same fixation condition but with the head free to move. (D) Straight-ahead fixation with the head rolled 30° CCW.
Figure 7. Predicted versus observed 3-D compensation in the eye–head geometry. Gray dots: data pooled across all subjects. Red and green lines: predictions of the full model (green, perfect compensation, slope = 1) and of the model ignoring eye–head rotations (red, no compensation, slope = 0). Black dotted line: least-squares fit to the data. 3-D compensation for the head-restrained straight-ahead condition (A), the head-unrestrained condition (B), and the head-roll condition (C).
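The compensation index summarized by the fitted line reduces to the slope of a regression of observed on predicted compensation (slope 1 for the full model, 0 for no compensation). A minimal Python sketch of such a fit, on hypothetical data generated purely for illustration, follows.

```python
# Minimal sketch (hypothetical data, not the paper's analysis code): regress
# observed compensation on the compensation required by the full model; the
# slope indexes how completely the eye-head geometry is accounted for.
import numpy as np

rng = np.random.default_rng(0)

predicted = rng.uniform(-8.0, 8.0, size=200)               # required compensation (cm), hypothetical
observed = 0.95 * predicted + rng.normal(0.0, 1.0, 200)    # near-perfect compensation plus noise

slope, intercept = np.polyfit(predicted, observed, 1)      # least-squares line
print(f"compensation slope = {slope:.2f} (1 = perfect, 0 = none), intercept = {intercept:.2f} cm")
```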
Figure 8. Reaching variability explained by noise in extraretinal signals. We plot the SD of the reaching endpoint for the head-restrained condition as a function of gaze angle (similar to Figure 5) and for all three spatial directions separately (x axis: left–right; y axis: backward–forward; z axis: down–up). (A) Reaching variability as simulated by our model if the extraretinal eye position signal is subject to signal-dependent noise (see text for details). For each eye position, we simulated N = 1,000 reaches. (B) Observed reaching variability across all seven subjects.
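A simulation of the kind described for panel A can be sketched as follows; the noise level, movement vector, and eccentricity-proportional noise model are assumed values for illustration, not the parameters used in the paper.

```python
# Minimal Monte Carlo sketch: perturb the extraretinal eye-position signal with
# signal-dependent (eccentricity-proportional) noise on every trial, apply the
# perturbed rotation to a fixed gaze-centered movement vector, and take the SD
# of the resulting endpoints along each spatial axis.
import numpy as np

rng = np.random.default_rng(1)

def rot_about_z(angle_deg):
    a = np.radians(angle_deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

retinal_error = np.array([0.10, 0.00, 0.05])   # gaze-centered movement vector (m), assumed
NOISE_FRACTION = 0.05                          # assumed: noise SD = 5% of eye eccentricity
N_TRIALS = 1000                                # as in the figure caption

for gaze_deg in (0.0, 15.0, 30.0, 45.0):
    noisy_gaze = gaze_deg + rng.normal(0.0, NOISE_FRACTION * abs(gaze_deg), N_TRIALS)
    endpoints = np.array([rot_about_z(g) @ retinal_error for g in noisy_gaze])
    sd_x, sd_y, sd_z = endpoints.std(axis=0) * 100.0       # cm
    print(f"gaze {gaze_deg:4.1f} deg: endpoint SD (x, y, z) = "
          f"{sd_x:.2f}, {sd_y:.2f}, {sd_z:.2f} cm")
# Endpoint scatter grows with gaze eccentricity because the same angular noise
# rotates the planned vector further, the signature compared with panel B.
```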
Figure 9. Predicted versus observed errors when ignoring the translation between rotation centers. (A) Predicted errors for the head-roll condition as a function of head-roll angle if the translation between the eye and head rotation centers is ignored. (B) Observed error parallel to the predicted error, plotted as a function of the error predicted when the offset between rotation centers is ignored. Data from all subjects pooled into 2-cm bins (mean ± SE). Translation included: green; ignored: red. Dotted black lines and gray area: mean and SD of the remaining (perpendicular, ⊥) error. (C) Error predictions for the head-unrestrained gaze condition as a function of gaze angle, using Donders' strategy for the head contribution to the gaze shift. (D) Same representation as in panel B, but for the head-unrestrained condition.
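The error predicted in panel A arises because head roll also translates the eyes, which sit away from the head's roll axis. The following minimal Python sketch (assumed eye-to-head-center offset) computes the eye displacement that is neglected when this translation is ignored, which approximates the predicted reach error.

```python
# Minimal sketch with an assumed head geometry: if the transformation applies
# the correct rotations but ignores that head roll translates the eyes, the
# reconstructed target (and hence the reach) is shifted by roughly the eye
# displacement the roll produced. Axes: x left-right, y back-front, z down-up.
import numpy as np

EYE_RE_HEAD_CENTER = np.array([0.00, 0.07, 0.10])  # assumed: 7 cm forward, 10 cm above the roll axis (m)

def roll_about_y(angle_deg):
    """Head roll = rotation about the forward-pointing y axis."""
    a = np.radians(angle_deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

for roll in (-30.0, -15.0, 0.0, 15.0, 30.0):
    eye_true = roll_about_y(roll) @ EYE_RE_HEAD_CENTER  # where the eye really is
    eye_assumed = EYE_RE_HEAD_CENTER                    # where it is if translation is ignored
    err = (eye_true - eye_assumed) * 100.0              # approximate reach error (cm)
    print(f"head roll {roll:+5.1f} deg: predicted error (x, y, z) ~ "
          f"{err[0]:+5.2f}, {err[1]:+5.2f}, {err[2]:+5.2f} cm")
# Roll-dependent errors of a few centimeters along x and z are the kind of
# prediction plotted in panel A and compared against the data in panel B.
```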
Figure 10. Initial movement direction, presented in the same manner as in Figure 7 but plotted as relative directions. The measured initial movement angle relative to the model ignoring rotations is plotted as a function of the movement angle predicted by the full model (see Methods section).
Table 1. Predicted consequences of certain pathologies on the visuomotor transformation.

Pathology – Deficit/predicted effect
Damage to the vestibular system – Head orientation signals missing or incorrect
Strabismus – Inaccurate eye position efference copy
Cerebellar damage – Degraded efference copy signal for eye and/or head position
PPC damage – Position-dependent visuomotor transformations affected (hemi-field effects if damage is unilateral)
Alzheimer's disease or other degenerative diseases involving PPC – Increased noise in various parts of the visuomotor transformation
Motor learning disorders – Poorly calibrated visuomotor transformation