A fundamental question in neuroscience is how the brain transforms visual signals into accurate three-dimensional (3-D) reach commands, but surprisingly this has never been formally modeled. Here, we developed such a model and tested its predictions experimentally in humans. Our visuomotor transformation model used visual information about current hand and desired target positions to compute the visual (gaze-centered) desired movement vector. It then transformed these eye-centered plans into shoulder-centered motor plans using extraretinal eye and head position signals accounting for the complete 3-D eye-in-head and head-on-shoulder geometry (i.e., translation and rotation). We compared actual memory-guided reaching performance to the predictions of the model. By removing extraretinal signals (i.e., eye–head rotations and the offset between the centers of rotation of the eye and head) from the model, we developed a compensation index describing how accurately the brain performs the 3-D visuomotor transformation for different head-restrained and head-unrestrained gaze positions as well as for eye and head roll. Overall, subjects did not show the errors predicted when extraretinal signals were ignored. Their reaching performance was accurate, and the compensation index revealed that subjects accounted for the 3-D visuomotor transformation geometry. This was also the case for the initial portion of the movement (before proprioceptive feedback), indicating that the desired reach plan is computed in a feed-forward fashion. These findings show that the visuomotor transformation for reaching implements an internal model of the complete eye-to-shoulder linkage geometry and does not rely solely on feedback control mechanisms. We discuss the relevance of this model in predicting reaching behavior in several patient groups.

Each link transformation *Q̂*_{i} can be written as a rotation of angle *θ*_{i} around the rotation axis **n**_{i} (applied at point **p**_{i}) and a translation of length *d*_{i} along **n**_{i}:

*Q̂*_{i} = *Q*_{i} + *ɛQ*_{0,i}, with the rotation part *Q*_{i} = (cos(*θ*_{i}/2), sin(*θ*_{i}/2) **n**_{i}) and the dual part *Q*_{0,i} encoding the translation *d*_{i}**n**_{i}.

Here, *d*_{i} corresponds to the offset of the centers of rotation between eyes, head, neck, and shoulder. The detailed structure of the dual quaternion operator is described in the Appendix. These computations allowed us to build any reference frame transformation as a combination of translations and rotations of body segments. A chain of serial transformations was performed by dual quaternion multiplication. As a result, we computed the complete linkage geometry as the dual quaternion product of the individual translations and rotations over all body links *i*:

*Q̂* = *Q̂*_{N} … *Q̂*_{2} *Q̂*_{1}
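As an illustrative numerical sketch of this chaining (the helper names and the example link angles and offsets are ours, not the paper's), each link can be built as a screw motion (rotation plus translation along one axis) and composed by dual quaternion multiplication:

```python
import numpy as np

def qmult(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def screw(theta, axis, d):
    """Dual quaternion (Q, Q0) for a rotation of angle theta about 'axis'
    plus a translation of length d along that same axis (a screw motion)."""
    n = np.asarray(axis, float)
    n /= np.linalg.norm(n)
    Q = np.concatenate(([np.cos(theta / 2)], np.sin(theta / 2) * n))
    t = np.concatenate(([0.0], d * n))   # translation as a pure quaternion
    Q0 = 0.5 * qmult(t, Q)               # dual part encodes the translation
    return Q, Q0

def dq_mult(A, B):
    """Dual quaternion product; the eps**2 = 0 rule drops the Q01*Q02 term."""
    Q1, Q01 = A
    Q2, Q02 = B
    return qmult(Q1, Q2), qmult(Q1, Q02) + qmult(Q01, Q2)

# Chain two illustrative links: an eye-in-head rotation (10 deg about the
# vertical axis, no offset) after a head-on-shoulder link (20 deg about
# the same axis with a 10 cm center-of-rotation offset along it).
eye = screw(np.deg2rad(10), [0.0, 0.0, 1.0], 0.0)
head = screw(np.deg2rad(20), [0.0, 0.0, 1.0], 0.10)
Q, Q0 = dq_mult(head, eye)   # rotational part now encodes 30 deg about z
```

Because both example links rotate about the same axis, the composed rotational part is simply the 30° rotation, while the dual part carries the offset; with arbitrary axes the same product handles the full 3-D geometry.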

*Q*_{L} is the Listing's law quaternion and *Q*_{PP} is a quaternion defining the primary position of the eyes in the orbit; these quaternions are specified following Tweed (1997). *Q*_{LP} is a quaternion that describes the gravity tilt of Listing's plane (see below). We also implemented the static vestibulo-ocular reflex (VOR), specifying that the eyes counterroll slightly for head rolls (rotation of the head toward the shoulder) and that the normal vector of Listing's plane may tilt with head tilt (rotation of the head around the ear-to-ear axis; Bockisch & Haslwanter, 2001; Haslwanter, Straumann, Hess, & Henn, 1992). The latter is called the gravity pitch of Listing's plane.

The tilt of Listing's plane was *α*_{0} = 5°, and the gain for the gravity modulation of this tilt related to the pitch angle *β*_{p} was *c*_{p} = 0.05. The gain for the counterroll related to the head-roll angle *β*_{r} was *c*_{OCR} = 0.05 (Bockisch & Haslwanter, 2001; Haslwanter et al., 1992). Listing's law is often believed to simplify the visual projections because it restricts the space of possible sensory inputs. However, this only applies to the commonly considered, much-simplified version of Listing's law that does not take vergence or the static vestibulo-ocular reflex (sVOR) into account.
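A back-of-the-envelope sketch of these static terms using the gains quoted above (the linear form and the sign convention are our simplification, not the paper's full quaternion implementation):

```python
ALPHA_0 = 5.0   # baseline tilt of Listing's plane (deg)
C_P = 0.05      # gain of the gravity (pitch) modulation of this tilt
C_OCR = 0.05    # gain of the static ocular counterroll

def listing_tilt(beta_p):
    """Tilt of Listing's plane (deg) as a function of head pitch beta_p (deg)."""
    return ALPHA_0 + C_P * beta_p

def counterroll(beta_r):
    """Static ocular counterroll (deg) for a head-roll angle beta_r (deg).
    The eyes roll opposite to the head, hence the minus sign (our convention)."""
    return -C_OCR * beta_r

ocr = counterroll(30.0)      # a 30 deg head roll yields only about -1.5 deg torsion
tilt = listing_tilt(20.0)    # 20 deg head pitch tilts the plane to about 6 deg
```

The small gains illustrate why ignoring the sVOR in the model produces only modest, but systematic, predicted reach errors.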

*y* axis; Figure 5C), only horizontal and vertical errors were predicted. The simulations of our model without rotations also highlight the nonlinearity of the errors, which the brain must compensate for in order to optimize the reach plan.

(*R* = .960, *N* = 490, *p* < .001; subject variability: slope = 0.92–1.03). For the head-unrestrained condition (Figure 7B), the regression slope was 0.958 (*R* = .981, *N* = 420, *p* < .001; subject variability: slope = 0.93–1.02). For the head-roll condition (Figure 7C), we obtained a slope of 0.972 (*R* = .917, *N* = 350, *p* < .001; subject variability: slope = 0.91–1.07). All slopes between the predicted and observed values were indistinguishable from 1 (*t* test, *p* > .05) and significantly different from zero (*t* test, *p* < .001) for all reaching conditions. This suggests that, as far as our data can show, the brain does not use an approximation of the complete 3-D eye–head–shoulder geometry but instead accounts for the actual linkage configuration of the body.
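The slope tests above compare a predicted-versus-observed regression slope against both 0 and 1. A generic NumPy-only sketch on synthetic data (not the study's dataset; the noise level and sample size are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
predicted = rng.uniform(-10.0, 10.0, 200)         # model-predicted errors (cm)
observed = predicted + rng.normal(0.0, 1.0, 200)  # observed = predicted + noise

x, y = predicted, observed
sxx = np.sum((x - x.mean()) ** 2)
slope = np.sum((x - x.mean()) * (y - y.mean())) / sxx
intercept = y.mean() - slope * x.mean()
resid = y - (intercept + slope * x)
se_slope = np.sqrt(np.sum(resid ** 2) / (len(x) - 2) / sxx)

t_vs_0 = slope / se_slope          # H0: slope = 0 (no relation at all)
t_vs_1 = (slope - 1.0) / se_slope  # H0: slope = 1 (full compensation)
```

Full compensation corresponds to a large |t_vs_0| together with a non-significant t_vs_1, the pattern reported for all three reaching conditions.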

(*p* > .10 in all comparisons) for all but one subject (Subject 2), who showed a significant influence of fixation position in the head-restrained condition (but not in the head-unrestrained or head-roll conditions), *F*(6, 45) = 4.45, *p* < .05. This was essentially due to the 45° fixation position data, where errors were significantly different from reach errors during straight-ahead fixation for this subject.

*x* axis (i.e., horizontally), but we did not observe any consistent modulation of the ellipsoid size, location, or orientation with fixation position or head-roll angle (see also Supplementary Figure 1). These analyses confirm that the brain accounts for the complete 3-D linkage geometry of the eye–head–shoulder system in computing the visuomotor transformation for reaching.

*N* = 1,000 noisy visuomotor transformations and calculated the standard deviation of the predicted reach endpoint for all three spatial dimensions (Figure 8A). The closest match with the data (Figure 8B) was found under the assumption that initial hand position was not subject to this visuomotor transformation noise, perhaps due to a later comparison between hand and target position in shoulder-centered coordinates (see Discussion section).
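This kind of noise analysis can be sketched as a Monte Carlo loop. Everything here other than the *N* = 1,000 repetition count (the additive Gaussian noise model, its magnitude, and the target position) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000  # noisy repetitions of the visuomotor transformation
target = np.array([0.30, 0.10, 0.40])  # target in shoulder coords (m), illustrative

def noisy_transform(p, sigma=0.01):
    """Stand-in for the full visuomotor transformation: here the geometry is
    the identity and noise is injected additively on the output, mimicking
    noisy internal eye/head position signals."""
    return p + rng.normal(0.0, sigma, 3)

endpoints = np.array([noisy_transform(target) for _ in range(N)])
sd_xyz = endpoints.std(axis=0)  # predicted endpoint SD per spatial dimension
```

In the full model the noise would enter the eye and head position signals inside the transformation, so the endpoint SD would vary with gaze and head posture rather than being isotropic as in this placeholder.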

(*R* = .063, *p* = .278; remaining error = 6.28 ± 2.89 cm). Similarly, the regression analysis did not show any significant relationship between the predicted and observed errors in the head-unrestrained condition (Figure 9D, dashed regression line: slope = 0.200, *R* = .074, *p* = .312; remaining error = 5.14 ± 3.68 cm). The observed reach errors were thus not significantly correlated with the errors predicted by the model when translation was ignored. Consequently, as suggested by Henriques and Crawford (2002) and Henriques et al. (2003), this demonstrates that the visuomotor transformation takes the offsets of the centers of rotation of the different body segments into account.

(*R* = .524, *N* = 1,260, *p* < .001) was indistinguishable from a slope of 1 and significantly different from zero (*t* test, *p* < .001). Thus, the internal model of the early visuomotor transformation accounts for both the rotations and the translations of the eye–head–shoulder linkage when planning a hand movement.

*complete* early visuomotor transformation. Our model takes all of these nonlinear issues into account.

Pathology | Deficit/predicted effect
---|---
Damage to vestibular system | Head orientation signals missing or incorrect
Strabismus | Inaccurate eye position efference copy
Cerebellar patients | Degraded efference copy signal for eye and/or head position
PPC damage | Position-dependent visuomotor transformations affected (hemi-field effects if unilateral damage)
Alzheimer's disease or other degenerative diseases involving PPC | Increased noise in various parts of the visuomotor transformation
Motor learning disorders | Poorly calibrated visuomotor transformation

A dual quaternion consists of two quaternions, *Q* and *Q*_{0}, of which one is multiplied by a duality operator *ɛ*, that is, *Q̂* = *Q* + *ɛQ*_{0}, where *Q* describes the rotational component and *Q*_{0} implements the translation operation. A dual quaternion can also be represented as an eight-dimensional vector, that is, the stacked components of *Q* and *Q*_{0}. A rotation of angle *θ* around the axis **n** combined with a translation of length *d* along **n** can thus be encoded in a single dual quaternion, and the product of two dual quaternions reduces to simple quaternion multiplications *XY* because *ɛ*^{2} = 0. Simple quaternion algebra for rotations has been described elsewhere (Haslwanter, 1995; Tait, 1890; see Supplementary Methods). A point in space **p**, written as the dual quaternion [1 0 0 0 0 **p**^{T}]^{T}, can then be transformed with *Q̂*^{c} = *Q*^{c} − *ɛQ*_{0}^{c}, with the quaternion conjugate *Q*^{c}. Using this formalism, we can describe the complete 3-D linkage between the eyes and the shoulder.
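A numerical sketch of this point transformation (the helper names are ours; quaternion components in (w, x, y, z) order) that applies the conjugate rule *Q̂*^{c} = *Q*^{c} − *ɛQ*_{0}^{c}:

```python
import numpy as np

def qmult(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    """Quaternion conjugate Q^c: negate the vector part."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def dq_transform_point(Q, Q0, p):
    """Transform point p by the dual quaternion Q + eps*Q0.
    The point is embedded as [1 0 0 0 | 0 p]^T and multiplied on the right
    by the dual conjugate Q^c - eps*Q0^c; with eps**2 = 0 the dual part of
    the result is the pure quaternion (0, R p + t)."""
    P0 = np.concatenate(([0.0], np.asarray(p, float)))
    dual = (qmult(qmult(Q, P0), qconj(Q))
            + qmult(Q0, qconj(Q))
            - qmult(Q, qconj(Q0)))
    return dual[1:]

# Example: 90 deg rotation about z combined with a translation of 1.0 along z.
theta = np.deg2rad(90)
Q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
t = np.array([0.0, 0.0, 0.0, 1.0])  # translation as a pure quaternion (0, t)
Q0 = 0.5 * qmult(t, Q)
p_new = dq_transform_point(Q, Q0, [1.0, 0.0, 0.0])  # x-axis point -> rotated and lifted
```

For this example the point (1, 0, 0) rotates onto the *y* axis and is shifted up by 1, so a single dual quaternion sandwich product applies rotation and translation at once.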

*Handbuch der physiologischen Optik* [Treatise on physiological optics]. New York: The Optical Society of America. (Original work published 1867)