Accurate planning of manual tracking requires a 3D visuomotor transformation of velocity signals
Guillaume Leclercq, Gunnar Blohm, Philippe Lefèvre
Journal of Vision May 2012, Vol.12, 6. doi:https://doi.org/10.1167/12.5.6
Abstract

Humans often perform visually guided arm movements in a dynamic environment. To accurately plan visually guided manual tracking movements, the brain should ideally transform the retinal velocity input into a spatially appropriate motor plan, taking the three-dimensional (3D) eye-head-shoulder geometry into account. Indeed, retinal and spatial target velocity vectors generally do not align because of different eye-head postures. Alternatively, the planning could be crude (based only on retinal information) and the movement corrected online using visual feedback. This study aims to investigate how accurate the motor plan generated by the central nervous system is. We computed predictions about the movement plan if the eye and head position are taken into account (spatial hypothesis) or not (retinal hypothesis). For the motor plan to be accurate, the brain should compensate for the head roll and resulting ocular counterroll as well as the misalignment between retinal and spatial coordinates when the eyes lie in oblique gaze positions. Predictions were tested on human subjects who manually tracked moving targets in darkness and were compared to the initial arm direction, reflecting the motor plan. Subjects tracked the target in a spatially accurate, although imperfect, manner. Therefore, the brain takes the 3D eye-head-shoulder geometry into account for the planning of visually guided manual tracking.

Introduction
Many objects in our environment are in motion, and catching, hitting, or tracking these objects is crucial in our everyday life activities. In this study, we consider visually guided manual tracking movements as an example of interaction with moving objects. To carry out such movements, the brain must transform the retinal input about hand and target velocities into a desired motor command to accurately drive the arm's muscles (Crawford, Medendorp, & Marotta, 2004; Shadmehr & Wise, 2005). But visual input is not sufficient to obtain an accurate motor plan. Indeed, a change in eye-head posture affects the retinal information about the target velocity, but it does not affect the spatial target motion (Blohm & Crawford, 2007; Blohm & Lefèvre, 2010; Crawford et al., 2004; Klier & Crawford, 1998). Theoretically, the most efficient way for the CNS to plan the tracking movement would be to recover target motion in space from retinal velocity by taking the three-dimensional (3D) eye and head postures into account. Alternatively, manual tracking could be directly driven by retinal velocity, resulting in a crude motor plan, and rely on delayed sensory feedback corrections to adjust for the mismatch between the retinal and spatial coordinates. The goal of this article was to investigate if the visuomotor velocity transformation for the planning of manual tracking movements accurately recovered spatial target motion from retinal velocity as reflected in the initiation of the arm movement. 
The visuomotor transformation is not trivial because, generally, the visual and spatial reference frames do not coincide. This happens, for example, when the head is rolled toward one shoulder or when the eyes lie in eccentric positions. Figure 1b represents a clay shooting situation in which the shooter must react rapidly to track the clay before firing. If the shooter has the head tilted toward the left shoulder, the retinal projection of the clay velocity vector will be tilted on the retina (Figure 1b, inset on the left, black arrow). If this retinal velocity vector was used directly as a motor plan, the tracking motor command would be initiated in the wrong direction (Figure 1b, orange arrow) and may be corrected later using sensory (visual and proprioceptive) feedback. The drawback of this alternative is that sensory feedback is available only after a delay typically greater than 100 ms, which prevents the brain from reacting rapidly and thus from tracking the clay fast and accurately enough. However, in many everyday life situations, a delay of about 100 ms would not prevent us from performing the task adequately. Thus, it could be that we rely only on retinal information to plan manual tracking movements and then use visual feedback to correct for the initiation error. Indeed, the initiation of a motor response is not always accurate, and feedback is then used to adjust/correct the movement (Blohm, Missal, & Lefèvre, 2005; Engel, Anderson, & Soechting, 2000; Henriques, Klier, Smith, Lowy, & Crawford, 1998; Van Pelt & Medendorp, 2008). In most behavioral situations, there should not be any major differences between a spatially accurate or a crude retinal motor plan for the manual tracking movement (because feedback would come into play to correct). But knowing which motor plan is used has a wide impact on our understanding of brain functioning. Does the brain network involved in computing the motor plan have access to an internal model of the 3D eye-head-shoulder geometry? What are the available input signals to this neural network? 
Figure 1
 
Velocity visuomotor transformation for manual tracking. (a) Schematic representation of the central nervous system areas implicated in the visuomotor velocity transformation. V1: primary visual cortex. MT: middle temporal area. MST: medial superior temporal area. PPC: posterior parietal cortex. M1: primary motor cortex. PMv: ventral premotor cortex. PMd: dorsal premotor cortex. (b) Example of manual tracking in a challenging dynamic environment: clay shooting. Before shooting, the shooter must react rapidly and track the target as accurately as possible. If the head is slightly tilted, the projection of the velocity vector onto the retina will be tilted (black arrow on the retinal projection, inset on the left). If the brain does not take the head posture into account, the tracking movement will start in the direction indicated by the orange arrow. If it is taken into account, the initiation of the movement will be correct (blue arrow).
The brain has been shown to recover the target position in space using 3D eye-head orientations in different contexts. Crawford and Guitton (1997) developed a model showing that for saccades to be spatially accurate, the 3D eye position had to be taken into account, and they made predictions for the saccadic error if only the retinal information was taken into account. In a behavioral experiment, Klier and Crawford (1998) tested these model predictions with memory-guided saccades performed in complete darkness to check if they were accurate or not. They showed that saccades were spatially accurate. Similarly, Blohm and Crawford (2007) developed a model for reaching movements, showing that the 3D eye and head positions were necessary to reconstruct the spatial location of the target. They tested their predictions with a memory-guided reaching task and showed that subjects accurately reached toward the spatial location of the target, indicating that 3D eye-head-shoulder geometry had been taken into account. However, retinal velocity is processed by dedicated brain areas (Figure 1a) that are largely segregated from regions processing purely positional information (Krauzlis, 2005; Leigh & Zee, 2006). Therefore, a separate visuomotor velocity transformation is required to generate a geometrically accurate motor plan for the initiation of manual tracking. Recently, such a visuomotor velocity transformation has been shown to exist for velocity-driven smooth pursuit eye movements (Blohm & Lefèvre, 2010), where 3D eye position had to be taken into account to initiate a spatially accurate smooth pursuit eye movement. What remains unknown is whether the reach system also transforms retinal velocity information into a spatially accurate reference frame. 
To test whether the brain computes a spatially accurate motor plan using velocity information or rather relies on feedback corrections, we designed a manual tracking task under different eye-head-shoulder configurations. The goal of the task was to initiate a manual tracking movement in the direction of the target motion. The direction of the arm movement during the initiation phase (first 100 ms of the movement) was then compared with the direction predicted if the brain used only retinal information (retinal hypothesis) and with the target-in-space direction (which is also the direction predicted if the 3D eye-head-shoulder geometry is taken into account, spatial hypothesis). Such a comparison between behavioral results and the predictions made in the spatial or retinal case has proved very useful in similar prior studies (Blohm & Lefèvre, 2010; Crawford & Guitton, 1997; Klier & Crawford, 1998; as well as Crawford et al., 2004). Results showed that the brain transformed retinal velocity into accurate motor plans, as reflected in the earliest components of the tracking movements, thus taking the 3D eye-head-shoulder geometry into account. 
Materials and methods
We will consider two main hypotheses about how the brain takes the 3D eye-head-shoulder geometry into account in the visuomotor transformation of velocity signals for the planning of manual tracking movements. The spatial hypothesis states that the brain perfectly takes the 3D eye and head posture signals into account in addition to the retinal input; it thus predicts that the arm movement starts in the target-in-space direction. The retinal hypothesis assumes that the brain uses the retinal input directly as a motor plan and has no access to internal eye and head position signals. In this case, the predicted direction for the arm movement depends on the 3D eye and head postures (which determine the retinal velocity), in addition to the visual input. Other situations may also be considered in which the eye and/or head position signals are biased and therefore affect the transformation of the retinal velocity signal. In the following, the difference between the retinal and spatial hypotheses will be used extensively to measure how much subjects take the extraretinal signals into account to plan a manual tracking movement. In our experiments (described later), the spatial target velocity is known to the experimenter, but the retinal velocity is not and therefore needs to be estimated. To do so, we developed, in the appendix, the kinematic model linking spatial and retinal velocity, which depends on the 3D eye and head positions, and used it to estimate the target retinal velocity and its direction. 
Experimental setup
Seven right-handed healthy human subjects (aged 23–43 years, 4 naive) participated in the experiments after giving informed consent. All subjects had normal or corrected-to-normal vision and were without any known sensory or motor anomalies. All procedures were conducted with approval of the Université catholique de Louvain Ethics Committee. 
The experiments were performed in complete darkness. Subjects were seated in front of a 1-m distant translucent flat screen that covered about ±40° of their visual field in both vertical and horizontal directions. Two targets—a green and a red 0.2° LASER spot—were back-projected onto the screen using two pairs of mirror galvanometers. A dedicated real-time computer running LabView RT (National Instruments, Austin, TX) controlled illumination and position of each target with a refresh rate of 1,000 Hz. This computer also controlled the illumination of a green LED attached to the tip of the subjects' right index finger, above the nail. The LED was directed toward the subjects to allow them to see the light when the LED was switched on. 
Each subject's left eye was patched during the experiment to avoid switching the dominant eye when aligning their fingertip with the gaze directed to different hemifields (Khan & Crawford, 2003). Patching one eye also prevented subjects from using binocular information that could inform them about 3D eye position. Each subject's right eye was approximately located in front of the screen center. Right eye movements were recorded (200 Hz) using a head-mounted 3D video eye-tracking device (Chronos Vision GmbH, Berlin, Germany). Arm- and head-in-space positions were measured (200 Hz) using a Codamotion active infrared marker tracking device (Charnwood Dynamics Ltd., Leicestershire, UK). We used a bite-bar attached to the eye tracker to ensure that the helmet did not move on the head during a session. 
Paradigms
We designed a manual tracking task to assess if subjects took the 3D eye-head-shoulder geometry into account in the visuomotor transformation of velocity. More specifically, we tested separately the effect of head-roll angle and gaze direction on the direction of manual tracking initiation. There were two paradigms: the head-roll paradigm and the oblique gaze paradigm. 
During a head-roll trial (Figure 2a), subjects first rolled their head in the direction indicated by the orientation of a bar displayed on the screen (i.e., toward the left shoulder, toward the right shoulder, or upright). The head-roll indication varied randomly across trials. After orienting their gaze toward the fixation point (FP; red dot), always located at the screen center, subjects were required to align the finger LED with the tracking target (TT; green dot), also located at the screen center. Subjects were instructed to align the fingertip LED just below the TT such that the TT was still visible; the TT and the FP were slightly offset from each other such that subjects were able to see them distinctly. Subjects maintained gaze fixation until the end of the trial. The finger LED was then switched off to prevent the use of any visual feedback of the arm. At this time, the TT started to move and subjects were asked to track it with their arm. The TT moved for 1,200 ms at constant velocity along a straight line in a given direction. After 300 ms, the target was extinguished for 450 ms (see Figure 2a, lower panel), so that the initial part of the hand movement generally occurred in the absence of the target. Trials ended with the TT remaining static for 500 ms at the location reached at the end of its movement. This final part is not analyzed further because we focus only on the initial part of the movement. 
Figure 2
 
Experimental paradigms. All the experiments were carried out in complete darkness. (a) Head-roll paradigm. Subjects first rolled their head either toward the left shoulder or right shoulder or kept it upright, as indicated by the gray bar, and maintained this posture (Part 1). Then they fixated the fixation point (red disk), located at screen center, and pointed their arm toward the tracking target, denoted TT (green disk), located at the same location (Parts 2 and 3). In Part 4, the TT started to move at a constant velocity (10°, 20°, or 30°/s) in one of six directions (dotted green lines). Subjects were required to track the TT with their right arm while maintaining fixation. When moving, the TT was first visible for 300 ms (Part 4a), then became invisible for 450 ms (Part 4b), before reappearing for another 450 ms. (b) Oblique gaze paradigm. Subjects first pointed to the TT, initially located at screen center (Part 1), before orienting their gaze on one (red disk) of the five possible fixation points (red dotted circles), located on the two upper diagonals, at 0°, 15°, or 30° eccentricity (Part 2). Then they tracked the TT with their arm while maintaining gaze fixation, as for Part 4 of the head-roll paradigm.
Both TT velocity and direction varied randomly across trials. The TT velocity was 10, 20, or 30°/s. The TT direction was to the left (170°, 180°, or 190°) or to the right (−10°, 0°, or 10°; see Figure 2). The small angular difference (10°) between the directions within each group (left, right) prevented subjects from making stereotyped movements after some repetitions. 
In the oblique gaze paradigm (Figure 2b), subjects started the trial by aligning the finger LED with the TT, initially located at the screen center. Then, an FP appeared on one of the two upper diagonals at an eccentricity of 0°, 15°, or 30°. The FP location varied randomly across trials. Subjects were required to orient their gaze to the FP within a 500-ms period and to maintain fixation on the FP until the end of the trial. The remaining part of the trial was similar to the head-roll paradigm: subjects tracked the moving target with their finger. TT velocity was 20°/s for all trials, whereas target direction was randomly chosen from the same angles as in the head-roll paradigm. In the oblique gaze paradigm, a chin rest was used to maintain the head in an upright position. 
Experimental sessions started with a gaze calibration block in which subjects fixated different known positions without moving the head. A so-called aligning calibration block followed, in which subjects had to look at and align their arm toward different known positions. Three or four blocks of 30 trials were then carried out before repeating the calibration procedure and so on. One session lasted 50 minutes maximum. In the oblique gaze paradigm, there were five different gaze positions and six different TT directions, leading to 30 conditions. Each subject performed 300 trials chosen randomly among the 30 conditions. In the head-roll paradigm, there were six different directions, three different indications of head roll and three different velocities, resulting in 54 conditions. Each subject performed 540 trials chosen randomly among the 54 conditions. 
Data analysis
Collected data (eye, target, head, and arm position) were stored on a hard disk for further offline analysis using MATLAB (The MathWorks, Inc., Natick, MA). Position signals were low-pass filtered using a zero-phase digital filter (autoregressive forward-backward filter, cutoff frequency: 50 Hz). Velocity signals were estimated from position signals using a central difference algorithm. Eye torsion was extracted using the IRIS software (Chronos Vision GmbH, Berlin, Germany): the operator defined several segments on the iris, and the software used a cross-correlation algorithm to estimate ocular torsion (Schreiber & Haslwanter, 2004). The IRIS extraction software provided 3D eye position in Fick coordinates. The Chronos eye tracker had a tracking resolution better than 0.05° along all three axes. Calibration was accurate to 0.5° in position. Torsion did not need calibration; however, it required a reference position, which was selected at the start of each block when the head was straight and the eyes looked straight ahead. 
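As a rough illustration of this preprocessing step (the original analysis was carried out in MATLAB), a minimal Python sketch is given below; the Butterworth filter order and the simple edge handling are assumptions, since the text only specifies the cutoff frequency and the forward-backward filtering.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 200.0      # sampling rate of the eye and motion trackers (Hz)
CUTOFF = 50.0   # low-pass cutoff frequency stated in the text (Hz)

def lowpass_zero_phase(position, order=2):
    """Zero-phase (forward-backward) low-pass filtering of a 1D position trace.
    The filter order is an assumption; only the cutoff is given in the text."""
    b, a = butter(order, CUTOFF / (FS / 2.0))
    return filtfilt(b, a, position)

def central_difference_velocity(position, fs=FS):
    """Estimate velocity from position with a central-difference scheme."""
    pos = np.asarray(position, dtype=float)
    vel = np.empty_like(pos)
    vel[1:-1] = (pos[2:] - pos[:-2]) * (fs / 2.0)
    vel[0], vel[-1] = vel[1], vel[-2]  # simple edge handling (assumption)
    return vel
```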
Before further processing, 3D Fick eye position was converted into an angular vector representing the 3D eye position (Haslwanter, 1995; Haustein, 1989), specifying a unique angle and rotation axis. Note, however, that in the figures, 3D eye position is represented in Fick coordinates in order to visualize the amount of misalignment between retinal and spatial axes (formerly called “false torsion”) in the oblique gaze experiment. 
To compute head position, three infrared markers were placed on the eye tracker helmet. At each time step, 3D head angular position was defined by the axis and angle of the rotation of the head compared with a reference head position. This reference position was computed at each gaze calibration (when the head was upright). The arrangement of the Codamotion markers on the eye tracker helmet resulted in an estimated resolution (and accuracy) of 0.1° for head movements. 
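For illustration, one way to recover the head's rotation axis and angle from the three helmet markers is a least-squares rigid alignment between the current and reference marker configurations; the sketch below (not the authors' code) uses SciPy and assumes the markers are supplied as 3 × 3 arrays, one row per marker.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def head_rotation(markers, markers_ref):
    """Axis-angle rotation of the head relative to the reference (upright) posture.
    markers, markers_ref: (3, 3) arrays, one row per helmet marker."""
    cur = markers - markers.mean(axis=0)          # remove the translation component
    ref = markers_ref - markers_ref.mean(axis=0)
    rot, _ = Rotation.align_vectors(cur, ref)     # best-fit rotation mapping ref onto cur
    rotvec = rot.as_rotvec()                      # axis * angle, in radians
    angle = np.degrees(np.linalg.norm(rotvec))
    axis = rotvec / (np.linalg.norm(rotvec) + 1e-12)
    return axis, angle
```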
The location of the eyeball with respect to the head markers was measured before each session. To this end, we temporarily placed an infrared marker on the eyelid while the eye was closed and recorded the marker's position for 10 seconds. This constant, head-fixed relationship allowed us to reconstruct the position of the eyeball in space at each time step. The position of the tip of the pointing finger was measured by attaching an infrared marker to the right index finger. As the manual tracking task consists of aligning the fingertip with the line joining the eye to the TT (which is the line of sight when the TT is also foveated), we computed the eye-fingertip vector at each time step. The tracking position (2D, because the fingertip position is invariant to the rotation of the arm around its own axis) was defined by the rotation of the eye-fingertip vector (at the current time step) compared with a reference eye-fingertip vector. The reference vector was computed for each trial as the eye-fingertip vector at the time of TT movement onset. 
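The 2D tracking position could then be expressed, for example, as the horizontal and vertical angles of the current eye-fingertip vector relative to the reference vector taken at TT movement onset. The sketch below is only meant to make this definition concrete; the axis convention (x rightward, y upward, z toward the screen) is an assumption.

```python
import numpy as np

def tracking_position(eye_to_finger, eye_to_finger_ref):
    """Horizontal and vertical tracking angles (deg) of the eye-fingertip vector
    relative to the reference eye-fingertip vector at TT movement onset."""
    def azimuth(u):
        return np.degrees(np.arctan2(u[0], u[2]))
    def elevation(u):
        return np.degrees(np.arctan2(u[1], np.hypot(u[0], u[2])))
    v = np.asarray(eye_to_finger, dtype=float)
    r = np.asarray(eye_to_finger_ref, dtype=float)
    return azimuth(v) - azimuth(r), elevation(v) - elevation(r)
```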
Onset of the arm movement was automatically detected using the following procedure: the algorithm identified groups of consecutive time steps, lasting more than 200 ms and occurring after TT onset, in which the arm vectorial velocity (the norm of the velocity vector) exceeded a threshold of 3°/s at every sample. The algorithm then selected the first such group (which corresponded to the initial movement of the arm) and fitted a linear regression between the vectorial velocity and time. The onset was defined as the time when the fitted regression crossed zero velocity (Badler & Heinen, 2006; Carl & Gellman, 1987; Krauzlis & Miles, 1996). All trials were visually inspected. For each trial, the arm latency was computed. 
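A minimal sketch of this onset-detection procedure is given below, assuming a 200-Hz arm signal starting at TT onset; the exact samples entering the regression are not specified in the text, so here the whole supra-threshold run is used.

```python
import numpy as np

FS = 200.0           # sampling rate (Hz)
VEL_THRESHOLD = 3.0  # deg/s
MIN_DURATION = 0.2   # 200 ms

def detect_arm_onset(t, arm_speed):
    """Arm movement onset: first run longer than 200 ms in which the vectorial
    velocity exceeds 3 deg/s; a line fitted to speed vs. time over that run is
    extrapolated back to zero velocity.
    t: time (s) from TT onset; arm_speed: norm of the arm velocity (deg/s)."""
    above = arm_speed > VEL_THRESHOLD
    run_start = None
    for i, flag in enumerate(np.append(above, False)):  # trailing False closes a final run
        if flag and run_start is None:
            run_start = i
        elif not flag and run_start is not None:
            if (i - run_start) / FS > MIN_DURATION:
                run = slice(run_start, i)
                slope, intercept = np.polyfit(t[run], arm_speed[run], 1)
                return -intercept / slope   # time at which the fitted line crosses zero
            run_start = None
    return None  # no movement detected
```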
Some arm movements were clearly anticipatory, with arm latencies of about 0 ms relative to TT onset. Therefore, we set a minimum arm latency to ensure that the movements entering the analysis were triggered by the visual input. This minimum arm latency was set to 150 ms for both the oblique gaze and head-roll conditions. Indeed, the percentage of trials that started in the correct direction (left vs. right) was greater than 95% above this 150-ms threshold, whereas subjects behaved at chance for anticipatory trials with latencies of about 0 ms. 
For each trial, the initial arm movement direction was estimated over the first 100 ms after arm movement onset. This short time window ensured that proprioceptive feedback (there was no visual feedback of the arm) could not correct the hand movement during the analyzed interval (Desmurget & Grafton, 2000). The initial arm movement direction was determined from the ratio between the mean vertical and the mean horizontal arm acceleration over this 100-ms period. 
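In code form, this direction estimate reduces to a single arctangent of the mean accelerations; the sketch below assumes the acceleration traces are sampled at 200 Hz and indexed from movement onset.

```python
import numpy as np

def initial_direction(acc_h, acc_v, onset_idx, fs=200.0):
    """Initial arm movement direction (deg) from the mean horizontal and vertical
    arm accelerations over the first 100 ms after movement onset."""
    n = int(0.1 * fs)                       # number of samples in the 100-ms window
    win = slice(onset_idx, onset_idx + n)
    return np.degrees(np.arctan2(np.mean(acc_v[win]), np.mean(acc_h[win])))
```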
In the head-roll condition, we carried out our analysis with 3,221 trials out of 4,230 trials (76% were valid trials). We first removed 580 trials (14%) because of eye blinks or eye movements in the 300-ms period after TT onset (the period when subjects received the visual information for movement planning) or problems with the acquisition of infrared markers (markers not seen). Second, we removed 429 anticipatory trials (10%; i.e., those trials whose latency was lower than 150 ms). These trials were discarded to avoid any potential confound of the results with the anticipation mechanism. 
In the oblique gaze condition, we carried out our analysis with 1,445 trials out of 2,610 trials (55% were valid trials). We first removed 512 trials (20%) because of eye blinks or eye movements in the 300-ms period after TT onset or problems with the acquisition of infrared markers. Second, we removed 653 anticipatory trials (25%), that is, those trials whose latency was lower than 150 ms. 
To quantitatively compare the observed initial arm directions to the directions predicted by the two hypotheses, and therefore see if the brain takes the 3D eye-head-shoulder geometry into account, we defined a transformation index:

\[
\text{transformation} = \frac{\text{initial arm direction} - \text{retinal direction}}{\text{spatial direction} - \text{retinal direction}} = \frac{\text{observed transformation}}{\text{predicted 3D transformation}} \tag{1}
\]

where retinal direction is the (estimated) TT retinal direction (in degrees), initial arm direction is the measured initial arm movement direction (in degrees), and spatial direction is the TT direction in space (in degrees). Observed transformation is the difference between the initial arm direction and the retinal direction, whereas predicted 3D transformation is the difference we should observe if the brain perfectly compensated for the 3D eye-head-shoulder geometry. 
If the initial arm direction is equal to TT retinal direction, then transformation = 0. If initial arm direction is the TT spatial direction, then transformation = 1, which means full compensation for the 3D eye-head-shoulder geometry. 
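A per-trial computation of this index is straightforward; the sketch below simply restates Equation 1 (all directions in degrees). As noted below, the index becomes unreliable when the predicted 3D transformation approaches zero.

```python
def transformation_index(arm_dir, retinal_dir, spatial_dir):
    """Transformation index of Equation 1: 0 = purely retinal motor plan,
    1 = full compensation for the 3D eye-head-shoulder geometry."""
    observed = arm_dir - retinal_dir        # observed transformation (deg)
    predicted = spatial_dir - retinal_dir   # predicted 3D transformation (deg)
    return observed / predicted             # unstable when predicted is near zero
```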
The predicted 3D transformation depends on the amount of eye and/or gaze rotations the brain has to compensate for. For each trial, we computed the predicted 3D transformation (and therefore the retinal direction) using the measured eye and head positions (mean 3D eye and 3D head angular positions measured over the first 100 ms after TT movement onset). We also computed the observed transformation, using the measured initial arm movement direction in addition to the eye and head position measurements (necessary to compute the retinal direction). 
In practice, we measured the average compensation by fitting a linear regression between the observed transformation (dependent variable) and the predicted 3D transformation, because the transformation index defined for each trial becomes very sensitive to noise when the predicted 3D transformation is close to zero:

\[
\text{observed transformation} = a_0 + a_1 \cdot \text{predicted 3D transformation} \tag{2}
\]

where a0 is the independent term. The slope of the regression, a1, represents the average amount of transformation (in percentage). 
We normalized the initial arm direction measures relative to a baseline situation in which the predicted 3D transformation equals zero, in order to remove directional biases that are unrelated to the task. To do so, we computed the offset in direction, defined as the mean observed transformation when the head was upright and the gaze straight ahead (a situation in which the predicted 3D transformation = 0). We then subtracted this offset from each observed transformation measure. In the following sections, we will refer to this modified observed transformation as the standardized observed transformation.
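The slope estimate and the standardization step could be combined as in the following sketch (numpy only, not the authors' code); the baseline mask marking the head-upright, gaze-straight-ahead trials is an assumed input.

```python
import numpy as np

def transformation_slope(observed, predicted, baseline_mask):
    """Average transformation: slope a1 of Equation 2, fitted after subtracting the
    baseline offset (mean observed transformation with head upright, gaze ahead).
    observed, predicted: per-trial arrays (deg); baseline_mask: boolean array."""
    offset = observed[baseline_mask].mean()           # task-unrelated directional bias
    standardized = observed - offset                  # standardized observed transformation
    a1, a0 = np.polyfit(predicted, standardized, 1)   # slope and intercept
    return a1, a0
```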
Results
Model predictions
Which motor plans do the two hypotheses predict for various eye-head-shoulder configurations? Let us consider that the TT moves horizontally to the right in the shoulder reference frame. Figure 3 represents the directions predicted by the spatial hypothesis (in blue) and the retinal hypothesis (in orange) in three situations: (a) head upright and gaze in its primary position, (b) head rolled toward the left shoulder and gaze straight ahead, and (c) head upright and gaze in an eccentric oblique position. The left part of each panel shows the projection of the TT velocity vector onto the retina. 
Figure 3
 
Model predictions for visually guided manual tracking. A target (green disk, denoted TT, for tracking target) moved at constant velocity to the right (from the subject's perspective) on a flat screen. Panels (a)-(c) describe the same manual tracking task but for different static geometrical eye-head-shoulder configurations. The projection of the TT velocity vector onto the retina (black arrow) is represented on the left side of each panel. Black solid lines are the projections of the horizontal and vertical axes. Predictions for the initial direction of the arm from two visuomotor hypotheses (not exhaustive) are represented on each panel: the spatial hypothesis (blue arrow) and the retinal hypothesis (orange arrow). (a) Primary position. The head was upright, and the fixation point (red disk) was located at the screen center, such that the gaze was in the primary position. (b) Head roll. The head was rolled toward the left shoulder, and the gaze was still directed onto the screen center. (c) Oblique gaze. The head was upright, and the gaze was directed at an eccentric position on the upper left diagonal (red disk on line). Panels (d) and (e) describe the predicted tracking direction errors as a function of head-roll angle and oblique gaze angle for the represented hypotheses. For the spatial hypothesis, there is no error (blue line). For the retinal hypothesis, we represented the average retinal error (orange line) across subjects and the 95% confidence interval (orange area). Gray lines and gray area show the retinal error for each subject and the related 95% confidence interval (see text). The black lines in panels (d) and (e) show the retinal error we should observe if (d) the OCR gain was equal to 0 or if (e) the primary position of the eye is orthogonal to the screen.
In configuration (a), the TT direction is the same in the shoulder and eye reference frames, so both hypotheses predict the correct direction, to the right. In the head-roll case (b), the velocity vector is tilted on the retina by an angle equal to the opposite of the head-roll angle (diminished by the resulting static ocular counterroll [OCR]). The spatial hypothesis takes the head and eye rotations into account and, by definition, predicts the correct movement direction. On the contrary, the retinal hypothesis uses only the retinal information; thus, the predicted TT direction is tilted like the retinal vector. In the oblique gaze case (c), the projection of the velocity vector is slightly tilted onto the retina. Here, the eyes obey Listing's law, which restricts the rotation axis to lie in a plane with no torsional component. As a consequence of the 3D projection geometry onto the spherical retina, the projection of the spatially horizontal vector is slightly tilted counterclockwise onto the retina. Therefore, the predictions from the retinal and spatial hypotheses differ, and this difference grows with the oblique gaze angle. For more details, see, for instance, figure 5 of Blohm and Lefèvre (2010). 
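For the head-roll case, the difference between the two predictions can be summarized by a simple 2D approximation (the full 3D model is given in the appendix): the retinal image rotates opposite to the eye-in-space roll, which is the head roll reduced by the static OCR. The sketch below is only illustrative; the 0.15 OCR gain and the sign convention (counterclockwise positive, from the subject's viewpoint) are assumptions.

```python
def predicted_directions(spatial_dir, head_roll, ocr_gain=0.15):
    """Simplified 2D predictions (deg) for the head-roll case.
    spatial_dir: target direction on the screen; head_roll: head-roll angle."""
    eye_in_space_roll = head_roll * (1.0 - ocr_gain)   # head roll reduced by the OCR
    retinal_plan = spatial_dir - eye_in_space_roll     # retinal hypothesis: tilted plan
    spatial_plan = spatial_dir                         # spatial hypothesis: correct plan
    return retinal_plan, spatial_plan
```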
Figure 3 shows the error in the initial tracking direction predicted by the two hypotheses as a function of (d) the head-roll angle and (e) the gaze eccentricity. If the brain behaved according to the spatial hypothesis, then we should expect no error, whatever the head-roll angle or gaze position, as indicated by the blue line with zero slope. However, if the brain takes only the retinal information into account, then we should expect performance errors like those shown by the orange curves (retinal hypothesis across subjects) for the measured arm directions. As each subject has his or her own characteristic parameters (OCR gain, primary position), the predicted error curve for the retinal hypothesis is different for each subject. Therefore, we first estimated these parameters and the related 95% confidence intervals for each subject. We then used these estimated parameters to compute the error curve for the retinal hypothesis as well as a 95% confidence interval around the curve. Figure 3d and 3e show these curves for each subject (seven gray curves in each panel) as well as the 95% confidence intervals, which were pooled into a single gray area for the clarity of the two panels. Black lines indicate the hypothetical retinal error if (d) the OCR gain was 0 or if (e) the primary position was aligned with the torsional axis. Other predictions could also be represented for other hypotheses (e.g., biased eye and/or head signals); here we simply want to illustrate the differences in the predicted directions according to different simple hypotheses. 
By measuring the initial tracking direction of the arm during our experiments and comparing it to the retinal and spatial directions, we may discriminate between those hypotheses. In the following sections, we first analyzed how subjects behaved in the head-roll paradigm. Next, we investigated how subjects performed the task in the oblique gaze paradigm. 
Head-roll experiment: typical trial
Figure 4 shows a typical valid trial in the head-roll condition. In the following, we will refer to the eye-finger vector trajectory as the pointing or finger trajectory (because the eyeball does not move in space during the manual tracking part). Figure 4a shows that the finger trajectory (black line) follows the TT trajectory (blue line), especially in the initial part of the arm movement (first 100 ms after arm movement onset). As indicated by the tilted dashed black line, the subject was first instructed to tilt the head toward the left shoulder and to maintain fixation at the FP (red dot). Despite the head being tilted in space, the finger trajectory followed the TT trajectory in the initial part of the movement. Therefore, in this particular trial, the subject seemed to compensate for the head rotation. If not, the subject would have started the arm movement in the direction predicted by the retinal information only (orange line) and would probably have corrected the arm trajectory online in the closed-loop part of the movement, when sensory feedback would have been available. But that was not what we observed for this particular trial. Figure 4b represents eye, head, and finger positions over time. The subject first rolled the head by about 45° toward the left shoulder and maintained this posture. TT onset is represented by the vertical black line. TT position and velocity are represented with dotted lines. The arm started to move approximately 250 ms after TT onset, just before TT occlusion onset. Across all valid trials and all subjects, mean arm latency was 232 ms (SD = 57 ms). Mean arm latencies ranged from 191 to 273 ms across subjects. Looking at the 3D eye-in-head position, we observed the static OCR (magenta trace). We also noticed changes in the horizontal eye-in-head position (blue trace). These were mainly due to the translation of the eyeball in space (resulting from the head rotation), as well as to a compensation for the horizontal (yaw) component of the head rotation. Because subjects received only a head-roll indication, they were free to roll their head to an eccentricity corresponding to their individual comfort level (on average around 30°), and some subjects sometimes added a vertical (pitch) or horizontal (yaw) component when rotating their head. However, this did not affect our results because pitch and yaw angles were generally small (mean absolute yaw angle: 4.5° [SD = 4.2°]; mean absolute pitch angle: 4.4° [SD = 3°]), and all this information was taken into account when estimating the retinal velocity using the model. 
Figure 4
 
Typical trial in the head-roll paradigm. Panel (a) represents trajectories in the screen coordinates after the TT movement onset: TT trajectory (blue line) and projection of the eye-finger vector trajectory onto the screen (black line). Gaze fixation is also represented (red dot) as well as the head-roll indication (black dashed line) for this particular trial. Orange line shows the direction predicted using retinal information only. Panel (b) shows head-in-space 3D angular position and eye-in-head 3D Fick position, as well as the horizontal and vertical components of tracking position and velocity (thick traces for the tracking, dotted lines for the target) as a function of time during the trial. TT onset is indicated by the black vertical line, whereas TT occlusion is represented by the gray area.
Mean trajectories
TT direction was either to the subject's left or right, with an angular deviation of −10°, 0°, or 10° from the purely leftward or rightward direction. The purpose of these small angular deviations from horizontal was to prevent subjects from making automatic responses, either to the left or to the right. For each raw direction (left or right), the initial arm movement direction (over the first 100 ms after arm movement onset) depended on TT direction (one-way analysis of variance, initial arm direction vs. TT direction, main effect of TT direction, right directions: F(2, 1599) = 118.25, p < 0.001; left directions: F(2, 1616) = 133, p < 0.001). The difference in initial arm movement direction was significant for each pair of TT directions (post hoc Tukey HSD test, p < 0.001). This shows that subjects did not simply make stereotyped arm movements toward the left or the right without taking the vertical component of the TT trajectory into account. 
Mean pointing trajectories show that subjects compensated for the head roll. Figure 5a shows mean (±SEM) pointing trajectories (black lines, during the first 200 ms after arm movement onset) for each of the six TT directions when the head-roll indication was toward the left shoulder. It is clear that the trajectories followed the directions predicted by the spatial hypothesis (blue lines) and not the directions predicted by the retinal information only (orange lines). To quantify this effect, we defined a transformation index (see the Materials and methods section) that measures how much subjects compensated for the eye-head-shoulder geometry. If the transformation index is zero, subjects do not compensate for the head-roll rotation and instead use retinal information only. If subjects fully compensate, then the observed transformation equals the predicted 3D transformation, and the transformation index equals 1. In the head-roll paradigm, the predicted 3D transformation is equal to the amount of head-roll rotation if we assume no OCR and no rotations around the other axes (pitch, yaw). In practice, however, such assumptions are not met. Therefore, we fully took all effects into account when computing the predicted 3D transformation, including pitch/roll and, importantly, OCR. 
Figure 5
 
Quantitative results for the head-roll paradigm. (a) Mean arm trajectories for the first 200 ms following movement onset for trials with the head rolled toward the left shoulder, for each TT direction. In blue, the directions predicted by the spatial hypothesis (i.e., the TT directions onto the screen). In orange, the trajectories predicted by the retinal hypothesis. In black, the mean initial trajectories of the arm across subjects and, in gray, the standard error of the mean. Model predictions and mean observed trajectories are represented with dotted lines for a TT direction of 10° (right part) or 190° (left part). Similarly, solid (dashed) lines are used for the 0° (−10°) and 180° (170°) TT directions. The black curve represents the direction (in degrees). (b) Standardized observed transformation as a function of the predicted 3D transformation (5° bins). Predictions for the spatial (blue) and retinal (orange) hypotheses are represented. A linear regression line (dashed black line) was fitted to the data collected across all subjects. The slope of the regression line represents the amount of transformation. (c) Transformation for each subject and for all subjects pooled together. Linear regression lines were fitted as in panel (b), for each subject. The transformation indices (corresponding slopes) and corresponding 95% confidence intervals are represented in black. (d) Standardized observed OCR transformation as a function of predicted 3D OCR transformation. (e) OCR transformation (slope of the regression lines in [d]) for each subject, similar to (c).
Transformation
For each trial, the observed transformation was computed using the measured initial tracking, 3D eye-in-head and 3D head-on-shoulder directions. The predicted 3D transformation was computed as well. Figure 5b represents a summary graph of the observed transformation as a function of the predicted 3D transformation. Data across all subjects are represented (black lines). The observed transformation is represented as the mean (±SEM) observed transformation for 5° bins of the predicted 3D transformation. If the brain takes only the retinal information into account, then the observed transformation should be zero, whatever the predicted 3D transformation (retinal hypothesis, orange line, slope = 0). However, if the brain takes the 3D eye-head-shoulder geometry into account, then the observed transformation should be equal to the predicted 3D transformation (spatial hypothesis, blue line, slope = 1). A simple linear regression was fitted to the data (R = 0.91, slope = 1.07 [±0.02] for the 95% confidence interval). It shows that subjects incorporated knowledge about the eye-head-shoulder geometry for the visuomotor transformation. The regression slope, the indicator of the transformation (see the Materials and Methods), was significantly higher than 1 (one-sided t test: t[3219] = 8.17, p < 0.001). 
The same analysis was carried out for each subject separately. Correlation coefficients were significant for all 7 subjects and ranged between 0.82 and 0.95. Figure 5c shows the regression slopes (and the associated 95% confidence intervals) for each subject (black). Slopes corresponding to the retinal (orange) and spatial (blue) hypotheses are also represented. All subjects had a slope either not different from 1 (S3, S4, S6) or slightly but significantly larger than 1 (S1, S2, S5, and S7; one-sided t test run for each subject separately). Therefore, all subjects fully compensated for the head-roll rotation. In the next section, we will further analyze whether OCR was also taken into account. 
Multiple regression on OCR and head roll
There are two effects when subjects tilt their head. First, there is the head-roll angle itself. Second, there is the static OCR, which tilts the eye in the direction opposite to the head roll. Typical values between 5% and 20% of the head-roll angle have been reported for static OCR gains (Bockisch & Haslwanter, 2001; Collewijn, Van der Steen, Ferman, & Jansen, 1985; Goltz et al., 2009; Klier & Crawford, 1998). If a subject tilted the head 30° toward the right shoulder and the resulting OCR was −3°, then the predicted 3D transformation should be equal to 30° − 3° = 27°. If the subject compensated for only the head-roll angle and not the OCR angle, then he or she would compensate for 30° and therefore overcompensate by an amount of (30 − 27)/27 = 11% (the transformation index would equal 1.11). This example shows that one possible reason for the overcompensation effect we observed in some subjects (see Figure 5c) could be that OCR is not taken into account by the subjects. 
To test whether the brain uses knowledge about the OCR angle in the 3D visuomotor transformation, we ran a multiple linear regression analysis with the observed transformation (dependent variable) as a function of the head-roll angle and the OCR angle (independent variables). For each subject, the multiple correlation coefficients (values between 0.80 and 0.95) were highly significant (p < 0.001; Table 1). Corresponding partial correlations for each independent variable (OCR angle, head-roll angle) were also computed (Table 1) and were all found to be highly significant as well, although the partial correlation coefficients for OCR were quite low (between 0.19 and 0.29). Simple linear regressions between the observed transformation and the head-roll (resp. OCR) angle alone were also fitted. Regression coefficients were all highly significant (Table 1). For each subject, the multiple correlation coefficient, Rmult, is slightly greater than the simple correlation coefficient with head-roll angle as an independent variable, Rsimple_HR. This difference was significant (Dowdy, Wearden, & Chilko, 2004) for each subject. Because the partial correlation coefficient for OCR is also significant, this means that adding the OCR angle as a variable to the regression equation improves the quality of the regression (more variance is explained). 
Table 1
 
Correlation coefficients and regression coefficients for each subject (S1 to S7) and for all subjects' data pooled together (All). Notes: Rsimple_HR (resp. Rsimple_OCR): correlation coefficient of simple linear regression between observed correction (dep. variable) as a function of head-roll angle (resp. OCR angle). Rmult: correlation coefficient of the multiple regression between observed correction and head roll and OCR angles (see Equation 3). Part. RHR (resp. Part. ROCR) is the partial regression coefficient of the head roll (resp. OCR) angle in the multiple regression analysis. Multiple regression equation is: corrobs = c0 + cHR*head roll + cOCR*OCR. The two next columns indicate the regression coefficients corresponding to each variable, as well as a 95% confidence interval. **p < 0.001. *p < 0.05 for the one-sided t test to test if the regression coefficient is significantly greater or lower than 1. The last two columns indicate the correlation coefficient ROCRgain of the simple linear regression (Equation 4) between OCR and head-roll angles, as well as OCRgain, the slope of the regression.
Subject Rsimple_HR Rsimple_OCR Rmult Part. RHR Part. ROCR cHR [95% CI] cOCR [95% CI] ROCRgain OCRgain
S1 0.940** 0.616** 0.945** 0.910** 0.284** 1.05* [1.01 1.10] 0.87 [0.58 1.15] 0.73 0.12
S2 0.929** 0.619** 0.934** 0.890** 0.255** 1.10* [1.05 1.14] 0.97 [0.67 1.26] 0.74 0.11
S3 0.848** 0.424** 0.856** 0.822** 0.223** 1.05 [0.98 1.12] 1.47 [0.82 2.12] 0.61 0.07
S4 0.801** 0.409** 0.809** 0.765** 0.195** 1.14* [1.05 1.23] 0.88 [0.48 1.27] 0.72 0.14
S5 0.907** 0.696** 0.911** 0.819** 0.198** 1.07* [1.00 1.14] 1.05 [0.57 1.53] 0.87 0.12
S6 0.871** 0.586** 0.880** 0.811** 0.252** 1.02 [0.95 1.08] 0.90 [0.59 1.21] 0.76 0.16
S7 0.932** 0.865** 0.935** 0.708** 0.20** 1.26* [1.13 1.38] 1.00 [0.53 1.48] 0.95 0.25
All 0.891** 0.608** 0.896** 0.830** 0.222** 1.09* [1.07 1.12] 0.87* [0.73 1.00] 0.77 0.15
Table 1 also shows the multiple regression coefficients corresponding to each variable of the multiple regression equation:

\[
\text{corr}_{\text{obs}} = c_0 + c_{HR} \cdot \text{head roll} + c_{OCR} \cdot \text{OCR} \tag{3}
\]

where corrobs is the observed transformation, cHR is the regression coefficient for the head-roll variable, and cOCR is the regression coefficient for the OCR variable. Values for cHR vary between 1.02 and 1.26 among subjects (Table 1) and are all significantly different from 0, with 95% confidence interval widths smaller than 0.25. Values for cOCR vary between 0.87 and 1.47 among subjects (Table 1), with 95% confidence interval widths smaller than 1.30; they are all significantly different from 0. There is no consistent trend for the regression coefficients across subjects. However, for 5 of 7 subjects (see Table 1), the regression coefficient for the head-roll variable, cHR, was significantly greater than 1. 
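The coefficients of Equation 3 can be obtained with an ordinary least-squares fit; the sketch below (numpy only, not the authors' code) shows one way to do it.

```python
import numpy as np

def multiple_regression(obs_transformation, head_roll, ocr):
    """Least-squares fit of Equation 3: corr_obs = c0 + cHR*head_roll + cOCR*OCR.
    All inputs are per-trial arrays in degrees."""
    hr = np.asarray(head_roll, dtype=float)
    oc = np.asarray(ocr, dtype=float)
    y = np.asarray(obs_transformation, dtype=float)
    X = np.column_stack([np.ones_like(hr), hr, oc])   # intercept, head roll, OCR
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    c0, c_hr, c_ocr = coefs
    return c0, c_hr, c_ocr
```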
Finally, we computed the static OCR gain for each subject by fitting the following linear regression to our data:

\[
\text{OCR} = k + \text{OCR}_{\text{gain}} \cdot \text{head roll} \tag{4}
\]

where k is a constant and OCRgain represents the static OCR gain. Correlation coefficients were between 0.61 and 0.95. The OCR gain varied between 0.07 and 0.25. 
To assess to what extent the OCR is taken into account by the brain, we defined an OCR transformation index (similar to the transformation index, but comparing the observed direction to the direction predicted if the head roll is fully taken into account while the OCR is not). Figure 5d represents the observed OCR transformation as a function of the predicted 3D OCR transformation for data pooled across all subjects. A simple linear regression was fitted to the data. The correlation coefficient is small (R = 0.27) but highly significant, F(1, 3218) = 257, p < 0.001. The slope of the regression was 0.70 (±0.08 for the 95% confidence interval), indicating that, under the assumption that the head roll is perfectly taken into account, the OCR is also taken into account but only partially. The same analysis was carried out for each subject separately, and the slopes (and corresponding 95% confidence intervals) are represented in Figure 5e. All subjects appear to at least partially compensate for the OCR. 
Simple regression: effect of TT direction and velocity
We wanted to determine whether the amount of transformation (characterized by the slope of the regression between the observed transformation and the predicted 3D transformation; see Figure 5c) varied with TT direction or with TT velocity. For each variable (TT direction, TT velocity), we conducted an analysis of covariance testing for the homogeneity of slopes (observed correction as dependent variable, 3D correction as independent variable, and TT direction [or TT velocity] as covariate). TT direction had no effect on the amount of transformation (the regression slope), except for 2 subjects (S2: F[5, 590] = 2.48, p = 0.03, slope for the 0° TT direction slightly lower than for the other directions; S3: F[5, 370] = 2.39, p = 0.038, slope for the 180° TT direction slightly greater than for the other directions). For each subject, TT velocity had no effect on the regression slope. The interaction between TT direction and velocity also had no effect on the regression slope (analysis of covariance with two covariates: TT direction and TT velocity). 
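One way such a homogeneity-of-slopes test could be run is via the interaction term of a linear model, for example with statsmodels in Python (an assumption for illustration; the original analysis was carried out in MATLAB): a significant interaction between the predicted 3D transformation and TT direction would indicate that the regression slope depends on TT direction.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def slope_homogeneity_test(df):
    """Homogeneity-of-slopes test: does the slope of observed vs. predicted 3D
    transformation depend on TT direction? df is a pandas DataFrame with columns
    'observed', 'predicted', and 'tt_direction' (column names are assumptions)."""
    model = smf.ols("observed ~ predicted * C(tt_direction)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)  # F test, including the interaction term
```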
Simple regression: effect of arm latency
We investigated the effect of the arm latency on the amount of transformation. Indeed, it could be that the transformation is gradual and that some time is needed to take the extraretinal signals into account. Therefore, we divided the trials into latency bins of 50 ms, starting with the 150- to 200-ms bin. We then computed the observed transformation as a function of the predicted 3D transformation for the trials in each latency bin. The correlation coefficient was greater than 0.75 for each subject in all latency bins. For each subject, the latency group had no influence on the regression slope (Sheskin, 2007), indicating that subjects compensated as shown previously (see Figure 5), even for short latencies (between 150 and 200 ms). 
Learning effect
One could argue that the brain might not carry out a visuomotor transformation taking the 3D eye-head-shoulder geometry into account but might instead learn the task. However, there is no fixed mapping between retinal inputs and motor output, because the head-roll angle varies from trial to trial. Thus, even if subjects learned the task using feedback available after the occlusion (visual feedback for TT combined with proprioceptive feedback for the arm), they would still need to make use of the head-roll angle (as well as the OCR angle) to produce the observed results. Therefore, the question is rather: Do we need to learn this complex 3D transformation, or does the brain already implement it? 
To test between the two hypotheses, we compared transformation indices obtained by performing the regression over the first 25% and the last 25% of the trials collected for each subject. Correlation coefficients were significant in both cases (first 25% of the data: R25 > 0.78; last 25%: R75 > 0.83) for all subjects. There were no differences in regression slopes between the two groups (Sheskin, 2007). Even when taking only the first 5% of the data for each subject (about 10 data points), the correlation and the regression slope were significant, and the slope did not differ significantly from the one obtained using all the data. 
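The early-versus-late comparison could be implemented as below, using a standard test for the equality of two independent regression slopes (cf. Sheskin, 2007); the data are placeholders, and the exact test used in the original analysis may differ.

```python
# Sketch of the comparison between the first and last 25% of trials (placeholder data).
import numpy as np
from scipy import stats

def slope_and_se(x, y):
    res = stats.linregress(x, y)
    return res.slope, res.stderr, len(x)

rng = np.random.default_rng(3)
predicted = rng.uniform(-40, 40, 400)                 # predicted 3D transformation (deg)
observed = predicted + rng.normal(0, 5, 400)          # observed transformation (deg)
n25 = len(predicted) // 4

b1, se1, n1 = slope_and_se(predicted[:n25], observed[:n25])     # first 25% of trials
b2, se2, n2 = slope_and_se(predicted[-n25:], observed[-n25:])   # last 25% of trials

# t test for the difference between two independent regression slopes.
t = (b1 - b2) / np.sqrt(se1**2 + se2**2)
p = 2 * stats.t.sf(abs(t), df=n1 + n2 - 4)
print(f"slope(first 25%) = {b1:.2f}, slope(last 25%) = {b2:.2f}, p = {p:.3f}")
```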
Oblique gaze experiment
Figure 6 represents a typical trial in the oblique gaze condition. The subject first made a saccade to the FP (red disk) and then maintained fixation while tracking TT with the arm. The finger trajectory (black line) followed the TT trajectory (blue line, Figure 6a) during the whole movement, and in particular during its initial part. For this trial, the difference between the predictions of the two hypotheses is about 6°. 
Figure 6
 
Typical trial in the oblique gaze paradigm. Panel (a) represents trajectories in the screen coordinates after TT onset: TT trajectory (blue line) and projection of the eye-finger vector trajectory onto the screen (black line). Gaze fixation is also represented (red dot) for this particular trial. Panel (b) shows head-in-space 3D angular position and eye-in-head 3D Fick position, as well as the horizontal and vertical components of the tracking position and velocity as a function of time, during the trial. TT position and velocity as a function of time are represented by dotted lines. TT onset is indicated by the black vertical line, whereas TT occlusion is represented by the gray area.
Eye-in-head position (Figure 6b) is represented in Fick coordinates, which allows assessing the amount of torsion in these coordinates (magenta trace). Together with the 3D eye position lying in Listing's plane, this explains why the projection of the TT velocity vector onto the retina is slightly tilted. The head did not move, whereas the arm started to move a bit less than 300 ms after TT onset. Across all valid trials and all subjects, the mean arm latency was 244 ms (SD = 71 ms), and the mean arm latencies per subject ranged from 212 to 260 ms. 
Transformation
Figure 7a represents the observed transformation as a function of the predicted 3D transformation (same presentation as in Figure 5b). Here, observed transformation data are averaged (±SEM) over 2° intervals of predicted 3D transformation. If the brain used only the retinal information, there should be no transformation (0-slope orange line), whatever the amount of predicted 3D transformation (which depends on the oblique gaze position). But if the brain fully compensated for the misalignment between spatial and retinal coordinates when the eye lies in a tertiary position, then the observed transformation data should lie on the full-transformation line with a slope of 1 (blue line). 
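The binned averages plotted in Figure 7a could be computed as in the following sketch (placeholder data; in the actual analysis the two variables come from the oblique-gaze trials).

```python
# Sketch of the 2-degree binning with mean and SEM per bin (placeholder data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
predicted = rng.uniform(-12, 12, 1500)                 # predicted 3D transformation (deg)
observed = 0.9 * predicted + rng.normal(0, 4, 1500)    # observed transformation (deg)

bins = np.arange(-12, 14, 2)                           # 2-degree bins
df = pd.DataFrame({"pred": predicted, "obs": observed})
df["bin"] = pd.cut(df["pred"], bins)
summary = df.groupby("bin", observed=True)["obs"].agg(["mean", "sem", "count"])
print(summary)
```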
Figure 7
 
Quantitative results for the oblique gaze paradigm. (a) Standardized observed transformation as a function of the predicted 3D transformation (solid black, 2° bins, M ± SEM). A linear regression line (dashed black line) was fitted to the data. (b) Transformation for each subject and for all subjects pooled together. Same format as in Figure 5c.
A simple linear regression was fitted to the data pooled across all subjects (R = 0.27, slope = 0.86 ± 0.16, 95% confidence interval). This correlation was highly significant, F(1, 1443) = 114, p < 0.001. The observed transformation was partial: the regression slope was significantly less than 1 (one-sided t test: t[1443] = −1.78, p = 0.037). This shows that subjects took the 3D eye position into account, but not completely. 
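The regression and the one-sided test of the slope against 1 (full compensation) could be run as in this sketch; the numbers below are synthetic, not the pooled oblique-gaze data.

```python
# Sketch of the pooled regression and the one-sided t test of slope < 1 (placeholder data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
predicted = rng.uniform(-12, 12, 1445)                 # predicted 3D transformation (deg)
observed = 0.86 * predicted + rng.normal(0, 5, 1445)   # observed transformation (deg)

res = stats.linregress(predicted, observed)
t = (res.slope - 1.0) / res.stderr                     # H0: slope = 1, H1: slope < 1
p_one_sided = stats.t.cdf(t, df=len(predicted) - 2)
print(f"slope = {res.slope:.2f}, R = {res.rvalue:.2f}, t = {t:.2f}, one-sided p = {p_one_sided:.3f}")
```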
Figure 7b shows the regression slopes and the associated 95% confidence intervals for each subject. Correlation coefficients were significant for 5 of 7 subjects and were between 0.19 and 0.40. Subject S3 had a slope not significantly different from 0, meaning that he did not take the 3D eye position into account in this experiment. Subject S7 had a slope neither significantly different from 0 nor from 1, indicating that the results were too noisy to conclude for this subject. 
Among the 5 subjects whose correlation coefficient was significantly different from 0 (S1, S2, S4, S5, S6), all but one had a slope that was not significantly different from 1; subject S6 had a slope significantly lower than 1. Thus, 4 subjects seemed to fully compensate for the 3D eye position, whereas one other subject compensated only partially. Two subjects did not appear to compensate and used only retinal information. 
Discussion
This study aimed at determining whether the brain takes the complex nonlinear 3D eye-head-shoulder geometry into account in the visuomotor velocity transformation for the planning of visually guided manual tracking movements. If the 3D eye-in-head and head-on-shoulder signals are available to the brain for the transformation and their estimation is correct, then the brain can reconstruct the correct spatial motion (spatial hypothesis) and build an accurate spatial motor plan. But if they are not available to the neural areas responsible for the transformation, then only the visual motion may be taken into account (retinal hypothesis). If at least one of the extraretinal signals is not taken into account or is biased, then the estimation of the spatial motion, and consequently the manual tracking motor plan, will be affected. The spatial motion was known to the experimenter, but the retinal motion had to be estimated using a kinematic model of the 3D eye-head-shoulder system. The estimated retinal direction was used as a reference to assess to what extent the 3D eye and head positions are taken into account. Two experiments were designed to test how well the human brain takes these extraretinal signals into account. First, the head-roll experiment revealed that the brain accurately compensates for the head-roll rotation as well as for the resulting static OCR. For some subjects, this compensation was not perfect, which may be due to an underestimation of the OCR angle (see Figure 4e) and/or an overestimation of the head-roll angle. Second, the oblique gaze experiment showed that the brain at least partially takes into account the misalignment of retinal and spatial coordinates when the eyes are in tertiary positions (except for 2 of 7 subjects). Together, these results imply that the brain makes use of extraretinal signals in an internal model of 3D eye-head-shoulder geometry to carry out the visuomotor transformation of velocity signals for the planning of visually guided arm tracking movements. Signals related to the 3D eye position appear to be less well estimated by the brain than the head-roll angle. 
Feed-forward versus feedback strategy
A priori, it is not trivial that the brain needs to reconstruct the spatial target motion to plan the manual tracking arm movement. Indeed, the retinal direction predicts an approximately correct movement direction for the arm in most situations of everyday life, because we usually keep our head upright and foveate the target of interest. Therefore, it would be possible that in specific situations, when our head is tilted and/or our gaze lies in an eccentric position, we would initiate our arm movement in the incorrect direction as predicted by the retinal hypothesis but later (after sensory delays) correct the arm trajectory using sensory feedback. Indeed, the initiation of a motor response is not always accurate, and feedback is then used to adjust/correct the movement. For example, short-latency saccades toward flashed targets during smooth pursuit are not spatially accurate and are followed by corrective saccades (Blohm et al., 2005). During smooth pursuit eye movements, the initiation of the response to a sudden change in the target direction is independent of the target direction and becomes scaled only to the target direction change after 100 to 150 ms (Engel et al., 2000). Also, without visual feedback, there are systematic position errors when pointing to an eccentric target (Henriques et al., 1998; Van Pelt & Medendorp, 2008), indicating that the motor plan is not perfect. However, the retinal hypothesis would not be optimal in very challenging environments such as sport games or predator-prey hunting, which require fast and accurate movements. Our results show that the brain deals with the nonlinear 3D eye-head-shoulder geometry when interpreting retinal velocity signals for the planning of the manual tracking movement. 
Position versus velocity 3D visuomotor transformation
Our results confirm that the brain can perform complex nonlinear 3D visuomotor transformations in movement planning. The brain has already been shown to take 3D eye position into account in the visuomotor transformation of static target position for planning saccades to remembered targets (Crawford & Guitton, 1997; Klier & Crawford, 1998), to account for the nonlinear retinal projection geometry in the planning of pointing arm movements to static targets (Crawford, Henriques, & Vilis, 2000), to take 3D eye and head position into account in the planning of reaches toward static remembered targets (Blohm & Crawford, 2007), and to compensate for the offset between eye and head rotation centers in planning pointing movements (Blohm & Crawford, 2007; Henriques & Crawford, 2002). However, all the above-mentioned studies involved static targets in static situations (the eyes, head, or body were not moving during the task). 
In this study, we set out to tackle the question of dynamics, that is, manual tracking movements toward moving targets in a static eye-head-shoulder configuration. This mainly involved velocity information because in our paradigm (see Figure 2), the TT and arm were aligned before the onset of the target movement. Yet it is well known that neural pathways coding position and velocity signals are different (Krauzlis, 2005; Leigh & Zee, 2006). Recently, Blohm and Lefèvre (2010) showed that there is a visuomotor velocity transformation for smooth pursuit eye movements. However, it remains to be determined whether the visuomotor transformation for the planning of velocity-based manual tracking movements occurs at a different place in the brain than the one for reaching to static positions. Indeed, one could argue that before making the 3D visuomotor transformation, the brain either extrapolates position information from the initial velocity information or samples position information along the target trajectory. In this case, the brain would only need to transform a position signal like it does in the case of reaching movements toward static targets (Blohm & Crawford, 2007). Although this hypothesis is theoretically possible, we discuss several arguments in favor of separate mechanisms for moving targets (see below). 
First, we showed that for the head-roll paradigm, the transformation was perfect even for short arm latencies (as short as between 100 and 150 ms). This is also the case for the subgroup of trials with the TT moving at 10°/s. In this situation, the duration of visual motion available to initiate the arm movement is on average about 100 ms, during which a target moving at 10°/s covers only 1°. If we consider the difference between a target moving horizontally to the right with a 0° orientation and one with a 10° orientation (two of the six possible target directions in space), the deviation between the target positions after 100 ms is 1° × sin(10°) ≈ 0.17°. Such position differences can be discriminated when a visual reference is available, but it is unlikely that a difference this small can be discriminated without any visual reference (the experiment occurred in complete darkness except for the target itself). Because the 3D compensation is perfect for that category of trials, it is very likely that the direction of the velocity is transformed accurately by the brain to compute a spatially accurate motor plan. 
Second, our paradigm introduces a velocity error only between the tracking arm and the target (even if this introduces a position error over time). From a systems theoretical point of view, the most natural signal to drive the manual tracking movement is a velocity signal, just as during smooth pursuit. Indeed, several behavioral studies have suggested that the smooth pursuit and manual tracking systems share a common drive (Engel & Soechting, 2003; Maioli, Falciati, & Gianesini, 2007), and it is well known that smooth pursuit eye movements are primarily driven by velocity information (Krauzlis, 2005; Morris & Lisberger, 1987; Rashbass, 1961; Robinson, 1965). Furthermore, Van Donkelaar, Lee, and Gellman (1994) showed that subjects can match the target speed in a manual tracking task if their eyes are free to move, but they cannot reduce the position error when vision of their arm is prevented. 
Finally, brain areas mainly involved in visual motion processing have been shown to be directly implicated in the generation of manual tracking movements. In monkeys, MST-l (the lateral part of area MST) is involved in goal-directed arm movements (Ilg & Schumann, 2007), and MT neurons are modulated during manual tracking when visual feedback of the arm is available (Dannenberg, Gieselmann, Kruse, & Hoffmann, 2009), while simultaneous activity coding movement direction before and during manual tracking occurs in MT/MST and in primary motor cortex M1 (Kruse, Dannenberg, Kleiser, & Hoffmann, 2002). In humans, a functional magnetic resonance imaging (fMRI) study revealed that extrastriate visual area V5 (hMT+) is involved in the generation of manual tracking movements (Oreja-Guevara et al., 2004). A patient with bilateral damage to area V5 showed impairments when reaching toward moving objects (Schenk, Mai, Ditterich, & Zihl, 2000). Finally, applying transcranial direct current stimulation onto V5 enhances the percentage of correct manual tracking movements (Antal et al., 2004). 
Model assumptions
Note that the retinal velocity estimation model assumes that the head rotates around a single axis. However, it is known that for head movements following Donders' law, the location of the head rotation axis moves as a function of the rotation angle (Medendorp, Melis, Gielen, & Van Gisbergen, 1998), and it is reasonable to assume that this is also the case for head-roll movements. What are the consequences for the model predictions of assuming that the head-roll axis location does not move? From a qualitative point of view, not considering a moving axis has no consequence on the head-in-space orientation, but it does affect the translation of the head center of mass in space. However, in our paradigm, the change in retinal eccentricity caused by the slightly different head-in-space position is very small (a few degrees at most) when considering the moving axis, because the targets move on a screen located relatively far from the head, at about 100 cm. Figure 3e shows that for oblique gaze, the difference between the spatial and retinal predictions is less than 1.5° for targets whose eccentricity is less than 15°. Therefore, we may conclude that the difference in retinal direction due to the fixed-axis assumption is smaller than 1°, which is small compared with the change in retinal direction due to the head-roll angle itself (between 20° and 60°). As a result, not taking a rotation-angle-dependent roll axis into account is a valid assumption, because its consequences for the model predictions are very small compared with the main effects described here. 
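To make the order of magnitude concrete, the following back-of-the-envelope sketch (with assumed values for the axis offset and head-roll angle, which are not given in the text) computes the eye displacement produced by an off-centre roll axis and the corresponding change in target eccentricity for a screen at about 100 cm.

```python
# Back-of-the-envelope sketch of the argument above; offset and roll angle are illustrative.
import numpy as np

axis_offset_cm = 5.0          # assumed distance between true and modelled roll axes
roll_deg = 30.0               # head-roll angle
screen_dist_cm = 100.0        # screen distance used in the experiment

# Displacement of the eye caused by rolling about the off-centre axis instead of the
# modelled one (chord length of the arc swept by the eye).
eye_shift_cm = 2 * axis_offset_cm * np.sin(np.deg2rad(roll_deg) / 2)

# Corresponding change in target eccentricity seen from the displaced eye.
ecc_change_deg = np.rad2deg(np.arctan2(eye_shift_cm, screen_dist_cm))
print(f"eye shift = {eye_shift_cm:.1f} cm, eccentricity change = {ecc_change_deg:.1f} deg")
# With eccentricity changes of a degree or two, Figure 3e implies that the change in the
# predicted retinal direction stays well below 1 deg.
```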
Neurophysiological correlates and implications
It would be interesting to know where in the brain the visuomotor velocity transformation occurs. There are several brain areas in which this 3D visuomotor velocity transformation might be carried out. One possibility is area MST, in the posterior parietal cortex (PPC; Andersen, Snyder, Bradley, & Xing, 1997). Indeed, some MST neurons encode target motion in world-centered coordinates, taking eye velocity and head velocity into account (Ilg, Schumann, & Thier, 2004; Inaba, Miura, & Kawano, 2011). Monkey MT and MST neurons are found to be modulated by eye position in a linear gain-field manner (Bremmer, Ilg, Thiele, Distler, & Hoffmann, 1997), and human MT+ area activity is modulated by eye position (DeSouza, Dukelow, & Vilis, 2002). Theoretically, such gain-field modulations can carry out reference frame transformation in a distributed way (Blohm, Keith, & Crawford, 2009; Pouget, Deneve, & Duhamel, 2002; Pouget & Sejnowski, 1997; Zipser & Andersen, 1988). 
The 3D visuomotor transformation could also occur in other parts of the PPC, which is involved in the planning of reaching movements (Buneo & Andersen, 2006). The signals necessary to carry out the transformation all modulate the neuronal activity of many reach-related areas in the PPC. First, in monkeys, velocity signals coming from visual motion were found to modulate the neuronal activity of several parietal areas, including VIP (Colby, Duhamel, & Goldberg, 1993), LIP (Eskandar & Assad, 2002; Fanini & Assad, 2009), area 7a (Merchant, Battaglia-Mayer, & Georgopoulos, 2001), and area MIP (Eskandar & Assad, 2002). In humans, fMRI studies also revealed several parietal areas along the intraparietal sulcus modulated by visual velocity signals (Konen & Kastner, 2008; Sunaert, Van Hecke, Marchal, & Orban, 1999). Second, head position signals modulate neuronal activity in the PPC (Brotchie, Andersen, Snyder, & Goodman, 1995; Brotchie et al., 2003) as well as eye position signals (Andersen, Bracewell, Barash, Gnadt, & Fogassi, 1990; Chang, Papadimitriou, & Snyder, 2009; Duhamel, Bremmer, BenHamed, & Graf, 1997; Galletti, Battaglini, & Fattori, 1995). Area MIP and LIP also receive eye position and velocity signals (Prevosto, Graf, & Ugolini, 2009). In addition, human parietal reach region activity is modulated by eye position (DeSouza et al., 2000). Thus, reach-related areas of the PPC seem to have the necessary incoming signals to carry out the visuomotor velocity transformation. 
Extraretinal signals in updating and visuomotor tasks
Previous studies have shown that the brain has specific mechanisms to compensate for 3D eye, head, or body rotations when dealing with position information. For example, when dealing with spatial updating for saccades, the brain takes the head roll into account, in active (Medendorp, Smith, Tweed, & Crawford, 2002) or passive (Klier, Angelaki, & Hess, 2005) conditions. The brain also compensates for static OCR in motor (Klier & Crawford, 1998; Medendorp et al., 2002) and perceptual (Haustein, 1992; Poljac, Lankheet, & van den Berg, 2005) tasks. Furthermore, the brain takes the misalignment between eye and head coordinates into account when the eyes lie in tertiary positions in visuomotor position transformation tasks for saccades (Klier & Crawford, 1998) and reaching (Blohm & Crawford, 2007), positional updating for saccades (Smith & Crawford, 2001), and perceptual tasks (Haustein & Mittelstaedt, 1990; Poljac et al., 2005). 
However, regarding velocity information, there are few studies that dealt with such 3D nonlinear aspects. A recent study shows that visual motion is also updated after head-roll rotation in a saccadic and a perceptual task (Ruiz-Ruiz & Martinez-Trujillo, 2008). Thus, velocity information can be updated by the brain. Our study addressed whether velocity information was also transformed using a model of body geometry to compute the motor plan. We show that this is indeed the case and that the brain takes head roll and OCR angles as well as the misalignment between spatial and retinal coordinates into account, at least partially. Therefore, brain areas involved in the visuomotor velocity transformation for the planning of manual tracking movements must be provided with 3D eye and head position signals. 
Acknowledgments
Support for this work was provided by Fonds National de la Recherche Scientifique, Action de Recherche Concertée (Belgium). This paper presents research results of the Belgian Network Dynamical Systems, Control and Optimization, funded by the Interuniversity Attraction Poles Programmes, initiated by the Belgian State, Science Policy Office. This work has been supported by NSERC (Canada), ORF (Canada), CFI (Canada) and the Botterell Foundation (Queen's University, Kingston, ON, Canada). 
Commercial relationships: none. 
Corresponding author: Philippe Lefèvre. 
E-mail: philippe.lefevre@uclouvain.be. 
Address: Institute of Information and Communication Technologies, Electronics and Applied Mathematics, and Institute of Neuroscience, Université catholique de Louvain, Louvain-la-Neuve, Belgium. 
References
Andersen R. A. Bracewell R. M. Barash S. Gnadt J. W. Fogassi L . (1990). Eye position effects on visual, memory, and saccade-related activity in areas LIP and 7a of macaque. Journal of Neuroscience, 10(4):1176–1196. [PubMed]
Andersen R. A. Snyder L. H. Bradley D. C. Xing J . (1997) Multimodal representation of space in the posterior parietal cortex and its use in planning movements. Annual Review of Neuroscience, 20:303–330. [CrossRef] [PubMed]
Antal A. Nitsche M. A. Kruse W. Kincses T. Z. Hoffmann K. P. Paulus W . (2004) Direct current stimulation over V5 enhances visuomotor coordination by improving motion perception in humans. Journal of Cognitive Neuroscience, 16(4): 521–527. [CrossRef] [PubMed]
Badler J. B. Heinen S. J . (2006) Anticipatory movement timing using prediction and external cues. Journal of Neuroscience, 26(17):4519–4525. [CrossRef] [PubMed]
Blohm G. Crawford J. D. (2007) Computations for geometrically accurate visually guided reaching in 3-D space. Journal of Vision, 7(5):1–22. [CrossRef]
Blohm G. Keith G. P. Crawford J. D . (2009) Decoding the cortical transformations for visually guided reaching in 3D space. Cerebral Cortex, 19(6): 1372–1393. [CrossRef] [PubMed]
Blohm G. Lefèvre P . (2010) Visuomotor velocity transformations for smooth pursuit eye movements. Journal of Neurophysiology, 104(4): 2103–2115. [CrossRef] [PubMed]
Blohm G. Missal M. Lefèvre P . (2005) Processing of retinal and extraretinal signals for memory-guided saccades during smooth pursuit. Journal of Neurophysiology, 93(3):1510–1522. [PubMed]
Bockisch C. J. Haslwanter T . (2001) Three-dimensional eye position during static roll and pitch in humans. Vision Research, 41(16):2127–2137. [CrossRef] [PubMed]
Bremmer F. Ilg U. J. Thiele A. Distler C. Hoffmann K. P . (1997) Eye position effects in monkey cortex. I. Visual and pursuit-related activity in extrastriate areas MT and MST. Journal of Neurophysiology, 77(2):944–961. [PubMed]
Brotchie P. R. Andersen R. A. Snyder L. H. Goodman S. J . (1995) Head position signals used by parietal neurons to encode locations of visual stimuli. Nature, 375(6528):232–235. [CrossRef] [PubMed]
Brotchie P. R. Lee M. B. Chen D. Y. Lourensz M. Jackson G. Bradley W. G.Jr . (2003) Head position modulates activity in the human parietal eye fields. Neuroimage, 18(1):178–184. [CrossRef] [PubMed]
Buneo C. A. Andersen R. A . (2006) The posterior parietal cortex: Sensorimotor interface for the planning and online control of visually guided movements. Neuropsychologia, 44(13):2594–2606. [CrossRef] [PubMed]
Carl J. R. Gellman R. S . (1987) Human smooth pursuit: Stimulus-dependent responses. Journal of Neurophysiology, 57(5):1446–1463. [PubMed]
Chang S. W. C. Papadimitriou C. Snyder L. H . (2009) Using a compound gain field to compute a reach plan. Neuron, 64(5):744–755. [CrossRef] [PubMed]
Colby C. L. Duhamel J. R. Goldberg M. E . (1993) Ventral intraparietal area of the macaque: Anatomic location and visual response properties. Journal of Neurophysiology, 69(3):902 [PubMed]
Collewijn H. Van der Steen J. Ferman L. Jansen T. C . (1985) Human ocular counterroll: Assessment of static and dynamic properties from electromagnetic scleral coil recordings. Experimental Brain Research, 59(1):185–196. [CrossRef] [PubMed]
Crawford J. D. Guitton D . (1997) Visual-motor transformations required for accurate and kinematically correct saccades. Journal of Neurophysiology, 78(3):1447–1467. [PubMed]
Crawford J. D. Henriques D. Y. Vilis T . (2000) Curvature of visual space under vertical eye rotation: Implications for spatial vision and visuomotor control. Journal of Neuroscience, 20(6):2360–2368. [PubMed]
Crawford J. D. Medendorp W. P. Marotta J. J . (2004) Spatial transformations for eye-hand coordination. Journal of Neurophysiology, 92(1):10–19. [CrossRef] [PubMed]
Crawford J. D. Vilis T . (1991) Axes of eye rotation and Listing's law during rotations of the head. Journal of Neurophysiology, 65(3):407–423. [PubMed]
Dannenberg S. Gieselmann M. A. Kruse W. Hoffmann K. P . (2009) Influence of visually guided tracking arm movements on single cell activity in area MT. Experimental Brain Research, 199(3–4):355–368. [CrossRef] [PubMed]
Desmurget M. Grafton S . (2000) Forward modeling allows feedback control for fast reaching movements. Trends in Cognitive Science, 4(11):423–431. [CrossRef]
DeSouza J. F. Dukelow S. P. Gati J. S. Menon R. S. Andersen R. A. Vilis T . (2000) Eye position signal modulates a human parietal pointing region during memory-guided movements. Journal of Neuroscience, 20(15):5835–5840. [PubMed]
DeSouza J. F. Dukelow S. P. Vilis T . (2002) Eye position signals modulate early dorsal and ventral visual areas. Cerebral Cortex, 12(9):991–997. [CrossRef] [PubMed]
Dowdy S. Wearden S. Chilko D . (2004) Statistics for research (3rd ed.) Hoboken, NJ:Wiley
Duhamel J. R. Bremmer F. BenHamed S. Graf W . (1997 ). Spatial invariance of visual receptive fields in parietal cortex neurons. Nature, 389(6653):845–848. [CrossRef] [PubMed]
Engel K. C. Anderson J. H. Soechting J. F . (2000) Similarity in the response of smooth pursuit and manual tracking to a change in the direction of target motion. Journal of Neurophysiology, 84(3):1149–1156. [PubMed]
Engel K. C. Soechting J. F . (2003) Interactions between ocular motor and manual responses during two-dimensional tracking. Progress in Brain Research, 142:141–153. [CrossRef] [PubMed]
Eskandar E. N. Assad J. A . (2002) Distinct nature of directional signals among parietal cortical areas during visual guidance. Journal of Neurophysiology, 88(4):1777–1790. [PubMed]
Fanini A. Assad J. A . (2009) Direction selectivity of neurons in the macaque lateral intraparietal area. Journal of Neurophysiology, 101(1):289–305. [PubMed]
Galletti C. Battaglini P. P. Fattori P . (1995) Eye position influence on the parieto-occipital area PO (V6) of the macaque monkey. European Journal of Neuroscience, 7(12):2486–2501. [CrossRef] [PubMed]
Goltz H. C. Mirabella G. Leung J. C. Blakeman A. W. Colpa L. Abuhaleeqa K. . (2009) Effects of age, viewing distance and target complexity on static ocular counterroll. Vision Research, 49(14):1848–1852. [CrossRef] [PubMed]
Haslwanter T . (1995) Mathematics of three-dimensional eye rotations. Vision Research, 35(12):1727–1739. [CrossRef] [PubMed]
Haustein W . (1989) Considerations on Listing's law and the primary position by means of a matrix description of eye position control. Biological Cybernetics, 60(6):411–420. [PubMed]
Haustein W . (1992) Head-centric visual localization with lateral body tilt. Vision Research, 32(4):669–673. [CrossRef] [PubMed]
Haustein W. Mittelstaedt H . (1990) Evaluation of retinal orientation and gaze direction in the perception of the vertical. Vision Research, 30(2):255–262. [CrossRef] [PubMed]
Henriques D. Y. Crawford J. D . (2002) Role of eye, head, and shoulder geometry in the planning of accurate arm movements. Journal of Neurophysiology, 87(4):1677–1685. [PubMed]
Henriques D. Y. Klier E. M. Smith M. A. Lowy D. Crawford J. D . (1998) Gaze-centered remapping of remembered visual space in an open-loop pointing task. Journal of Neuroscience, 18(4):1583–1594. [PubMed]
Ilg U. J. Schumann S . (2007) Primate area MST-l is involved in the generation of goal-directed eye and hand movements. Journal of Neurophysiology, 97(1):761–771. [CrossRef] [PubMed]
Ilg U. J. Schumann S. Thier P . (2004 ). Posterior parietal cortex neurons encode target motion in world-centered coordinates. Neuron, 43(1):145–151. [CrossRef] [PubMed]
Inaba N. Miura K. Kawano K . (2011) Direction and speed tuning to visual motion in cortical areas MT and MSTd during smooth pursuit eye movements. Journal of Neurophysiology, 105(4):1531–1545. [CrossRef] [PubMed]
Khan A. Z. Crawford J. D . (2003) Coordinating one hand with two eyes: Optimizing for field of view in a pointing task. Vision Research, 43(4):409–417. [CrossRef] [PubMed]
Klier E. M. Angelaki D. E. Hess B. J . (2005) Roles of gravitational cues and efference copy signals in the rotational updating of memory saccades. Journal of Neurophysiology, 94(1):468–478. [CrossRef] [PubMed]
Klier E. M. Crawford J. D . (1998) Human oculomotor system accounts for 3-D eye orientation in the visual-motor transformation for saccades. Journal of Neurophysiology, 80(5):2274–2294. [PubMed]
Konen C. S. Kastner S . (2008) Representation of eye movements and stimulus motion in topographically organized areas of human posterior parietal cortex. Journal of Neuroscience, 28(33):8361–8375. [CrossRef] [PubMed]
Krauzlis R. J . (2005) The control of voluntary eye movements: New perspectives. Neuroscientist, 11(2):124–137. [CrossRef] [PubMed]
Krauzlis R. J. Miles F. A . (1996) Decreases in the latency of smooth pursuit and saccadic eye movements produced by the “gap paradigm” in the monkey. Vision Research, 36(13):1973–1985. [CrossRef] [PubMed]
Kruse W. Dannenberg S. Kleiser R. Hoffmann K.- . (2002) Temporal relation of population activity in visual areas MT/MST and in primary motor cortex during visually guided tracking movements. Cerebral Cortex, 12(5):466–476. [CrossRef] [PubMed]
Leigh R. J. Zee D. S. (2006). The neurology of eye movements. New York: Oxford University Press.
Maioli C. Falciati L. Gianesini T . (2007) Pursuit eye movements involve a covert motor plan for manual tracking. Journal of Neuroscience, 27(27):7168–7173. [CrossRef] [PubMed]
Medendorp W. P. Melis B. J. Gielen C. C. Van Gisbergen J. A . (1998) Off-centric rotation axes in natural head movements: Implications for vestibular reafference and kinematic redundancy. Journal of Neurophysiology, 79(4):2025–2039. [PubMed]
Medendorp W. P. Smith M. A. Tweed D. B. Crawford J. D . (2002) Rotational remapping in human spatial memory during eye and head motion. Journal of Neuroscience, 22(1):RC196
Merchant H. Battaglia-Mayer A. Georgopoulos A. P . (2001) Effects of optic flow in motor cortex and area 7a. Journal of Neurophysiology, 86(4):1937–1954. [PubMed]
Morris E. J. Lisberger S. G . (1987) Different responses to small visual errors during initiation and maintenance of smooth-pursuit eye movements in monkeys. Journal of Neurophysiology, 58(6):1351–1369. [PubMed]
Oreja-Guevara C. Kleiser R. Paulus W. Kruse W. Seitz R. J. Hoffmann K. P . (2004) The role of V5 (hMT+) in visually guided hand movements: An fMRI study. European Journal of Neuroscience, 19(11):3113–3120. [CrossRef] [PubMed]
Poljac E. Lankheet M. J. van den Berg A. V . (2005) Perceptual compensation for eye torsion. Vision Research, 45(4):485–496. [CrossRef] [PubMed]
Pouget A. Deneve S. Duhamel J. R . (2002 ). A computational perspective on the neural basis of multisensory spatial representations. Nature Reviews Neuroscience, 3(9):741–747. [CrossRef] [PubMed]
Pouget A. Sejnowski T. J . (1997 ). Spatial transformations in the parietal cortex using basis functions. Journal of Cognitive Neuroscience, 9(2):222–237. [CrossRef] [PubMed]
Prevosto V. Graf W. Ugolini G . (2009) Posterior parietal cortex areas MIP and LIPv receive eye position and velocity inputs via ascending preposito-thalamo-cortical pathways. European Journal of Neuroscience, 30(6):1151–1161. [CrossRef] [PubMed]
Rashbass C. (1961) The relationship between saccadic and smooth tracking eye movements. Journal of Physiology, 159:326–338. [CrossRef] [PubMed]
Robinson D. A . (1965) The mechanics of human smooth pursuit eye movement. Journal of Physiology, 180(3):569–591. [CrossRef] [PubMed]
Ruiz-Ruiz M. Martinez-Trujillo J. C . (2008) Human updating of visual motion direction during head rotations. Journal of Neurophysiology, 99(5):2558–2576. [CrossRef] [PubMed]
Schenk T. Mai N. Ditterich J. Zihl J . (2000) Can a motion-blind patient reach for moving objects?. European Journal of Neuroscience, 12(9):3351–3360. [CrossRef] [PubMed]
Schreiber K. Haslwanter T . (2004) Improving calibration of 3-D video oculography systems. IEEE Transactions on Biomedical Engineering, 51(4):676–679. [CrossRef] [PubMed]
Shadmehr R. Wise S. P . (2005) The computational neurobiology of reaching and pointing: A foundation for motor learning, Cambridge, MA:MIT Press
Sheskin D. J . (2007) Handbook of parametric and nonparametric statistical procedures, Boca Raton, FL:Chapman & Hall/CRC
Smith M. A. Crawford J. D . (2001) Implications of ocular kinematics for the internal updating of visual space. Journal of Neurophysiology, 86(4):2112–2117. [PubMed]
Sunaert S. Van Hecke P. Marchal G. Orban G. A . (1999) Motion-responsive regions of the human brain. Experimental Brain Research, 127(4):355–370. [CrossRef] [PubMed]
Tweed D. Cadera W. Vilis T . (1990) Computing three-dimensional eye position quaternions and eye velocity from search coil signals. Vision Research, 30(1):97–110. [CrossRef] [PubMed]
Tweed D. Vilis T . (1987) Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832–849. [PubMed]
van Donkelaar P. Lee R. G. Gellman R. S . (1994) The contribution of retinal and extraretinal signals to manual tracking movements. Experimental Brain Research, 99(1):155–163. [CrossRef] [PubMed]
Van Pelt S. Medendorp W. P . (2008) Updating target distance across eye movements in depth. Journal of Neurophysiology, 99(5):2281–2290. [CrossRef] [PubMed]
Zipser D. Andersen R. A . (1988) A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature, 331(6158):679–684. [CrossRef] [PubMed]
Appendix
We built a mathematical model that computes the retinal target direction (the direction of the retinal velocity vector), given the spatial velocity input and 3D eye and head positions (see Figure A1). 
Figure A1
 
Scheme for the estimation of the target retinal velocity. The input of the model is the target spatial velocity. The output is the target retinal velocity.
Formalism and notations
The model was implemented using the dual quaternion formalism (Blohm & Crawford, 2007). In a sense, dual quaternions extend the notion of quaternions, which are commonly used in the field of 3D eye and head rotations (Crawford & Vilis, 1991; Haslwanter, 1995; Tweed, Cadera, & Vilis, 1990; Tweed & Vilis, 1987). A dual quaternion can easily represent a screw motion, namely, a rotation of angle θ_i around an axis of orientation n_i passing through a point p_i, followed by a translation of length d_i along this axis. It is then easy to obtain a pure translation or a pure rotation from this general operation. Each translation or rotation operation is described by a dual quaternion operator, and these operations can be combined using the dual quaternion product. The useful dual quaternion operations are described elsewhere (Blohm & Crawford, 2007). 
The position of a point TT in a given reference frame is represented by the dual quaternion TT_E = 1 + ε t_E, where TT_E is a dual quaternion representing the target position in eye-centered, eye-fixed coordinates (-centered refers to the origin of the reference frame, whereas -fixed refers to the body to which the reference frame is attached [or fixed]), ε is an operator with the property ε² = 0, and t_E is the 3D vector representing the coordinates of the target position. 
Using dual quaternions, we can express the position of point TT in another reference frame (i.e., relative to another orientation and located at another place in space). For example, the location of point TT in head-centered, head-fixed coordinates is described using the following relationship: TT_H = T_EH (R_EH TT_E R*_EH), where R_EH is a rotation dual quaternion representing the 3D orientation of the eye in the head, T_EH is a translation dual quaternion representing the offset between the eye and head rotation centers (in other words, the position of the eye expressed in a reference frame fixed to the head and centered on the head rotation center), and R*_EH is the dual quaternion conjugate of R_EH. 
The velocity of the point TT in a given reference frame is represented by the dual quaternion V_E = ε v_E, where v_E is the 3D vector representing the coordinates of the target velocity. We can easily derive the velocity of point TT in another reference frame, assuming that the rotation and translation are time invariant: V_H = T_EH (R_EH V_E R*_EH) = R_EH V_E R*_EH, where the second equality follows from ε² = 0 and illustrates the fact that a fixed offset between two reference frames does not affect a velocity vector. 
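The following numerical sketch illustrates this bookkeeping under the convention reconstructed above (a position represented as 1 + ε t, a pure translation applied by plain dual quaternion multiplication, a pure rotation applied by the quaternion sandwich). The example angles and offsets are assumptions for illustration, and the code is ours, not the authors' implementation.

```python
# Numerical sketch of dual quaternion position and velocity transformations (illustration only).
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(a):
    """Quaternion conjugate."""
    return np.array([a[0], -a[1], -a[2], -a[3]])

def dqmul(A, B):
    """Dual quaternion product (Ar + eps Ad)(Br + eps Bd), using eps^2 = 0."""
    Ar, Ad = A
    Br, Bd = B
    return (qmul(Ar, Br), qmul(Ar, Bd) + qmul(Ad, Br))

def rot_dq(axis, angle_deg):
    """Pure rotation dual quaternion (dual part zero)."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    half = np.deg2rad(angle_deg) / 2
    return (np.concatenate(([np.cos(half)], np.sin(half) * axis)), np.zeros(4))

def trans_dq(t):
    """Pure translation dual quaternion 1 + eps t."""
    return (np.array([1.0, 0.0, 0.0, 0.0]), np.concatenate(([0.0], t)))

def point_dq(p):
    """Position dual quaternion 1 + eps p."""
    return (np.array([1.0, 0.0, 0.0, 0.0]), np.concatenate(([0.0], p)))

def vel_dq(v):
    """Velocity dual quaternion eps v (derivative of a position dual quaternion)."""
    return (np.zeros(4), np.concatenate(([0.0], v)))

def sandwich(R, X):
    """Apply a pure rotation dual quaternion R to X as R X R*."""
    Rr, _ = R
    Xr, Xd = X
    return (qmul(qmul(Rr, Xr), qconj(Rr)), qmul(qmul(Rr, Xd), qconj(Rr)))

# Assumed example values: 10 deg of ocular torsion about the line of sight and a
# 1.5 cm eye-to-head-rotation-center offset (illustrative, not measured data).
R_EH = rot_dq([0, 0, 1], 10.0)
T_EH = trans_dq(np.array([0.0, 0.0, 1.5]))

TT_eye = point_dq(np.array([20.0, 5.0, 100.0]))   # a target position in eye coordinates (cm)
TT_head = dqmul(T_EH, sandwich(R_EH, TT_eye))     # position: rotation, then translation

V_eye = vel_dq(np.array([10.0, 0.0, 0.0]))        # a target velocity in eye coordinates (cm/s)
V_head = dqmul(T_EH, sandwich(R_EH, V_eye))       # velocity: the translation has no effect

print("position in head coordinates:", TT_head[1][1:])
print("velocity in head coordinates:", V_head[1][1:])
```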
Target retinal velocity estimation model
Here, the model input is the shoulder-centered, shoulder-fixed velocity vector, whereas the model output is the eye-centered, eye-fixed velocity vector. They are related in the following way: V_S = R_HS (R_EH V_E R*_EH) R*_HS, and equivalently V_E = R*_EH (R*_HS V_S R_HS) R_EH, where R_HS is the rotation dual quaternion representing the 3D orientation of the head on the shoulder and R*_HS is its conjugate (translations between the reference frames drop out of the velocity transformation, as shown above). 
In practice, in our experiments, the spatial coordinates of the tracking target (TT, see Figure 2) velocity vector, V_S, are known (i.e., its coordinates in the shoulder-centered, shoulder-fixed frame, because the shoulder is fixed in space). In particular, the direction of this vector in the frontoparallel plane is known. To obtain the velocity vector in an eye-centered, eye-fixed reference frame (i.e., what the eyes see), we apply the model described above. Three-dimensional head and eye rotations were given by measurements (see the Materials and Methods section). Once the velocity error vector was obtained in eye-centered, eye-fixed coordinates, we computed the direction of this vector in a plane orthogonal to the line of sight. The obtained direction depends on the eye-head-shoulder configuration and therefore usually differs from the TT direction in space. 
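Because fixed offsets drop out of the velocity transformation, the retinal direction of the TT velocity can be sketched with plain rotation matrices, as below. The rotation angles, axis conventions, and the Fick-like rotation order are assumptions for illustration; the actual model used the measured 3D postures.

```python
# Simplified sketch of the retinal-direction estimation (cf. Figure A1), using rotation
# matrices only, since fixed translations do not affect velocity vectors. Axis convention
# assumed here: x rightward, y upward, z along the line of sight; all angles illustrative.
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

deg = np.deg2rad
R_head = rot_z(deg(30))                                     # head rolled 30 deg (head-on-shoulder)
R_eye = rot_y(deg(-15)) @ rot_x(deg(15)) @ rot_z(deg(-4))   # eye-in-head, Fick-like order, ~4 deg OCR

v_space = np.array([1.0, 0.0, 0.0])        # TT velocity in space: rightward on the screen
# Express the space-fixed velocity in eye-fixed coordinates (R_head maps head to space,
# R_eye maps eye to head, so their transposes map space back to the eye).
v_retina = R_eye.T @ R_head.T @ v_space

# Direction in the plane orthogonal to the line of sight (the eye-fixed x-y plane here).
retinal_dir = np.degrees(np.arctan2(v_retina[1], v_retina[0]))
spatial_dir = np.degrees(np.arctan2(v_space[1], v_space[0]))
print(f"spatial direction = {spatial_dir:.1f} deg, retinal direction = {retinal_dir:.1f} deg")
```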
Table 1
 
Correlation coefficients and regression coefficients for each subject (S1 to S7) and for all subjects' data pooled together (All). Notes: Rsimple_HR (resp. Rsimple_OCR): correlation coefficient of the simple linear regression between observed correction (dependent variable) and head-roll angle (resp. OCR angle). Rmult: correlation coefficient of the multiple regression between observed correction and the head-roll and OCR angles (see Equation 3). Part. RHR (resp. Part. ROCR) is the partial regression coefficient of the head-roll (resp. OCR) angle in the multiple regression analysis. The multiple regression equation is: corrobs = c0 + cHR*head roll + cOCR*OCR. The next two columns indicate the regression coefficients corresponding to each variable, with their 95% confidence intervals. **p < 0.001. *p < 0.05 for the one-sided t test assessing whether the regression coefficient is significantly greater or less than 1. The last two columns indicate the correlation coefficient ROCRgain of the simple linear regression (Equation 4) between OCR and head-roll angles, and OCRgain, the slope of that regression.
Subject | Rsimple_HR | Rsimple_OCR | Rmult | Part. RHR | Part. ROCR | cHR [95% CI] | cOCR [95% CI] | ROCRgain | OCRgain
S1 | 0.940** | 0.616** | 0.945** | 0.910** | 0.284** | 1.05* [1.01 1.10] | 0.87 [0.58 1.15] | 0.73 | 0.12
S2 | 0.929** | 0.619** | 0.934** | 0.890** | 0.255** | 1.10* [1.05 1.14] | 0.97 [0.67 1.26] | 0.74 | 0.11
S3 | 0.848** | 0.424** | 0.856** | 0.822** | 0.223** | 1.05 [0.98 1.12] | 1.47 [0.82 2.12] | 0.61 | 0.07
S4 | 0.801** | 0.409** | 0.809** | 0.765** | 0.195** | 1.14* [1.05 1.23] | 0.88 [0.48 1.27] | 0.72 | 0.14
S5 | 0.907** | 0.696** | 0.911** | 0.819** | 0.198** | 1.07* [1.00 1.14] | 1.05 [0.57 1.53] | 0.87 | 0.12
S6 | 0.871** | 0.586** | 0.880** | 0.811** | 0.252** | 1.02 [0.95 1.08] | 0.90 [0.59 1.21] | 0.76 | 0.16
S7 | 0.932** | 0.865** | 0.935** | 0.708** | 0.20** | 1.26* [1.13 1.38] | 1.00 [0.53 1.48] | 0.95 | 0.25
All | 0.891** | 0.608** | 0.896** | 0.830** | 0.222** | 1.09* [1.07 1.12] | 0.87* [0.73 1.00] | 0.77 | 0.15