Gaze behavior during visuomotor tracking with complex hand-cursor dynamics
Author Affiliations
  • James Mathew
    Aix-Marseille Université, CNRS, Institut de Neurosciences de la Timone, Marseille, France
    Current affiliation: Institute of Neuroscience, Institute of Communication & Information Technologies, Electronics & Applied Mathematics, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
  • J. Randall Flanagan
    Department of Psychology and Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
  • Frederic R. Danion
    Aix-Marseille Université, CNRS, Institut de Neurosciences de la Timone, Marseille, France
    frederic.danion@univ-amu.fr
Journal of Vision December 2019, Vol. 19(14), 24. doi: https://doi.org/10.1167/19.14.24
Abstract

The ability to track a moving target with the hand has been extensively studied, but few studies have characterized gaze behavior during this task. Here we investigated gaze behavior while participants learned a new mapping between hand and cursor motion, in which the cursor represented the position of a virtual mass attached to the grasped handle via a virtual spring. Depending on the experimental condition, haptic feedback consistent with the mass-spring dynamics could also be provided. For comparison, a simple one-to-one hand-cursor mapping was also tested. We hypothesized that gaze would be drawn, at times, to the cursor in the mass-spring conditions, especially in the absence of haptic feedback. As expected, hand tracking performance was less accurate under the spring mapping, but gaze behavior was virtually unaffected by the spring mapping, regardless of whether haptic feedback was provided. Specifically, the relative gaze position between target and cursor, the rate of saccades, and the gain of smooth pursuit were similar under both mappings and both haptic feedback conditions. We conclude that, even when participants are exposed to a challenging hand-cursor mapping, gaze is primarily concerned with ongoing target motion, suggesting that peripheral vision is sufficient to monitor cursor position and to update hand movement control.

Introduction
The ability to track moving objects with the hand, or an object held in the hand, is important in many natural tasks and has been extensively studied (Foulkes & Miall, 2000; Miall, Weir, & Stein, 1993; Poulton, 1974; Streng, Popa, & Ebner, 2018). However, how such tracking behavior is supported by gaze has received less attention (Danion & Flanagan, 2018; Miall, Reckess, & Imamizu, 2001; Xia & Barnes, 1999). Previous studies have shown that when tracking simple (sinusoidal) or complex (Danion & Flanagan, 2018; Koken & Erkelens, 1992; Niehorster, Siu, & Li, 2015; Tramper & Gielen, 2011) trajectories with a cursor controlled by the hand, gaze typically leads the hand while both gaze and hand tend to lag behind the target. However, previous work has mostly focused on simple hand-cursor mappings, and it is not clear whether this observation holds for arm movements performed under more complex mappings, such as those that arise when tracking with a hand-held object that has its own dynamics (e.g., a mass attached to the hand via a spring). Previous work has explored the effects of delaying (Foulkes & Miall, 2000; Miall & Jackson, 2006; Vercher & Gauthier, 1992), inverting (Grigorova & Bock, 2006; Vercher, Quaccia, & Gauthier, 1995), and rotating visual feedback of the hand (Gouirand, Mathew, Brenner, & Danion, 2019). However, the effects of more complex perturbations, linked to object dynamics, on eye-hand coordination remain to be fully explored. 
When asked to track a moving target with the hand, participants must monitor not only the target position but also the current cursor position. Evaluating the difference between target and cursor position is essential for accurate hand tracking. When people perform full arm movements under a simple (one-to-one) hand–cursor relationship, their gaze is much closer to the target than the cursor (Danion & Flanagan, 2018), suggesting that an estimate of cursor position is accessible through peripheral vision and/or arm (efferent/afferent) signals. We recently showed that when participants track a moving target with a joystick, their gaze is also closer to the target than the cursor after adapting to a visuomotor rotation that rotates the cursor away from the hand but preserves a one-to-one mapping between cursor speed and hand speed (Gouirand et al., 2019). The goal of the current study was to determine whether fixating the target with the eyes is a gaze strategy that extends to full arm movements performed under more complex (nonlinear) hand–cursor mappings. 
We asked participants to move a cursor controlled by the hand. In the spring condition, the cursor behaved like a mass attached to the hand by means of a spring (Danion, Diamond, & Flanagan, 2012; Dingwell, Mah, & Mussa-Ivaldi, 2002; Landelle, Montagnini, Madelain, & Danion, 2016; Nagengast, Braun, & Wolpert, 2009). We examined hand and gaze behavior during both initial learning and subsequent steady-state performance. For comparison, we also examined a rigid condition in which the cursor moved as if rigidly attached to the hand. We hypothesized that, during learning, the mass-spring dynamics in the spring condition would affect gaze behavior because the location of the controlled object (the cursor) cannot be easily estimated from arm movement related signals. More specifically, we reasoned that, in comparison to the rigid condition, gaze would be more equally shared between the cursor and the target. We also expected that the need to monitor cursor position with gaze would decrease as hand tracking improved during learning. 
When manipulating nonrigid objects, haptic feedback has been shown to be valuable (Danion et al., 2012; although see Hasson, Nasseroleslami, Krakauer, & Sternad, 2012; Huang, Gillespie, & Kuo, 2006). Therefore, we also included a spring haptic condition in which we applied forces to the arm that simulated the mass-spring load acting at the hand. We reasoned that the provision of haptic feedback might improve the sensory estimate of cursor position, thereby releasing gaze from the monitoring of the cursor. 
For each of these three conditions (rigid, spring, and spring haptic), participants performed 40 consecutive trials, allowing us to monitor possible changes in gaze behavior as learning progressed. In contrast to our hypotheses, and despite marked differences in hand tracking performance across cursor–target mappings and across trials, the results showed only modest changes in gaze behavior, such that in all conditions gaze was predominantly directed toward the target. 
Method
Participants
Eighteen self-reported right-handed participants (aged 24.2 ± 6.9 years; 10 women, 8 men) took part in this study. None of the participants had neurological or visual disorders. They were naïve as to the experimental conditions and hypotheses, and had no previous experience of oculomotor testing. All participants gave written informed consent prior to the study. The experimental protocol was approved by the General Research Ethics Board at Queen's University in compliance with the Canadian Tri-Council Policy on Ethical Conduct for Research Involving Humans. Each experimental session lasted about one hour, and participants were compensated $15 for their participation. 
Apparatus
The experimental setup is similar to the one used in our recent study (Danion & Flanagan, 2018), so we report only the key information here. The setup is illustrated in Figure 1. Participants were comfortably seated and performed the tasks with their arm supported by, and secured to, a robotic exoskeleton (Kinarm, BKIN Technologies, Kingston, ON, Canada) that allowed the arm to move in the horizontal plane and could apply torques at the elbow and shoulder joints to simulate loads acting on the hand (Scott, 1999). Visual stimuli (i.e., target and cursor) were projected from above onto an opaque mirror positioned over the arm and appeared in the plane of arm motion. Participants could not see their actual hand or arm. Hand movements were recorded at a sampling rate of 1000 Hz with a resolution of 0.1 mm. 
Figure 1
 
Top view of the experimental setup. Both arms of the participant were inserted into an exoskeleton. An opaque mirror placed above the arms blocked their view. A purple target was projected on the mirror from above and appeared at the height of the hand. The dotted line shows an example target path (not visible to the participant). A red cursor representing the right index fingertip was also displayed and the participant was instructed to move his/her right arm so as to bring the cursor as close as possible to the moving target.
The cursor and the target were displayed as red and purple filled circles, respectively (0.6 cm in diameter). A built-in video-based eye tracker (Eyelink 1000; SR Research Ltd., Ottawa, ON, Canada) recorded eye movements at 500 Hz. Before the experiment, gaze position in the work plane was calibrated by having participants fixate a grid of targets. When looking at the center of the work plane (and the center of target motion), a 1 cm change in gaze position corresponded to a 1.6° change in gaze angle. 
Procedure
Two types of hand-cursor visual mapping were tested. Under the RIGID mapping, the cursor position directly matched the position of the hand in the horizontal plane. No haptic feedback was implemented under the RIGID mapping; the motors of the robotic device were simply turned off. Under the SPRING mapping, the cursor behaved as a mass attached to the hand by means of a spring. We used the following parameters for the simulation: mass = 1 kg, stiffness = 40 N/m, damping = 1.66 N·s/m, resting length = 0 m. These values are about one third of the values used in previous studies investigating the manipulation of nonrigid objects (Danion et al., 2012; Dingwell et al., 2002; Dingwell, Mah, & Mussa-Ivaldi, 2004; Landelle et al., 2016; Nagengast et al., 2009), but a similar parameter setting was used in our recent study (Danion, Mathew, & Flanagan, 2017). The rationale for decreasing object inertia was to prevent possible fatigue effects while keeping the 1 Hz resonance frequency used in other studies; the resonance frequency (F) of a mass-spring system depends on its mass (m) and its stiffness (k) such that 
\begin{equation}\tag{1}F = {1 \over {2\pi }}\sqrt {{k \over m}}\end{equation}
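With these parameters, Equation 1 gives F = (1/2π)√(40/1) ≈ 1.0 Hz. For illustration, the following Python sketch simulates the visual behavior of the SPRING cursor; it is not the authors' implementation, and it assumes simple Euler integration, damping acting on the mass velocity, and a hypothetical circular hand path.

    import numpy as np

    # Mass-spring cursor simulation (illustrative sketch, not the authors' code).
    # Parameters from the text: m = 1 kg, k = 40 N/m, b = 1.66 N*s/m, rest length 0.
    m, k, b = 1.0, 40.0, 1.66
    F = np.sqrt(k / m) / (2 * np.pi)                   # Equation 1: ~1.0 Hz

    dt = 0.001                                         # 1 kHz, as for the hand recordings
    t = np.arange(0.0, 10.0, dt)                       # one 10 s trial
    hand = 0.05 * np.stack([np.cos(2 * np.pi * 0.2 * t),
                            np.sin(2 * np.pi * 0.2 * t)], axis=1)  # hypothetical path (m)

    cursor = np.zeros_like(hand)                       # simulated mass position
    cursor[0] = hand[0]
    vel = np.zeros(2)
    for i in range(1, len(t)):
        # Spring pulls the mass toward the hand; damping opposes the mass velocity.
        acc = (k * (hand[i - 1] - cursor[i - 1]) - b * vel) / m
        vel = vel + acc * dt                           # Euler integration
        cursor[i] = cursor[i - 1] + vel * dt

    print(f"resonance frequency = {F:.2f} Hz")         # prints ~1.01 Hz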
 
Depending on the experimental condition, haptic feedback was either provided (SPRING-HAPT) or not (SPRING). When haptic feedback was provided, the same parameters were used to simulate the physical and the visual behavior of the cursor. In the absence of haptic feedback, the motors of the robotic device were turned off. Overall, our experimental design included three conditions: RIGID, SPRING, and SPRING-HAPT. 
For each experimental condition, participants were instructed to track a target with the cursor as accurately as possible. There was no explicit requirement regarding gaze behavior. The motion of the target resulted from a combination of sinusoids: two along the x axis (one fundamental and a second or third harmonic) and two along the y axis (same procedure; see Figure 1 for axes). We used the following equations to construct target motion 
\begin{equation}{x_t} = {A_{1x}}\cos \omega t + {A_{2x}}\cos \left( {{h_x}\omega t - {\varphi _x}} \right)\end{equation}
 
\begin{equation}{y_t} = {A_{1y}}\sin \omega t + {A_{2y}}\sin \left( {{h_y}\omega t - {\varphi _y}} \right)\end{equation}
 
A similar technique has been used elsewhere to generate pseudo-random two-dimensional patterns while preserving smooth changes in velocity and direction (Mrotek & Soechting, 2007; Soechting, Rao, & Juveli, 2010). A total of five different patterns were used throughout the experiment (see one example in Figure 1). All trajectories had a period of 5 s (fundamental = 0.2 Hz). The parameters (gain, frequency, phase, and harmonics) used to generate all our patterns can be found in our previous study (Danion & Flanagan, 2018). They were selected so as to maintain the same path length over one cycle (78 cm). Given that each trial was 10 s long (i.e., two cycles), the total distance covered by the target was 156 cm, resulting in a mean tangential target velocity of 15.6 cm/s. 
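To make the trajectory construction concrete, the sketch below generates one pattern from the two equations above. The amplitudes, harmonics, and phases shown are placeholders of our own choosing; the actual values, tuned to give a 78 cm path per 5 s cycle, are tabulated in Danion & Flanagan (2018).

    import numpy as np

    def target_path(t, A1x=5.0, A2x=2.5, A1y=5.0, A2y=2.5,
                    hx=2, hy=3, phix=0.0, phiy=0.0, f0=0.2):
        """Sum-of-sinusoids target motion (placeholder parameters, in cm)."""
        w = 2 * np.pi * f0                    # fundamental = 0.2 Hz (5 s period)
        x = A1x * np.cos(w * t) + A2x * np.cos(hx * w * t - phix)
        y = A1y * np.sin(w * t) + A2y * np.sin(hy * w * t - phiy)
        return x, y

    t = np.arange(0.0, 10.0, 0.001)           # one 10 s trial = two cycles
    x, y = target_path(t)
    path = np.sum(np.hypot(np.diff(x), np.diff(y)))
    print(f"path length over the trial: {path:.0f} cm")  # the study's parameters yield 156 cm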
Before the experimental session each participant completed a familiarization session with five trials under the RIGID mapping. Each participant then completed one block of 40 trials in each experimental condition. The order of the three blocks was randomized across participants. Overall, a total of 120 experimental trials were collected per participant. The overall duration of the experiment averaged 60 min. 
Data analysis
To assess the participants' ability to perform our hand-tracking task, the following dependent variables were extracted from each trial. For all trials we computed the mean Euclidean distance between cursor position and target position. The temporal relationship between cursor and target movement, and between eye and target movement, was estimated by means of cross-correlations that took the vertical and horizontal axes into account simultaneously. To cross-correlate horizontal (x) and vertical (y) position signals between effectors simultaneously, we interleaved the x and y signals and always time shifted these interleaved signals by a multiple of two samples (Danion & Flanagan, 2018; Flanagan, Terao, & Johansson, 2008). A positive value indicates that the cursor was lagging behind the target. 
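A minimal sketch of this interleaving procedure is shown below. The function name and the use of an unnormalized dot product over the overlapping samples are our own simplifications; with x and y interleaved, a lag of one time sample corresponds to a shift of two array elements, hence the factor of two.

    import numpy as np

    def lag_2d(sig_a, sig_b, fs, max_lag_s=0.5):
        """Temporal lag of sig_b relative to sig_a, in seconds.
        sig_a, sig_b: (n, 2) arrays of x/y positions sampled at fs Hz.
        A positive value means sig_b lags behind sig_a."""
        a = (sig_a - sig_a.mean(axis=0)).reshape(-1)   # interleave: x0, y0, x1, y1, ...
        b = (sig_b - sig_b.mean(axis=0)).reshape(-1)
        best_r, best_lag = -np.inf, 0
        for lag in range(-int(max_lag_s * fs), int(max_lag_s * fs) + 1):
            s = 2 * lag                                # shift by multiples of two samples
            r = np.dot(a[:len(a) - s], b[s:]) if s >= 0 else np.dot(a[-s:], b[:s])
            if r > best_r:
                best_r, best_lag = r, lag
        return best_lag / fs

    # e.g., lag_2d(target_xy, cursor_xy, fs=1000) for the cursor-target lag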
Regarding gaze behavior, we first performed a sequence of analyses to separate periods of smooth pursuit, saccades, and blinks in the raw eye position signals. Identification and removal of blinks (1% of the total trial duration on average) was performed by visual inspection. Eye signals were then low-pass filtered with a fourth-order Butterworth filter using a cutoff frequency of 25 Hz. The resulting signals were differentiated to obtain velocity traces, which were low-pass filtered again with a 25 Hz cutoff to remove the noise introduced by numerical differentiation. These eye velocity signals were differentiated to provide acceleration traces, which we also low-pass filtered at 25 Hz to remove noise. Saccades were identified from eye acceleration and deceleration peaks (>1,500 cm/s2). Based on these computations, periods of pursuit and saccades were extracted. To characterize saccadic activity, we computed the mean saccade rate for each trial (average number of saccades per second). To evaluate the performance of smooth pursuit, we computed its mean tangential velocity as well as its gain (SP gain) by averaging the ratio between instantaneous gaze and target tangential velocities (only samples where target tangential velocity exceeded 10 cm/s were considered; Landelle et al., 2016). Finally, to assess the relative contributions of saccades and smooth pursuit, for each trial we computed the total distance traveled by the eye during saccades and expressed it as a percentage of the total distance traveled by the eye with both saccades and smooth pursuit (Landelle et al., 2016; Orban de Xivry, Bennett, Lefèvre, & Barnes, 2006). 
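The following sketch illustrates the filtering, differentiation, and thresholding pipeline just described, together with the SP gain computation. It is a simplified reconstruction under stated assumptions (SciPy's Butterworth filter, a single threshold on the magnitude of eye acceleration rather than separate acceleration and deceleration peaks), not the authors' code.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def detect_saccades(eye_xy, fs=500, acc_thresh=1500.0):
        """Flag saccadic samples in (n, 2) eye position data (cm)."""
        b, a = butter(4, 25.0 / (fs / 2))              # 4th-order, 25 Hz cutoff
        pos = filtfilt(b, a, eye_xy, axis=0)
        vel = filtfilt(b, a, np.gradient(pos, 1.0 / fs, axis=0), axis=0)
        acc = filtfilt(b, a, np.gradient(vel, 1.0 / fs, axis=0), axis=0)
        return np.hypot(acc[:, 0], acc[:, 1]) > acc_thresh   # True during saccades

    def pursuit_gain(eye_xy, target_xy, saccadic, fs=500, min_speed=10.0):
        """SP gain: mean ratio of gaze to target tangential speed during pursuit,
        keeping only samples where target speed exceeds 10 cm/s."""
        ev = np.gradient(eye_xy, 1.0 / fs, axis=0)
        tv = np.gradient(target_xy, 1.0 / fs, axis=0)
        es = np.hypot(ev[:, 0], ev[:, 1])
        ts = np.hypot(tv[:, 0], tv[:, 1])
        keep = ~saccadic & (ts > min_speed)
        return np.mean(es[keep] / ts[keep])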
To assess how gaze was shared between target and cursor, we used the following procedure (see also Danion & Flanagan, 2018). At each point in time we projected the gaze position onto the axis connecting the target and the cursor and determined the relative position along this axis, with 0 indicating that gaze projected onto the target, 1 indicating that gaze projected onto the cursor, and 0.5 indicating that gaze was equidistant between the two. We refer to this variable as the relative projected gaze position. For all the analyses described above, the first second of each trial was discarded. 
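This projection reduces to a one-line computation; a sketch (with our own function name) is given below.

    import numpy as np

    def relative_gaze_position(gaze, target, cursor):
        """Relative projected gaze position: 0 = on the target, 1 = on the cursor,
        0.5 = equidistant. All inputs are (n, 2) position arrays."""
        axis = cursor - target                           # target-to-cursor axis
        num = np.einsum('ij,ij->i', gaze - target, axis) # gaze projected onto the axis
        den = np.einsum('ij,ij->i', axis, axis)          # squared axis length
        return num / den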
Finally, hand–cursor dynamics were examined by computing the mean distance between the cursor and the hand position, both projected onto the opaque mirror (for a similar approach, see Danion et al., 2012). Because this distance is zero under the RIGID mapping, it is not presented. The dynamics of hand movements were also examined via the hand tangential velocity: specifically, we computed the mean tangential velocity and its fluctuations (SD) over each trial. 
Statistical analysis
Two-way repeated measures ANOVAs were used to assess the effects of TRIAL (first two vs. last two trials) and MAPPING (RIGID, SPRING, SPRING-HAPT). Newman-Keuls corrections were applied to post hoc t tests to correct for multiple comparisons. A conventional 0.05 significance threshold was used for all analyses. 
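For readers who wish to reproduce this style of analysis, a sketch using the Python pingouin package is shown below; the software actually used by the authors is not stated, the file and column names are hypothetical, and pingouin does not implement Newman-Keuls corrections, so the post hoc procedure would differ.

    import pandas as pd
    import pingouin as pg   # assumed stats package, not necessarily the authors' choice

    # Hypothetical long-format table: one row per participant x mapping x trial group,
    # with columns 'participant', 'mapping' (RIGID/SPRING/SPRING-HAPT),
    # 'trial' ('first2' or 'last2'), and the dependent variable 'distance'.
    df = pd.read_csv('cursor_target_distance.csv')

    aov = pg.rm_anova(data=df, dv='distance',
                      within=['mapping', 'trial'], subject='participant')
    print(aov)              # F and p values for MAPPING, TRIAL, and the interaction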
Results
Typical trials
Figure 2 shows typical trials performed by the same participant in each of the three experimental conditions. As can be seen, both cursor and gaze always lagged behind the target; however, this lag was substantially smaller for gaze. It is also apparent that tracking performance (i.e., how well the cursor tracked the target) was better under the RIGID mapping than under the SPRING mappings. In the next sections we analyze these observations in more detail. 
Figure 2
 
Typical trials by the same participant in each of the three experimental conditions. Target, cursor, eye, and hand position signals during early exposure. Although each trial was 10 s long, for clarity only 5 s of signals are displayed. For convenience, we have selected three trials that used the same target trajectory. Saccadic eye movements are depicted by red segments.
Tracking performance
The accuracy with which the cursor tracked the target was greatly influenced by the hand–cursor mapping, with lower performance under both SPRING mappings. Figure 3 shows mean tracking performance as a function of trials and experimental conditions. Regarding the cursor–target distance (Figure 3A), the ANOVA showed a main effect of MAPPING, F(2, 34) = 247.27, p < 0.001; TRIAL, F(1, 17) = 15.79, p < 0.001; and an interaction, F(2, 34) = 15.14, p < 0.001. Post hoc analyses of the MAPPING effect indicated that cursor–target distance was nearly doubled under SPRING and SPRING-HAPT compared to RIGID (4.8 vs. 2.5 cm; p < 0.001); however, there was no significant difference between SPRING and SPRING-HAPT (p = 0.21). Post hoc analysis of the interaction revealed that cursor–target distance decreased across trials under both SPRING mappings (p < 0.001), confirming that prolonged experience benefited cursor tracking. However, no similar improvement was observed under the RIGID mapping (p = 0.65). 
Figure 3
 
Average cursor tracking performance as a function of experimental condition and trial number. A. Euclidean distance between cursor and target. B. Temporal lag between cursor and target (a positive lag indicates that the cursor is lagging behind the target). Error bars represent SEM. For both indexes, note the lower performance under the SPRING mappings.
Rather similar observations were made when examining the temporal lag between cursor and target (see Figure 3B). The ANOVA showed a main effect of MAPPING, F(2, 34) = 138.41, p < 0.001, and an interaction, F(2, 34) = 11.64, p < 0.001, but no TRIAL effect, F(1, 17) = 0.05, p = 0.83. Post hoc analysis of MAPPING revealed that the lag was more than doubled under both SPRING mappings compared to the RIGID mapping (245 vs. 112 ms; p < 0.001). Post hoc analysis of the interaction revealed a significant decrease in lag across trials under SPRING-HAPT (p < 0.001), but not under SPRING and RIGID (p > 0.10). Overall, these analyses support the view that the SPRING mappings were more challenging than the RIGID one, even though tracking improved across trials with the SPRING mappings. 
Hand–cursor dynamics under SPRING and SPRING-HAPT
Figure 4A shows the mean hand tangential velocity as a function of trials for each mapping. As can be seen, hand movements were typically slower under the SPRING conditions. Indeed, the ANOVA showed a main effect of MAPPING, F(2, 34) = 25.52, p < 0.001, with post hoc comparisons indicating lower hand velocities under SPRING and SPRING-HAPT compared to RIGID (p < 0.001). Although the provision of haptic feedback tended to increase hand velocity, the difference between SPRING-HAPT and SPRING did not reach significance (p = 0.09). The ANOVA also showed a main effect of TRIAL, F(1, 17) = 7.39, p < 0.05, and a TRIAL by MAPPING interaction, F(2, 34) = 5.37, p < 0.01, linked to a decrease in hand velocity across trials in the SPRING conditions only. Further analyses of the fluctuations in hand tangential velocity (see Figure 4B) revealed that hand movements were also smoother under the SPRING conditions. Indeed, the ANOVA showed a main effect of MAPPING, F(2, 34) = 16.90, p < 0.001, with post hoc comparisons indicating smaller fluctuations under SPRING and SPRING-HAPT compared to RIGID (p < 0.001). Although haptic feedback tended to increase hand velocity fluctuations, the difference between SPRING-HAPT and SPRING did not reach significance (p = 0.16). The ANOVA also showed a main effect of TRIAL, F(1, 17) = 36.67, p < 0.001, and a TRIAL by MAPPING interaction, F(2, 34) = 7.78, p < 0.01, associated with a decrease in hand velocity fluctuations in the SPRING conditions only. Overall, these analyses demonstrate that participants employed slower and smoother hand movements when moving the cursor in the SPRING conditions. 
Figure 4
 
Average hand tangential velocity and its fluctuations as a function of experimental condition and trial number. Error bars represent SEM.
Figure 5 shows the mean distance between hand and cursor as a function of trials for each SPRING mapping. Averaged across trials and mappings, the mean hand–cursor distance was 2.1 cm. The ANOVA showed a significant difference between SPRING and SPRING-HAPT, F(1, 17) = 72.51, p < 0.001, such that the provision of haptic feedback led to a smaller hand–cursor distance (1.8 vs. 2.3 cm). There was also an effect of TRIAL, F(1, 17) = 46.05, p < 0.001, due to a progressive reduction in hand–cursor distance under both mappings. At the temporal level, as expected, the cursor lagged behind the hand. Again we found a significant difference between SPRING and SPRING-HAPT, F(1, 17) = 89.9, p < 0.001, such that the provision of haptic feedback led to a smaller temporal lag (61 vs. 92 ms). There was also an interaction between TRIAL and MAPPING, F(1, 17) = 19.98, p < 0.001, due to a progressive reduction in hand–cursor lag across trials under SPRING but not under SPRING-HAPT. Overall, although prolonged exposure and haptic feedback allowed participants to keep the cursor closer to the hand, the cursor remained temporally and spatially dissociated from the hand motion. 
Figure 5
 
Average hand–cursor distance as a function of experimental condition and trial number. Error bars represent SEM.
Gaze behavior
Figure 6 presents the time course of the mean group eye–target (panel A) and eye–cursor (panel B) distances as a function of experimental condition. Comparison of these two panels indicates that the eye was usually closer to the target than to the cursor. Indeed, whereas eye–target distance was on the order of 1–2 cm, eye–cursor distance ranged between 2 and 5 cm, a twofold difference. Although the eye–target distance increased under both SPRING mappings compared to RIGID (1.8 vs. 1 cm), a similar increase was observed for the eye–cursor distance (3.5 vs. 2.2 cm), suggesting that the relative position of gaze did not change much across mappings. 
Figure 6
 
Gaze position as a function of experimental condition and trial number. A. Euclidean distance between eye and target. B. Euclidean distance between eye and cursor. Error bars represent SEM. For all mappings, note how gaze is closer to the target than the cursor.
To characterize gaze behavior relative to the cursor and target in more detail, Figure 7 shows the mean group distribution of the relative projected gaze position for each experimental condition. As can be seen, for each of the three mappings the distribution had a single peak that was closer to the target than the cursor. Further analyses comparing early and late trials showed no obvious trend in the location of this peak: 21.8 vs. 22.5%; F(1, 17) = 0.29, p = 0.59. However, a main effect of MAPPING was found, F(2, 34) = 9.18, p < 0.001. Post hoc analysis indicated that the peak was shifted slightly away from the target in SPRING-HAPT compared to the other two mappings (25 vs. 20%; p < 0.01). Overall, the relative projected gaze position was rather invariant across TRIALS and MAPPINGS, with gaze being closer to the target than the cursor. 
Figure 7
 
Distribution of relative projected gaze position in each experimental condition. Mean group distributions are presented by thick lines (with dotted lines indicating ±1 SEM). All distributions had a single peak much closer to the target than the cursor.
Saccadic and smooth pursuit activity
The rate of saccades was approximately two per second and was rather stable across conditions and trials. The ANOVA showed no main effect of MAPPING, F(2, 34) = 0.48, p = 0.62, or TRIAL, F(1, 17) = 2.92, p = 0.11, and no interaction, F(2, 34) = 0.19, p = 0.82. Similar results were obtained for the contribution of saccades to total eye displacement, which averaged 22% (p > 0.17). Smooth pursuit gain and velocity were also similar across mappings, F(2, 34) < 1.75, p > 0.19, reaching, respectively, 0.94 and 15.7 cm/s on average. Overall, saccadic and smooth pursuit activity appeared rather insensitive to our experimental factors. 
Discussion
The main goal of this study was to investigate free gaze behavior under a complex hand–cursor mapping when participants have to track a visual target with a cursor controlled by arm movements. Overall, our experiment yielded the following key findings. As expected, hand tracking accuracy was substantially impaired by the SPRING mapping, resulting in a doubling of cursor–target distance and lag. Despite these substantial differences in cursor tracking accuracy, only minimal changes were found in gaze behavior. Indeed, gaze position relative to the target and cursor positions was similar under the SPRING and RIGID mappings. In both cases, gaze was typically located between the cursor and the target, but closer to the target than the cursor (20% vs. 80%). Analyses of the distribution of the relative position of gaze between the cursor and target showed unimodal distributions, ruling out the possibility that gaze alternated between cursor and target fixation. Furthermore, saccadic and smooth pursuit activity did not change with hand–cursor mapping. Finally, although the provision of haptic feedback influenced hand behavior, it had virtually no impact on gaze behavior. We now discuss these findings and their implications in more detail. 
Cursor tracking and hand behavior are strongly dependent on hand–cursor dynamics
As expected, the accuracy of cursor tracking was substantially impaired by the SPRING mapping, resulting in a doubling of both the distance and the time lag between the cursor and target when compared to the RIGID mapping. Although cursor tracking in the SPRING conditions improved over the course of the experiment, it never reached the level observed in the RIGID condition. These observations are consistent with earlier ones emphasizing the challenge of manipulating nonrigid objects (Danion et al., 2012; Nagengast et al., 2009), even when extended practice is offered (Dingwell et al., 2002; Hasson, Shen, & Sternad, 2012). 
Regarding hand behavior, we observed a progressive reduction in hand–cursor distance under both SPRING mappings. This strategy contrasts with our previous study in which participants had to move a cursor with mass-spring dynamics as fast as possible from one location to another (Danion et al., 2012). Indeed, in that discrete task we found that the best strategy was to let the object move away from the hand, and as participants were given more practice we observed an increase in hand–cursor distance. Our continuous task may be more challenging in the sense that the object has to follow an imposed trajectory, and if the object gets "out of control," cursor tracking becomes poor. Of course, if participants were given days of practice, the best strategy might be to free the object: with perfect control, we would expect participants to generate larger, more rapid hand movements, which would cause the object to move further from the hand. However, over a shorter time scale and in the absence of perfect control, this strategy would likely result in large tracking errors. This reasoning is in line with our observation that hand tangential velocity and its fluctuations were substantially reduced under both SPRING mappings. More generally, this comparison across studies suggests that the way participants handle the properties of nonrigid objects is strongly influenced by the characteristics of the task. 
Gaze behavior is virtually unaffected by hand–cursor dynamics
In contrast to cursor tracking performance, gaze behavior was only modestly influenced by the spring dynamics. We did find that gaze was further from the target under the SPRING mappings than under the RIGID mapping. However, when normalized by the target–cursor distance, gaze position was rather similar across our experimental conditions. In all cases we found that gaze was much closer to the target than the cursor, and this relative position did not change much with learning. Furthermore, the distribution of gaze indicated that the eye was not alternating between periods of target and cursor fixation, even during the early stage of exposure. This is quite different from the gaze behavior observed when participants learn a completely novel and arbitrary mapping between hand actions and cursor motion, where gaze tends to be directed at the cursor in early learning and at the target in late learning (Sailer, Flanagan, & Johansson, 2005). Overall, these observations suggest that participants employed a rather robust gaze strategy that consists of positioning gaze between cursor and target, but closer to the target. Although similar findings were observed when participants performed this tracking task with a joystick and a rotated cursor (Gouirand et al., 2019), the current study demonstrates that this strategy (gaze on target first) holds for more complex mappings and full arm movements. 
As noted above, gaze was not strictly on the target, but rather between the target and the cursor. A first reason is that, because target motion was not fully predictable, gaze necessarily lagged behind the target. Second, this behavior is reminiscent of the center-looking strategy that participants often adopt when tracking multiple objects simultaneously with their eyes (Fehd & Seiffert, 2008). For instance, when participants track three moving targets surrounded by distractors, fixation stays close to the center of the triangle formed by the targets, rather than alternating between individual targets. This behavior is interpreted as a strategy of grouping multiple targets into a single object, which also limits saccades (among individual targets) during which targets cannot be tracked. Participants in our experiment may have employed a similar strategy, allowing them to track both the target and the cursor while limiting saccades. 
Under the SPRING mappings, participants were able to improve their tracking performance across trials without directing their gaze at the cursor. This suggests that peripheral vision was sufficient to monitor the cursor and provide the error signals necessary for updating the novel relationship between hand motor commands and their visual consequences. Work on reaching to static targets has shown that peripheral vision provides precise information about the direction and speed of a cursor controlled by the hand—information that can be used to rapidly update motor commands during the reach (Brenner & Smeets, 2003; de Brouwer, Gallivan, & Flanagan, 2018; Dimitriou, Wolpert, & Franklin, 2013; Franklin & Wolpert, 2008; Knill, Bondada, & Chhabra, 2011; Sarlegna et al., 2003; Saunders & Knill, 2003, 2005). Of course, one can ask why participants did not fixate the cursor and use peripheral vision to track the target. Presumably, participants tend to fixate the target because doing so facilitates the use of extraretinal information (i.e., gaze-related proprioceptive signals and/or efference copies of eye movement commands) in locating the target of their action (Mrotek & Soechting, 2007; Neggers & Bekkering, 2001; Prablanc, Echallier, Komilis, & Jeannerod, 1979; Prablanc, Pélisson, & Goodale, 1986). 
Separate contribution of haptics for eye and hand
Based on our previous study, we reasoned that haptic feedback would provide information relevant to cursor tracking (Danion et al., 2012). This reasoning is supported by the novel evidence that the provision of haptic feedback accelerated learning: despite similar initial performance, target–cursor distance became smaller in the SPRING-HAPT condition than in the SPRING condition. Regarding the influence of haptic feedback on gaze behavior, our previous study, in which participants had to track with their eyes a self-moved target that was transiently occluded (Danion et al., 2017), showed that haptic feedback was useful under a SPRING mapping. In the current study, however, haptic feedback had no effect on eye–target lag, eye–target distance, or relative gaze position. Although these results extend the view that haptic feedback benefits hand movement control, they do not support a systematic contribution of haptic feedback to eye movement control. We conclude that haptic feedback can make distinct contributions depending on the effector, even when these effectors must be coordinated, as during manual tracking of a visual target. 
Conclusions
The main goal of this study was to investigate free gaze behavior when participants learn a complex hand–cursor mapping in order to track a visual target. Overall, our study makes two main contributions. First, our results indicate that maintaining gaze on the target remains a priority, suggesting that peripheral vision is sufficient to learn new cursor dynamics. Second, despite the intricate relationship between eye and hand movements (Crawford, Medendorp, & Marotta, 2004; de Brouwer, Albaghdadi, Flanagan, & Gallivan, 2018; Johansson, Westling, Bäckström, & Flanagan, 2001), we show that haptic feedback can make distinct contributions to each of these effectors. 
Acknowledgments
We would like to thank Martin York, Justin Peterson, and Tayler Jarvis for technical and logistical support. Support for this research was provided by the CNRS (PICS N° 191607), the Natural Sciences and Engineering Research Council of Canada (RGPIN/04837), and the Canadian Institutes of Health Research (82837). JM was supported by the Innovative Training Network Perception and Action in Complex Environments (PACE), which received funding from the European Union's Horizon 2020 research and innovation program under Marie Sklodowska-Curie grant agreement N° 642961. This paper reflects only the authors' views; the Research Executive Agency (REA) of the European Commission is not responsible for any use that may be made of the information it contains. 
FRD and JRF designed the research; FRD performed the research; FRD and JM analyzed the data; FRD and JM prepared the figures; FRD, JM, and JRF interpreted the results and wrote the paper. 
Commercial relationships: none. 
Corresponding author: Frederic R. Danion. 
Address: Aix-Marseille Université, CNRS, Institut de Neurosciences de la Timone, Marseille, France. 
References
Brenner, E., & Smeets, J. B. J. (2003). Perceptual requirements for fast manual responses. Experimental Brain Research, 153 (2), 246–252, https://doi.org/10.1007/s00221-003-1598-y.
Crawford, J. D., Medendorp, W. P., & Marotta, J. J. (2004). Spatial transformations for eye–hand coordination. Journal of Neurophysiology, 92 (1), 10–19, https://doi.org/10.1152/jn.00117.2004.
Danion, F., Diamond, J. S., & Flanagan, J. R. (2012). The role of haptic feedback when manipulating nonrigid objects. Journal of Neurophysiology, 107 (1), 433–441, https://doi.org/10.1152/jn.00738.2011.
Danion, F. R., & Flanagan, J. R. (2018). Different gaze strategies during eye versus hand tracking of a moving target. Scientific Reports, 8 (1): 10059, https://doi.org/10.1038/s41598-018-28434-6.
Danion, F., Mathew, J., & Flanagan, J. R. (2017). Eye tracking of occluded self-moved targets: Role of haptic feedback and hand-target dynamics. eNeuro, 4 (3), 1–12, http://www.eneuro.org/content/early/2017/06/26/ENEURO.0101-17.2017.
de Brouwer, A. J., Albaghdadi, M., Flanagan, J. R., & Gallivan, J. P. (2018). Using gaze behavior to parcellate the explicit and implicit contributions to visuomotor learning. Journal of Neurophysiology, 120 (4), 1602–1615, https://doi.org/10.1152/jn.00113.2018.
de Brouwer, A. J., Gallivan, J. P., & Flanagan, J. R. (2018). Visuomotor feedback gains are modulated by gaze position. Journal of Neurophysiology, 120 (5), 2522–2531, https://doi.org/10.1152/jn.00182.2018.
Dimitriou, M., Wolpert, D. M., & Franklin, D. W. (2013). The temporal evolution of feedback gains rapidly update to task demands. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 33 (26), 10898–10909, https://doi.org/10.1523/JNEUROSCI.5669-12.2013.
Dingwell, J. B., Mah, C. D., & Mussa-Ivaldi, F. A. (2002). Manipulating objects with internal degrees of freedom: Evidence for model-based control. Journal of Neurophysiology, 88 (1), 222–235.
Dingwell, J. B., Mah, C. D., & Mussa-Ivaldi, F. A. (2004). Experimentally confirmed mathematical model for human control of a nonrigid object. Journal of Neurophysiology, 91 (3), 1158–1170, https://doi.org/10.1152/jn.00704.2003.
Fehd, H. M., & Seiffert, A. E. (2008). Eye movements during multiple object tracking: Where do participants look? Cognition, 108 (1), 201–209, https://doi.org/10.1016/j.cognition.2007.11.008.
Flanagan, J. R., Terao, Y., & Johansson, R. S. (2008). Gaze behavior when reaching to remembered targets. Journal of Neurophysiology, 100 (3), 1533–1543, https://doi.org/10.1152/jn.90518.2008.
Foulkes, A. J., & Miall, R. C. (2000). Adaptation to visual feedback delays in a human manual tracking task. Experimental Brain Research, 131 (1), 101–110.
Franklin, D. W., & Wolpert, D. M. (2008). Specificity of reflex adaptation for task-relevant variability. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 28 (52), 14165–14175, https://doi.org/10.1523/JNEUROSCI.4406-08.2008.
Gouirand, N., Mathew, J., Brenner, E., & Danion, F. (2019). Eye movements do not play an important role in the adaptation of hand tracking to a visuomotor rotation. Journal of Neurophysiology, 121 (5), 1967–1976, https://doi.org/10.1152/jn.00814.2018.
Grigorova, V., & Bock, O. (2006). The role of eye movements in visuo-manual adaptation. Experimental Brain Research, 171 (4), 524–529, https://doi.org/10.1007/s00221-005-0301-x.
Hasson, C. J., Nasseroleslami, B., Krakauer, J. W., & Sternad, D. (2012, October). Comparing haptic and visual feedback control of an object with complex dynamics. Paper presented at the Society for Neuroscience 42nd Annual Meeting, New Orleans, LA.
Hasson, C. J., Shen, T., & Sternad, D. (2012). Energy margins in dynamic object manipulation. Journal of Neurophysiology, 108 (5), 1349–1365, https://doi.org/10.1152/jn.00019.2012.
Huang, F. C., Gillespie, R. B., & Kuo, A. D. (2006). Human adaptation to interaction forces in visuo-motor coordination. IEEE Transactions on Neural Systems and Rehabilitation Engineering: A Publication of the IEEE Engineering in Medicine and Biology Society, 14 (3), 390–397, https://doi.org/10.1109/TNSRE.2006.881533.
Johansson, R. S., Westling, G., Bäckström, A., & Flanagan, J. R. (2001). Eye–hand coordination in object manipulation. The Journal of Neuroscience, 21 (17), 6917–6932.
Knill, D. C., Bondada, A., & Chhabra, M. (2011). Flexible, task-dependent use of sensory feedback to control hand movements. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 31 (4), 1219–1237, https://doi.org/10.1523/JNEUROSCI.3522-09.2011.
Koken, P. W., & Erkelens, C. J. (1992). Influences of hand movements on eye movements in tracking tasks in man. Experimental Brain Research, 88 (3), 657–664.
Landelle, C., Montagnini, A., Madelain, L., & Danion, F. (2016). Eye tracking a self-moved target with complex hand-target dynamics. Journal of Neurophysiology, 116 (4), 1859–1870, https://doi.org/10.1152/jn.00007.2016.
Miall, R. C., & Jackson, J. K. (2006). Adaptation to visual feedback delays in manual tracking: Evidence against the Smith predictor model of human visually guided action. Experimental Brain Research, 172 (1), 77–84, https://doi.org/10.1007/s00221-005-0306-5.
Miall, R. C., Reckess, G. Z., & Imamizu, H. (2001). The cerebellum coordinates eye and hand tracking movements. Nature Neuroscience, 4 (6), 638–644, https://doi.org/10.1038/88465.
Miall, R. C., Weir, D. J., & Stein, J. F. (1993). Intermittency in human manual tracking tasks. Journal of Motor Behavior, 25 (1), 53–63.
Mrotek, L. A., & Soechting, J. F. (2007). Target interception: Hand-eye coordination and strategies. The Journal of Neuroscience, 27 (27), 7297–7309, https://doi.org/10.1523/JNEUROSCI.2046-07.2007.
Nagengast, A. J., Braun, D. A., & Wolpert, D. M. (2009). Optimal control predicts human performance on objects with internal degrees of freedom. PLoS Computational Biology, 5 (6), e1000419, https://doi.org/10.1371/journal.pcbi.1000419.
Neggers, S. F., & Bekkering, H. (2001). Gaze anchoring to a pointing target is present during the entire pointing movement and is driven by a non-visual signal. Journal of Neurophysiology, 86 (2), 961–970.
Niehorster, D. C., Siu, W. W. F., & Li, L. (2015). Manual tracking enhances smooth pursuit eye movements. Journal of Vision, 15 (15): 11, 1–14, https://doi.org/10.1167/15.15.11.
Orban de Xivry, J.-J., Bennett, S. J., Lefèvre, P., & Barnes, G. R. (2006). Evidence for synergy between saccades and smooth pursuit during transient target disappearance. Journal of Neurophysiology, 95, 418–427.
Poulton, E. (1974). Tracking skill and manual control. New York, NY: Academic Press.
Prablanc, C., Echallier, J. F., Komilis, E., & Jeannerod, M. (1979). Optimal response of eye and hand motor systems in pointing at a visual target. I. Spatio-temporal characteristics of eye and hand movements and their relationships when varying the amount of visual information. Biological Cybernetics, 35 (2), 113–124.
Prablanc, C., Pélisson, D., & Goodale, M. A. (1986). Visual control of reaching movements without vision of the limb. I. Role of retinal feedback of target position in guiding the hand. Experimental Brain Research, 62 (2), 293–302.
Sailer, U., Flanagan, J. R., & Johansson, R. S. (2005). Eye-hand coordination during learning of a novel visuomotor task. Journal of Neuroscience, 25, 8833–8842.
Sarlegna, F., Blouin, J., Bresciani, J.-P., Bourdin, C., Vercher, J.-L., & Gauthier, G. M. (2003). Target and hand position information in the online control of goal-directed arm movements. Experimental Brain Research. Experimentelle Hirnforschung. Expérimentation Cérébrale, 151 (4), 524–535, https://doi.org/10.1007/s00221-003-1504-7.
Saunders, J. A., & Knill, D. C. (2003). Humans use continuous visual feedback from the hand to control fast reaching movements. Experimental Brain Research, 152 (3), 341–352, https://doi.org/10.1007/s00221-003-1525-2.
Saunders, J. A., & Knill, D. C. (2005). Humans use continuous visual feedback from the hand to control both the direction and distance of pointing movements. Experimental Brain Research, 162 (4), 458–473, https://doi.org/10.1007/s00221-004-2064-1.
Scott, S. H. (1999). Apparatus for measuring and perturbing shoulder and elbow joint positions and torques during reaching. Journal of Neuroscience Methods, 89 (2), 119–127.
Soechting, J. F., Rao, H. M., & Juveli, J. Z. (2010). Incorporating prediction in models for two-dimensional smooth pursuit. PLoS One, 5 (9), e12574, https://doi.org/10.1371/journal.pone.0012574.
Streng, M. L., Popa, L. S., & Ebner, T. J. (2018). Modulation of sensory prediction error in Purkinje cells during visual feedback manipulations. Nature Communications, 9 (1): 1099, https://doi.org/10.1038/s41467-018-03541-0.
Tramper, J. J., & Gielen, C. C. (2011). Visuomotor coordination is different for different directions in three-dimensional space. The Journal of Neuroscience, 31 (21), 7857–7866, https://doi.org/10.1523/JNEUROSCI.0486-11.2011.
Vercher, J. L., & Gauthier, G. M. (1992). Oculo-manual coordination control: Ocular and manual tracking of visual targets with delayed visual feedback of the hand motion. Experimental Brain Research, 90 (3), 599–609.
Vercher, J. L., Quaccia, D., & Gauthier, G. M. (1995). Oculo-manual coordination control: Respective role of visual and non-visual information in ocular tracking of self-moved targets. Experimental Brain Research, 103 (2), 311–322.
Xia, R., & Barnes, G. (1999). Oculomanual coordination in tracking of pseudorandom target motion stimuli. Journal of Motor Behavior, 31 (1), 21–38.