Open Access
Article  |   April 2019
Ocular tracking of occluded ballistic trajectories: Effects of visual context and of target law of motion
Author Affiliations
  • Sergio Delle Monache
    Department of Systems Medicine, Neuroscience Section, University of Rome Tor Vergata, Rome, Italy
    Center of Space Biomedicine, University of Rome Tor Vergata, Rome, Italy
    Laboratory of Neuromotor Physiology, Santa Lucia Foundation, Rome, Italy
    sergio.dellemonache@gmail.com
  • Francesco Lacquaniti
    Department of Systems Medicine, Neuroscience Section, University of Rome Tor Vergata, Rome, Italy
    Center of Space Biomedicine, University of Rome Tor Vergata, Rome, Italy
    Laboratory of Neuromotor Physiology, Santa Lucia Foundation, Rome, Italy
  • Gianfranco Bosco
    Department of Systems Medicine, Neuroscience Section, University of Rome Tor Vergata, Rome, Italy
    Center of Space Biomedicine, University of Rome Tor Vergata, Rome, Italy
    Laboratory of Neuromotor Physiology, Santa Lucia Foundation, Rome, Italy
Journal of Vision April 2019, Vol.19, 13. doi:https://doi.org/10.1167/19.4.13
Abstract

In tracking a moving target, the visual context may provide cues for an observer to interpret the causal nature of the target motion and extract features to which the visual system is weakly sensitive, such as target acceleration. This information could be critical when vision of the target is temporarily impeded, requiring visual motion extrapolation processes. Here we investigated how visual context influences ocular tracking of motion either congruent or not with natural gravity. To this end, 28 subjects tracked computer-simulated ballistic trajectories either perturbed in the descending segment with altered gravity effects (0g/2g) or retaining natural-like motion (1g). Shortly after the perturbation (550 ms), targets disappeared for either 450 or 650 ms and became visible again until landing. Target motion occurred with either quasi-realistic pictorial cues or a uniform background, presented in counterbalanced order. We analyzed saccadic and pursuit movements after 0g and 2g target-motion perturbations and for corresponding intervals of unperturbed 1g trajectories, as well as after corresponding occlusions. Moreover, we considered the eye-to-target distance at target reappearance. Tracking parameters differed significantly between scenarios: With a neutral background, eye movements did not depend consistently on target motion, whereas with pictorial background they showed significant dependence, denoting better tracking of accelerated targets. These results suggest that oculomotor control is tuned to realistic properties of the visual scene.

Introduction
Motor control and perceptual processes rely critically on acquisition of detailed visual information. Because of the limited extent of the visual field covered by the fovea, in performing actions the image of objects of interest is foveated by accurate gaze movements to maintain clear vision even in the face of object motion or self-motion (Abrams, Meyer, & Kornblum, 1990; Carnahan & Marteniuk, 1991; Land, Mennie, & Rusted, 1999; Helsen, Elliott, Starkes, & Ricker, 2000; Neggers & Bekkering, 2000, 2001; Binsted, Chua, Helsen, & Elliott, 2001; Johansson, Westling, Bäckström, & Flanagan, 2001; Pelz, Hayhoe, & Loeber, 2001; Bowman, Johansson, & Flanagan, 2009; Gielen, Dijkstra, Roozen, & Welten, 2009; López-Moliner & Brenner, 2016; Li, Wang, & Cui, 2018). Thus, when catching an object on the fly, ocular tracking of object motion is instrumental for an observer to keep the projection of the object image on the fovea in preparation for the catching action (Land & McLeod, 2000; McLeod, Reed, & Dienes, 2006; McLeod, Reed, Gilson, & Glennerster, 2008; Brenner & Smeets, 2009, 2011; Dessing, Oostwoud Wijdenes, Peper, & Beek, 2009; Bennett, Orban de Xivry, Lefèvre, & Barnes, 2010; Spering, Schütz, Braun, & Gegenfurtner, 2011; Hardiess, Hansmann-Roth, & Mallot, 2013; Cesqui, Mezzetti, Lacquaniti, & d'Avella, 2015; Fooken, Yeo, Pai, & Spering, 2016). Ocular tracking often combines two types of eye movements: smooth-pursuit movements, which maintain the target image on the fovea by driving the eyes at velocities proportional to those of the moving targets, and saccades that compensate for retinal slips of the target image due to the limitations of the smooth-pursuit system with high-speed and unpredictable motion (for a review, see Orban de Xivry & Lefèvre, 2007). 
Both saccadic and pursuit movements rely heavily on predictive mechanisms to compensate for sensorimotor delays as well as ambiguous or lacking visual information. For example, in the absence of visual feedback, smooth-pursuit movements continue to be driven by intentional signals and by the expectancy of the target reappearance, although at lower velocity relative to that of the occluded target (Mitrani & Dimitrov, 1978; Becker & Fuchs, 1985; Morris & Lisberger, 1987; Pola & Wyatt, 1997; Bennett & Barnes, 2003; Madelain & Krauzlis, 2003; Barnes & Collins, 2008). Saccadic movements, in turn, place the eyes ahead of the extrapolated position of the occluded object, compensating for the lower pursuit velocities (Bennett & Barnes, 2006b). 
Predictive estimates of the target motion are based largely on signals related to the target kinematics prior to its disappearance (Kowler, Martins, & Pavels, 1984; de Brouwer, Missal, & Lefèvre, 2001; Bennett & Barnes 2004, 2005, 2006a; Collins & Barnes, 2006; Orban de Xivry, Bennett, Lefèvre, & Barnes, 2006; Bennett, Orban de Xivry, Barnes, & Lefèvre, 2007; Mrotek & Soechting, 2007; Orban de Xivry, Missal, & Lefèvre, 2008; Brostek, Eggert, & Glasauer, 2017). 
In addition to visual information acquired before the target occlusion, cognitive factors and long-term memory information of past experiences are also known to contribute to the predictive control of eye movements (Makin, Poliakoff, Chen, & Stewart, 2008; Makin, Poliakoff, & El-Deredy, 2009; Bennett et al., 2010; Kattoulas et al., 2011; Santos & Kowler, 2017). For example, acquired expertise with particular visual contexts, such as those associated with sport games, may also exert a strong influence on oculomotor behavior in response to visual occlusions (Crespi, Robino, Silva, & de'Sperati, 2012; Sarpeshkar, Abernethy, & Mann, 2017). Furthermore, studies using virtual-reality simulations of racquetball and manipulations of the naturalness of target motion have suggested that the oculomotor plan includes long-term memory about the dynamic and natural properties of the moving object (Diaz, Cooper, & Hayhoe, 2013; Diaz, Cooper, Rothkopf, & Hayhoe, 2013; Souto & Kerzel, 2013). 
In this respect, gravity represents a major invariant property of the natural environment, imposing on object motion a quasi-constant downward acceleration of 9.81 m/s2. Strong evidence supporting the idea that implicit knowledge of gravity contributes to predictive processes has come mostly from studies that have reported interceptive and perceptual responses compatible with an expectation of gravity effects on object motion (McIntyre, Zago, Berthoz, & Lacquaniti, 2001; Zago et al., 2004, 2005; Indovina et al., 2005; Senot, Zago, Lacquaniti, & McIntyre, 2005; Zago, Iosa, Maffei, & Lacquaniti, 2010; Moscatelli & Lacquaniti, 2011; Bosco, Delle Monache, & Lacquaniti, 2012; Senot et al., 2012; La Scaleia, Lacquaniti, & Zago, 2014; La Scaleia, Zago, & Lacquaniti, 2015; Russo et al., 2017). The putative neural substrates of the internal representation of gravity have been identified in a complex of multimodal brain structures belonging to the vestibular network, and located mainly in the perisylvian region, thalamus, cerebellum, and vestibular nuclei (Indovina et al., 2005; Bosco, Carrozzo, & Lacquaniti, 2008; Miller et al., 2008; Maffei, Macaluso, Indovina, Orban, & Lacquaniti, 2010; Indovina et al., 2013; Indovina et al., 2015; Maffei et al., 2015; Delle Monache, Lacquaniti, & Bosco, 2017). This internal representation of gravity stored in the vestibular network appears to be rather abstract in nature, since it might be evoked not only by interactions with real objects subjected to the gravity force but also by pictorial scenes displayed on a computer screen, where the effects of gravity on the object motion are simulated by scaling the object kinematics to the overall visual context. 
Compatible with this idea, functional MRI experiments that manipulated parametrically both the effects of gravity on the target motion and the visual context have shown that stationary pictorial elements, enhancing the overall sense of realism of the visual scene and providing a metric scale, facilitate specifically the interception of motion congruent with gravity effects, by engaging activity in the vestibular nuclei and in the posterior cerebellar vermis (Miller et al., 2008). 
Despite the amount of data supporting the view that a priori information about gravity is integrated in the predictive control of interceptive movements to overcome sensory ambiguity or limitations in the central processing of sensory information (Zago, McIntyre, Senot, & Lacquaniti, 2008, 2009; Lacquaniti et al., 2014, 2015; Bosco et al., 2015), direct evidence for a role of an internal model of physics on oculomotor control is mostly confined to the literature on the vestibulo-ocular reflex (André-Deshays et al., 1993; Merfeld, Zupan, & Peterka, 1999; Angelaki, Shaikh, Green, & Dickman, 2004; Clarke & Haslwanter, 2007; Clarke, 2008; Nooij, Bos, & Groen, 2008; Green & Angelaki, 2010a, 2010b). 
Within this framework, an earlier study of ours examined the spontaneous oculomotor behavior of subjects who manually intercepted trajectories perturbed with altered gravity effects, and suggested that anticipation of gravity effects may be common to both interceptive and oculomotor control (Delle Monache et al., 2015). 
Following this evidence, in the present study we specifically tested the idea that presupposed knowledge of gravity might contribute to the predictive control of eye-tracking movements depending on the naturalness of the visual context. In particular, based on the results of Miller et al. (2008), we hypothesized that target-motion predictions would weigh more internalized gravity information when target motion is tracked in a visual scene containing elements of naturalness compared to when the same motion is embedded in a neutral background. 
For this purpose, we asked healthy human subjects to continuously track computer-simulated ballistic trajectories, which could be perturbed with the effects of altered gravity (either by removing or by doubling the gravity acceleration) and occluded for variable time intervals in order to require visual motion extrapolation. The same target trajectories were presented either on a structured visual scenario containing quasi-realistic pictorial elements or on a uniform dark-gray background. 
We anticipate that stronger weighting of internalized gravity information with the quasi-realistic pictorial background might be reflected by higher tracking accuracy of accelerated motion congruent with the effects of natural gravity compared to constant-velocity motion or motion accelerated at twice natural gravity. In addition, a stronger expectation of accelerated motion with a falling target at constant velocity might induce significantly longer time leads of the eye relative to the target motion during the descending limb of the ballistic trajectory. Conversely, stronger reliance on visual motion feedback signals than internalized gravity information with the neutral background might result in similar ocular-tracking performance across types of target motion, or even in better tracking of the more predictable constant-velocity targets. 
Experimental findings were generally compatible with this hypothetical framework, since ocular tracking with the neutral background did not depend consistently on the gravity level imposed on the target motion, whereas eye movements with the pictorial background showed stronger dependence on the target kinematics, denoting higher tracking accuracy of accelerated motion. 
Methods
Twenty-eight healthy subjects (16 women, 12 men; mean age ± SD: 21.64 years ± 2.42) with either normal or corrected-to-normal vision gave informed written consent to participate in the experiments. Experimental procedures, approved by the ethics committee of the University of Rome Tor Vergata (Protocol 140.12), were performed in agreement with the Declaration of Helsinki. Subjects sat in front of a 22-in. LCD screen (ViewSonic VX2268WM) with their head stabilized by a chin rest. Visual scenarios were created using the graphics software package Presentation (Version 14.9; Neurobehavioral Systems, Berkeley, CA) and were projected on the LCD screen at 100 Hz with a spatial resolution of 1,680 × 1,050 pixels, spanning 42.86° × 26.79° visual angle. 
Visual scenes and target-motion trajectories
The moving target was a white ball (7 pixels diameter, 0.18° visual angle) which followed a ballistic trajectory along the fronto-parallel plane from the bottom left corner of the screen to the bottom right corner. In separate blocks of trials, ball trajectories were presented on either a pictorial or a neutral scene. The pictorial scene reproduced a fly-ball play of a baseball game. The ball was batted by the hitter located at the bottom left corner of the scene. Stationary graphic elements, such as the perimeter of the baseball field, the players, and the overall landscape, provided perspective view and metric cues (Figure 1A; for further details, see Bosco et al., 2012; Delle Monache et al., 2015, 2017). 
Figure 1
 
Visual scenes for the ocular-tracking task. (A) Pictorial scenario. The scene reproduced a fly-ball play of a baseball game and spanned 42.86° × 26.79° visual angle. Ball motion (white circle, enlarged slightly for illustration purposes) started from the batter at the bottom left of the scene and landed on the right half of the scene following a parabolic path (magenta dotted trace, shown here for illustrative purposes but never appearing on-screen). Stationary graphic elements, such as the baseball field's perimeter, the players, and the landscape, provided perspective view and metric cues. (B) Neutral scenario. The same ball motion as presented in the pictorial scene was projected over a uniform gray background, with average luminance matched to the pictorial scenario (27 cd/m2). The oriented orange rectangle, located at the same pixel coordinates as the batter in the pictorial scene, symbolized a ball launcher.
In the neutral scene, the same ball trajectories were displayed on a uniform dark-gray background; the only stationary pictorial element, located at the same screen coordinates as the hitter in the pictorial condition, was a tilted orange rectangle taking the place of a ball launcher (Figure 1B). 
Projectile trajectories had a fixed launch angle of 76.5° from the horizontal (see Figure 2). The ascending segment, modeled according to the equations originally described by Brancazio (1985), took into account Earth's gravity and air-drag effects, scaled to the metrics of the pictorial scenario. The descending segment either retained the same level of gravity (unperturbed 1g trajectories) or was perturbed with simulated microgravity (0g) and hypergravity (2g) effects. Acceleration perturbations occurred either 1,750 or 1,500 ms prior to the ball landing (Figures 2A and 2B illustrate the resulting trajectories for the 1,750- and 1,500-ms perturbation intervals, respectively). 
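The perturbation scheme can be illustrated with a minimal simulation. This is a sketch only: unlike the actual stimuli, it omits the air-drag term of Brancazio's model and uses generic units, and `simulate_trajectory` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def simulate_trajectory(v0, launch_deg=76.5, g=9.81, g_factor=1.0,
                        t_perturb=None, dt=0.01, t_max=4.0):
    """2-D ballistic trajectory; after t_perturb the vertical acceleration
    switches from -g to -g * g_factor (0, 1, or 2 for the 0g/1g/2g
    conditions). Air drag is omitted for simplicity."""
    theta = np.radians(launch_deg)
    vx, vy = v0 * np.cos(theta), v0 * np.sin(theta)
    xs, ys = [0.0], [0.0]
    t = 0.0
    while t < t_max and ys[-1] >= 0.0:
        factor = g_factor if (t_perturb is not None and t >= t_perturb) else 1.0
        vy -= g * factor * dt          # perturbed gravity on the descent
        xs.append(xs[-1] + vx * dt)
        ys.append(ys[-1] + vy * dt)
        t += dt
    return np.array(xs), np.array(ys)
```

Perturbing with `g_factor=2.0` shortens the descent, while `g_factor=0.0` leaves the ball in constant-velocity motion from the perturbation onset.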
Figure 2
 
Ball ballistic trajectories. The ascending segment was modeled by accounting for Earth gravity and air-drag effects, scaled to the metrics of the pictorial scene. The descending segment either retained the same level of gravity (unperturbed 1g trajectories, red traces) or was perturbed with simulated micro- (0g, blue traces) or hypergravity (2g, green traces) effects. Crosses and filled circles indicate perturbation and visual-occlusion onsets, respectively. Crosses indicating the perturbation onsets are illustrated also on unperturbed 1g trajectories, since they were used as temporal markers for the eye-movement analyses and to define the onsets of the visual occlusions. Visual occlusions began 550 ms after the temporal markers of the perturbation and lasted either 450 or 650 ms (the open circles and squares mark the end of the 450- and 650-ms intervals, respectively). (A) Trajectories perturbed 1,750 ms before landing. (B) Trajectories perturbed 1,500 ms before landing.
Ball trajectories were then occluded for either 450 or 650 ms, beginning 550 ms after the temporal markers of the trajectory perturbations. The temporal markers of the onsets of the acceleration perturbations in 0g and 2g trajectories were applied also to unperturbed 1g trajectories as reference points to define the visual-occlusion intervals and to delimit the time windows for the eye-movement analyses. 
The number of experimental conditions was increased further by imposing two possible initial velocities for the target motion: 20.4°/s or 21.2°/s of visual angle. Finally, at random intervals of 800–1,200 ms during the visible portions of the trajectories, the ball color switched to orange for 200 ms. 
The order of the two scenarios was counterbalanced among subjects. One group of 14 subjects (G1) tracked the target trajectories first in the pictorial and then the neutral condition, and the sequence was inverted for the remaining 14 subjects (G2). 
Each block of trials with a given scenario consisted of eight repetitions of 24 conditions (2 initial velocities × 2 occlusion intervals × 2 perturbation intervals × 3 ball accelerations). The resulting 192 trials were distributed pseudorandomly with respect to the experimental conditions and delivered in two subblocks of 96 trials each. Subjects rested for 5 min between subblocks. The overall duration of the experiment was about 90 min. 
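The factorial design and trial bookkeeping above can be reproduced in a few lines (a sketch under our own variable names, not the authors' stimulus-delivery code):

```python
from itertools import product
import random

velocities = [20.4, 21.2]        # initial target speed, deg/s
occlusions = [450, 650]          # occlusion duration, ms
perturbs   = [1750, 1500]        # perturbation onset before landing, ms
gravities  = ["0g", "1g", "2g"]  # descending-limb acceleration

# 2 x 2 x 2 x 3 = 24 conditions, 8 repetitions each = 192 trials
conditions = list(product(velocities, occlusions, perturbs, gravities))
trials = conditions * 8
random.shuffle(trials)                    # pseudorandom ordering
subblocks = [trials[:96], trials[96:]]    # two subblocks of 96 trials
```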
Behavioral task
Subjects were instructed to maintain their head fixed on the chin rest and continuously track the moving ball, even during its transient disappearance (ocular-tracking task). Binocular movements were recorded with an EyeLink 1000 tracker system at a sampling frequency of 500 Hz (SR Research, Ontario, Canada). Eye-tracker signals were calibrated every 32 trials with a 9-point calibration grid, and drift corrections were applied every eight trials. To reduce the occurrence of blink artifacts in eye-movement recordings, subjects were also advised to refrain from blinking throughout the trial. 
While tracking the ball, subjects were asked to press the left button of a computer gaming mouse (Razer Copperhead; Razer, San Francisco, CA) in response to the sudden color change of the ball, which occurred at unpredictable times along the visible portion of its trajectory (reaction-time task). The reaction-time task enforced subjects' attention on the moving target, thereby improving tracking performance (Shagass, Roemer, & Amadeo, 1976). Button-press responses were recorded at a sampling frequency of 1 kHz through a Power1401 acquisition board (Cambridge Electronic Design, Cambridge, UK). Synchronization between the PC running the Presentation software and the EyeLink system was handled by custom-made software written for the Spike2 real-time module (Cambridge Electronic Design). 
Analysis of eye movements
Data collected from the eye tracker were preliminarily screened for poorly calibrated signals and recording artifacts by using custom-made MATLAB (MathWorks, Natick, MA) scripts. The screening analysis detected 1,451/10,752 trials (13.5%) in which data from only one eye were affected by poorly calibrated signals. For these trials, we retained monocular data from the unaffected eye trace. We discarded 178/10,752 trials (1.6%) because both eye traces were unreliable. For the remaining 9,123/10,752 trials with clean binocular signals (i.e., 84.9% of all trials), we obtained cyclopean eye-position time series by averaging the two eye traces bin by bin (2-ms bin width). Eye-position time series (either monocular at source or computed cyclopean) were numerically differentiated and filtered with a zero-lag second-order low-pass Butterworth filter (cutoff frequency = 40 Hz). 
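The differentiation-and-filtering step can be sketched with SciPy; `filtfilt` applies the Butterworth filter forwards and backwards, giving the zero-lag (zero-phase) response described in the text (`eye_velocity` is our illustrative name, not the authors' function):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500.0  # eye-tracker sampling rate, Hz

def eye_velocity(pos_deg, fs=FS, cutoff=40.0, order=2):
    """Differentiate an eye-position trace (deg) and smooth the result with
    a zero-lag 2nd-order low-pass Butterworth filter (40-Hz cutoff)."""
    vel = np.gradient(pos_deg, 1.0 / fs)        # numerical derivative, deg/s
    b, a = butter(order, cutoff / (fs / 2.0))   # cutoff normalized to Nyquist
    return filtfilt(b, a, vel)                  # forward-backward pass
```

The cyclopean trace would simply be the bin-by-bin mean of the two calibrated eye traces, e.g. `0.5 * (left + right)`, computed before differentiation.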
In order to evaluate the ocular-tracking behavior in response to the acceleration perturbations of the ball trajectories and the absence of visual feedback, we defined two temporal windows (Figure 3). 
Figure 3
 
Perturbation and occlusion time windows used for eye-movement analysis. Horizontal and vertical eye-position traces (blue) recorded from one subject during tracking of one 0g trajectory (red traces) are illustrated in the top and bottom panels, respectively. Vertical solid, dashed, and dash-dotted lines indicate the ball-trajectory perturbation, occlusion, and reappearance events, respectively. The green transparency delimits the perturbation interval, from 100 ms after the perturbation onset until the ball disappearance. The red transparency corresponds to the occlusion interval, lasting 350 ms from the time 100 ms after the ball disappearance.
The first window, named perturbation—starting 100 ms after the temporal markers of the trajectory perturbations and lasting for the successive 450 ms of visible motion—was used to examine ocular tracking of visible motion either congruent or not with gravity effects. The second window, named occlusion—beginning 100 ms after the ball disappearance and lasting for the successive 350 ms of occluded motion—was used to evaluate tracking behavior in the absence of visual feedback. 
The first 100 ms of data recorded after the onsets of the trajectory perturbations and occlusions were not included in the perturbation or occlusion windows, to account for oculomotor response delays to the sudden perturbations and occlusions of the ball motion. We quantified ocular-tracking behavior during the perturbation and occlusion windows by analyzing separately saccadic movements and bouts of smooth pursuit. 
Saccadic movements
Saccadic movements were detected by using combined eye-velocity (>30°/s) and eye-acceleration (>2,000°/s2) thresholds (Bennett & Barnes, 2003). Saccades that were followed by post-saccadic eye-to-target distances greater than 4° visual angle were discarded because they likely diverted gaze from the target trajectory, and therefore they could not be considered related to the ocular-tracking task. 
To evaluate the saccadic behavior and quantify potential changes across experimental conditions, we used the post-saccadic error—defined as the Euclidean eye-to-target distance averaged over 10 ms after the saccade—and the saccadic frequency, computed separately for the perturbation and occlusion time windows. The post-saccadic error, in other words, indicated the scalar deviation of the eye from the target trajectory following a saccadic movement. Since saccades, in the context of ocular tracking, may represent corrective movements for potential smooth-pursuit limitations, we may also interpret the post-saccadic error as a measure of the accuracy of these corrections (Bennett & Barnes, 2006b). By the same line of reasoning, the saccadic frequency may relate to the degree of saccadic correction of the smooth pursuit needed in order to maintain the eye on the target trajectory (Bennett & Barnes, 2006b). 
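Under the stated thresholds, saccade detection reduces to a joint test on eye speed and acceleration. A minimal sketch (`detect_saccades` is our name; combining the two thresholds with a logical AND is our reading of the combined criterion):

```python
import numpy as np

def detect_saccades(vx, vy, fs=500.0, vel_thresh=30.0, acc_thresh=2000.0):
    """Flag samples exceeding both the velocity (30 deg/s) and acceleration
    (2,000 deg/s^2) thresholds; contiguous True runs are saccade candidates."""
    speed = np.hypot(vx, vy)                      # 2-D eye speed, deg/s
    accel = np.abs(np.gradient(speed, 1.0 / fs))  # deg/s^2
    return (speed > vel_thresh) & (accel > acc_thresh)
```

Detected saccades ending more than 4° of visual angle from the target would then be discarded, and the post-saccadic error computed as the mean eye-to-target distance over the 10 ms following each remaining saccade.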
Smooth-pursuit movements
Ocular-pursuit bouts were identified from desaccaded eye traces as time epochs during which the mean eye-to-target distance remained below 4° visual angle and the mean ratio between eye and target velocities was between 0.25 and 1.80. The analysis considered only pursuit bouts that occurred within either the perturbation or the occlusion time window and lasted a minimum of 300 or 150 ms, respectively. For each bout, we computed the time lag/lead τ of the eye relative to the ball motion by minimizing the error function:  
\begin{equation}f(\tau) = \sqrt{\left(x_{eye}(t) - x_{ball}(t-\tau)\right)^2 + \left(y_{eye}(t) - y_{ball}(t-\tau)\right)^2},\end{equation}
where x and y represent the coordinates of either eye or ball position, as indicated in subscript (Mrotek & Soechting, 2007; Delle Monache et al., 2015). Briefly, we computed \(f(\tau)\) for each bin of the pursuit bout by varying the value of τ by ±200 ms around the time t of the given bin. Once the error function was defined, we found the τ value corresponding to the minimum of \(f(\tau)\). Negative τ values indicated that the eye led the ball, and positive values indicated time lags. We also computed the pursuit gain as follows:  
\begin{equation}gain\left( t \right) = {\vec v_{eye}}\left( t \right) \cdot {\vec v_{ball}}\left( t \right)/{v_{ball}}^2\left( t \right){\rm {,}}\end{equation}
where \(\vec v\) represents the velocity vector and v its modulus for either the eye or the ball, as indicated in subscript (Mrotek & Soechting, 2007; Delle Monache et al., 2015). Sample-by-sample values of τ and pursuit gain were averaged over the pursuit bout.  
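As a sketch, the lag and gain computations above can be written as follows; the sampling rate, array layout, and function names are illustrative assumptions, not the authors' analysis code:

```python
import numpy as np

def tracking_lag(t_idx, eye_xy, ball_xy, dt=0.001, max_lag_s=0.2):
    """Return the time shift tau (s) minimizing the eye-to-ball distance
    f(tau) at sample t_idx, searching +/-200 ms as in the text.
    Negative tau means the eye led the ball.
    eye_xy, ball_xy: (N, 2) position arrays sampled every dt seconds."""
    max_shift = int(round(max_lag_s / dt))
    shifts = np.arange(-max_shift, max_shift + 1)
    # keep only shifts whose ball index lies inside the recording
    valid = (t_idx - shifts >= 0) & (t_idx - shifts < len(ball_xy))
    shifts = shifts[valid]
    dist = np.linalg.norm(eye_xy[t_idx] - ball_xy[t_idx - shifts], axis=1)
    return shifts[np.argmin(dist)] * dt

def pursuit_gain(v_eye, v_ball):
    """Dot-product gain: projection of eye velocity onto ball velocity,
    normalized by the squared ball speed."""
    return float(np.dot(v_eye, v_ball) / np.dot(v_ball, v_ball))
```

In the study, these sample-by-sample values were then averaged over each pursuit bout.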
Post-reappearance error
Subjects' estimates of the ball position at reappearance were evaluated by computing the post-reappearance error—that is, the mean Euclidean eye-to-target distance in the first 100 ms after the ball reappearance or up to the first saccadic movement occurring within 100 ms of the ball reappearance. 
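A minimal sketch of this error measure, assuming 1-kHz position samples and saccade onsets detected upstream (both hypothetical inputs):

```python
import numpy as np

def post_reappearance_error(eye_xy, ball_xy, t_reappear, saccade_onsets,
                            dt=0.001, window_s=0.1):
    """Mean Euclidean eye-to-target distance from the reappearance sample
    up to 100 ms later, or up to the first saccade onset falling inside
    that window. Indices are samples at dt resolution."""
    end = t_reappear + int(round(window_s / dt))
    for onset in sorted(saccade_onsets):
        if t_reappear <= onset < end:
            end = onset  # truncate at the first saccade in the window
            break
    dist = np.linalg.norm(eye_xy[t_reappear:end] - ball_xy[t_reappear:end],
                          axis=1)
    return float(dist.mean())
```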
Statistical analyses of oculomotor parameters
Statistical analyses were performed on the oculomotor parameters extracted from smooth-pursuit bouts and saccadic movements occurring during either the perturbation or the occlusion window (i.e., pursuit gain and τ, post-saccadic error, and saccadic frequency), as well as on the post-reappearance error. To simplify data interpretation, these analyses focused exclusively on the effects of gravity level and visual scenario. Therefore, the manipulations of the perturbation and occlusion intervals, as well as those of the target's initial velocity, served only to increase the overall number and variability of the experimental conditions. For example, the different perturbation intervals made the point and time of the acceleration change unpredictable from trial to trial. Similarly, varying the lengths of the occlusion intervals created uncertainty about the time and place of the ball's reappearance. However, the effects of these two experimental manipulations on oculomotor behavior could not be interpreted uniquely, because they were associated with changes in both the trajectory curvature and the instantaneous target velocity. The ball's initial velocity was not considered as a predictor of oculomotor behavior, on the basis of previous experiments with similar ballistic trajectories, which indicated only marginal effects on oculomotor and interceptive behavior (Delle Monache et al., 2015). 
In sum, data collected during the various experimental conditions were collapsed with respect to the ball's acceleration level and the type of visual scene by averaging, separately for each experimental block with either the pictorial or the neutral scenario, the oculomotor-parameter values across trials with the same acceleration level. Data sets created for each oculomotor parameter by pooling mean values across subjects were submitted to generalized linear mixed models (GLMMs; see Moscatelli, Mezzetti, & Lacquaniti, 2012). Gravity level (0g, 1g, 2g), scenario (pictorial, neutral), scenario order (G1, G2), and their two- and three-way interactions were used as predictors of the oculomotor parameters and treated as fixed effects. A subject-identifier variable (28 levels) was included as a dummy random effect to control for unaccounted heterogeneity in the individual subjects' average response. Post-saccadic errors, saccadic frequency, pursuit gain, and post-reappearance errors were modeled with gamma distributions, and τ values with a normal distribution. The parameter estimate covariance matrix was computed by using the robust estimate method to account for violations of model assumptions. The covariance structure for the residuals was specified as variance components. The degrees of freedom for the significance tests were computed with the residual method and the significance level was set at p < 0.01. GLMM analysis was performed with the statistical software package SPSS (Version 23.0). 
Analysis of reaction-time responses
Reaction times to the color changes of the ball were computed by subtracting the times of the ball color changes from the button-press times. We discarded 9.4% of the recorded reaction-time responses as anticipated or late responses, being respectively shorter than 50 ms or longer than 400 ms. We also excluded from further analysis the reaction-time responses to events occurring within 10 ms of a saccade, because they could have been affected by the transient blanking of visual information associated with saccadic movements (1.5% of the remaining reaction-time responses). 
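The two exclusion criteria can be expressed as a small filter; the `near_saccade_flags` input, marking events that fell within 10 ms of a saccade, is a hypothetical precomputed input:

```python
def clean_reaction_times(rts_ms, near_saccade_flags):
    """Apply the two exclusion criteria described in the text:
    (1) drop anticipated (<50 ms) and late (>400 ms) responses;
    (2) drop responses to events flagged as occurring within 10 ms
        of a saccade (flags computed upstream from the eye traces)."""
    kept = [(rt, flag) for rt, flag in zip(rts_ms, near_saccade_flags)
            if 50 <= rt <= 400]
    return [rt for rt, flag in kept if not flag]
```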
For each subject, we averaged the reaction-time responses to all events occurring in a given trial, and then we computed the mean values across trials separately for the blocks in which either the pictorial or the neutral scenario was presented. The data set of mean reaction-time values pooled from all subjects was submitted to a two-way mixed analysis of variance (ANOVA) with scenario and scenario order as within-subject and between-subjects factors, respectively. Greenhouse–Geisser corrections were applied to the significance levels of the ANOVA factors. 
Unlike in the GLMMs applied to the oculomotor parameters, gravity level was not included as a predictor in the ANOVAs of the reaction-time responses, because the ball color changes occurred too sporadically (every 800–1,200 ms), and at random points of the visible portions of the trajectories, for individual subjects to provide a consistent number of responses to events occurring in corresponding segments of the ball trajectories across trials. ANOVAs were carried out using SPSS (Version 23.0). 
Results
We found that ocular tracking was affected by the gravity levels applied to the moving targets, but this effect depended on the visual context. Saccades and smooth-pursuit movements, in fact, were strongly dependent on the target acceleration when subjects tracked the ball trajectories in the pictorial scenario, but to a much lesser extent when the same target motion was displayed on a uniform background. 
Ocular tracking following perturbations of the gravity acceleration
The effects of the experimental manipulations of the effects of gravity on the target motion and of the visual context on ocular tracking were assessed by analyzing the oculomotor parameters extracted from saccadic and smooth-pursuit movements during the perturbation window—that is, the time interval from 100 ms after the trajectory perturbation until target disappearance due to occlusion 450 ms later. 
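For concreteness, the gravity manipulation applied to the targets can be sketched with a simple Euler integration; units, initial conditions, and the time step are illustrative and do not reproduce the study's display geometry:

```python
import numpy as np

G = 9.81  # natural gravitational acceleration, m/s^2

def ballistic_trajectory(v0, angle_deg, t_perturb, g_factor, t_end, dt=0.01):
    """Euler-integrate a 2-D ballistic trajectory that follows natural 1g
    motion until t_perturb, then continues under g_factor * G (0, 1, or 2),
    mimicking the 0g/1g/2g perturbations of the descending segment."""
    theta = np.radians(angle_deg)
    vx, vy = v0 * np.cos(theta), v0 * np.sin(theta)
    x = y = 0.0
    xs, ys = [], []
    for t in np.arange(0.0, t_end, dt):
        g = g_factor * G if t >= t_perturb else G
        xs.append(x)
        ys.append(y)
        x += vx * dt
        vy -= g * dt
        y += vy * dt
    return np.array(xs), np.array(ys)
```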
Saccadic movements
The results of the GLMMs applied to the data sets of post-saccadic error values and saccadic frequencies during the perturbation window are summarized in Table 1A. 
Table 1
 
Results of generalized linear mixed model analyses on indexes derived from (A) saccadic movements (post-saccadic error and saccadic frequency) and (B) smooth-pursuit movements (τ and gain) during the perturbation temporal window. Statistically significant fixed effects (p < 0.01) are in bold.
First of all, post-saccadic errors were significantly smaller in the neutral compared to the pictorial scenario (main effect of scenario; see also the right subplot of Figure 4A). In order to facilitate the interpretation of this effect, we performed an additional analysis motivated by the earlier evidence that primary saccades to visual targets in a saccadic adaptation task were systematically less hypometric in a patterned visual background than in complete darkness (Gerardin, Gaveau, Pélisson, & Prablanc, 2011). For continuously moving targets, in fact, these results may imply a stronger influence of the visual background on the post-saccadic errors along the direction of motion than in directions away from it. We tested this possibility by relating the post-saccadic error to a coordinate system centered on the ball position at the end of the saccade and with the x-axis tangent to the ball trajectory, thereby parsing the post-saccadic error into its components along the direction of motion of the target and orthogonal to it. Significant differences between scenarios emerged only for the post-saccadic error component tangential to the target trajectory (paired t test: t(27) = 7.1, p < 0.001), with the positive t value indicating that in the pictorial scenario, errors were larger in the forward direction of motion. 
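The decomposition into tangential and orthogonal components can be sketched as a small vector computation (function and argument names are illustrative; the coordinate convention follows the text):

```python
import numpy as np

def decompose_error(eye_xy, ball_xy, ball_vel_xy):
    """Split the post-saccadic error (eye minus ball position) into a
    component tangent to the ball trajectory (positive = ahead of the
    ball along its motion) and a component orthogonal to it."""
    err = np.asarray(eye_xy, float) - np.asarray(ball_xy, float)
    tang = np.asarray(ball_vel_xy, float)
    tang = tang / np.linalg.norm(tang)      # unit vector along the motion
    orth = np.array([-tang[1], tang[0]])    # unit vector 90 deg CCW
    return float(err @ tang), float(err @ orth)
```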
Figure 4
 
Ocular-tracking parameters during the perturbation interval. (A) In the left panel, mean post-saccadic errors computed among subjects by pooling experimental conditions with the same gravity level are plotted against the gravity level (±SEM). The right panel shows the mean post-saccadic error (±SEM) computed, separately for G1 and G2 subjects, across experimental conditions of a given block with either the neutral or the pictorial scenario. Mean values in the pictorial and neutral scenarios are indicated by black filled circles and gray filled triangles, respectively. (B–D) Same layout as in (A) for, respectively, saccadic frequency, smooth-pursuit τ, and gain values.
An even stronger predictor of the post-saccadic errors during the perturbation window was the level of gravity acceleration imposed on the targets after perturbation. This effect was explained by larger post-saccadic errors in response to 0g compared to accelerated 1g and 2g motion (left panel of Figure 4A), and thus it appeared consistent with the idea that implicit knowledge of gravity effects on the ball motion was integrated in the saccadic plan. 
Moreover, consistent with our hypothesis that internalized gravity information may be weighted more in the presence of visual cues of naturalness, we found a significant Scenario × Gravity level interaction, which accounted for the observation that post-saccadic errors in response to constant velocity and accelerated targets were significantly different only in the pictorial, not the neutral, scenario. The effects of visual scenario and gravity level on the post-saccadic error depended further on the order in which the two scenarios were presented (three-way Scenario × Gravity level × Scenario order interaction). 
Saccadic frequencies depended significantly on the ball acceleration, with higher values during 0g than accelerated 1g and 2g motion (see the left panel in Figure 4B). This result may suggest that subjects broke smooth pursuit more often with 0g trajectories, likely to correct for tracking errors resulting from inaccurate predictions of the ball motion. This seems compatible with the idea that anticipation of gravity effects on the ball motion was applied to constant-velocity motion, leading to larger pursuit errors. 
Saccadic frequencies also depended significantly on the Scenario × Scenario order interaction effect (see Table 1A). In fact, only G2 subjects showed significantly different saccadic frequencies between the two visual scenarios (right panel of Figure 4B). 
Smooth-pursuit movements
Table 1B summarizes the GLMM results for advance/delay (τ) and gain of smooth-pursuit bouts during the perturbation window. Figure 4C shows that τ values varied greatly with the type of visual scene (see right panel) and the ball acceleration (see left panel). In fact, eye motion led ball motion more when subjects tracked in the pictorial scenario (main effect of scenario) and in response to 0g motion (main effect of ball acceleration) compared to the other types of motion. Interestingly, the differences in τ values across target accelerations were larger when subjects tracked the targets in the pictorial scenario (two-way Scenario × Gravity level interaction), suggesting, again, a stronger influence of a priori knowledge of gravity. Congruently, in the pictorial scenario we observed a much larger time lead in response to 0g trials, which may go along with a greater expectation of gravity effects on the ball motion (see Figure 5C, left panel). The statistical significance of the three-way Scenario × Gravity level × Scenario order interaction was mostly explained by the different τ values shown by G1 and G2 subjects tracking 2g trials in the pictorial visual scene. 
Figure 5
 
Ocular-tracking parameters during the occlusion interval. Same layout as Figure 4.
Pursuit gain was also influenced significantly by the visual scenario, with higher gain values when subjects tracked the ball motion in the neutral compared to the pictorial scenario (Figure 4D). Interestingly, the strongest predictor for this oculomotor parameter was represented by the two-way Scenario × Gravity level interaction, which accounted for the fact that differences in pursuit gain across ball accelerations were evident only when subjects tracked the targets in the pictorial scene. This pattern resembled that reported for post-saccadic error (compare Figure 4D and 4A), in that it denotes better tracking performance in the pictorial scenario for accelerated 1g and 2g trials than for 0g trials, which may be compatible with predictions based on an expectation of gravity effects. 
Ocular tracking during visual occlusion of the target motion
Ocular tracking in the absence of visual motion feedback was evaluated by analyzing the oculomotor indexes extracted from saccadic and smooth-pursuit movements during the occlusion window, which comprised 350 ms of occluded motion starting 100 ms after the target disappearance. 
Saccadic movements
As in the perturbation window, subjects made, on average, smaller post-saccadic errors when tracking the targets in the neutral compared to the pictorial scenario (main effect of scenario; see Figure 5A and Table 2). Analogously, analysis of the orthogonal components of the post-saccadic error indicated that differences between visual scenarios were mostly confined to the component tangent to the target trajectory, denoting greater anticipation of the target trajectory in the pictorial scenario (paired t test, tangential component: t(27) = 7.8, p < 0.001; orthogonal component: t(27) = 1.8, p = 0.08). Post-saccadic errors were also significantly influenced by the target acceleration, as they tended to decrease with increasing gravity level (main effect of gravity level). More importantly, the distribution of post-saccadic errors replicated the pattern, observed in the perturbation window and modeled by the two-way Scenario × Gravity level interaction, whereby differences across gravity levels occurred only when subjects tracked the ball trajectories in the pictorial scene (see left panel of Figure 5A). 
Table 2
 
Results of generalized linear mixed model analyses on indexes derived from (A) saccadic movements (post-saccadic error and saccadic frequency) and (B) smooth-pursuit movements (τ and gain) during the occlusion temporal window. Statistically significant fixed effects (p < 0.01) are in bold.
The results of the analysis of saccadic frequency during the occlusion window also showed strong similarities with those reported for the perturbation window, albeit with overall smaller effects. Gravity level was the main predictor, with 1g trials evoking higher saccadic frequencies than 2g trials. Moreover, a significant two-way Scenario × Scenario order interaction accounted for the facts that only G1 subjects, who experienced the pictorial scene first, showed different saccadic frequencies between the two visual scenes and that saccadic frequencies between the two subject groups were significantly different only in the neutral scenario (see Figure 5B, right panel). 
Smooth-pursuit movements
Figure 5C shows that, as in the perturbation window, eye motion in the absence of visual feedback tended to lead ball motion more when ocular tracking was performed in the pictorial scene (main effect of scenario) and in response to 0g compared to accelerated 1g and 2g motion (main effect of gravity level). Larger differences in τ values across gravity levels were observed, again, in the pictorial compared to the neutral scene (two-way Scenario × Gravity level interaction), supporting the idea that the naturalness of the visual background influenced the relative weighting of information in the predictive processes underlying ocular tracking of hidden targets. 
This was evident also from the analysis of the smooth-pursuit gain (see Figure 5D and Table 2B). This parameter, in fact, was strongly affected by the visual context, with higher values when subjects tracked the ball trajectories in the neutral scene (main effect of scenario; see right panel in Figure 5D). Moreover, the significant two-way Scenario × Gravity level interaction accounted for the finding that pictorial-scene gain values followed an increasing monotonic trend with gravity level (see Figure 5D, left panel). 
Estimates of target position at reappearance
We analyzed subjects' estimates of the ball position at reappearance by considering the post-reappearance error (see Methods). Post-reappearance errors depended significantly on the gravity level, but as with other oculomotor parameters, the dependence on the gravity level was stronger when subjects tracked the ball in the pictorial scenario, as denoted by the significant two-way Scenario × Gravity level interaction (see Table 3 and Figure 6). 
Table 3
 
Results of generalized linear mixed model analyses on the post-reappearance error. Statistically significant fixed effects (p < 0.01) are in bold.
Figure 6
 
Post-reappearance errors. In the left panel, mean post-reappearance errors computed among subjects by pooling experimental conditions with the same gravity level are plotted against the gravity level (±SEM). The right panel shows the mean post-reappearance error (±SEM) computed, separately for G1 and G2 subjects, across experimental conditions of a given block with either the neutral or the pictorial scenario. Mean values in the pictorial and neutral scenarios are indicated by black filled circles and gray filled triangles, respectively.
In addition, we found a significant two-way Scenario × Scenario order interaction, accounting for the observation that subjects tended to make larger errors in the first scenario they experienced (the pictorial scenario for G1 subjects, the neutral one for G2 subjects), and this trend was more pronounced for G1 than G2 subjects (Figure 6, right panel). 
Reaction-time responses
Figure 7 illustrates the mean reaction-time responses observed in G1 and G2 subjects (who experienced the two visual scenarios in opposite order) when they performed the reaction-time task in either the pictorial (black circles) or the neutral (gray triangles) visual scene. Both groups, on average, responded more quickly to the ball color change with the scenario presented in their second block of trials (neutral for G1 subjects, pictorial for G2 subjects). This shortening of reaction-time responses in the second block of trials was much stronger for G2 than G1 subjects, accounting for the high statistical significance of the effect of the Scenario × Scenario order interaction in the repeated-measures ANOVA (F(1,26) = 19.3, \(\eta _{\rm{p}}^2\) = 0.42, p < 0.001). This analysis also pointed out a much smaller main effect of visual scenario, explained primarily by the shorter reaction-time responses in the pictorial compared to the neutral scenario observed in G2 subjects (F(1,26) = 6.92, \(\eta _{\rm{p}}^2\) = 0.21, p = 0.014). In sum, reaction-time responses appeared to be influenced not so much by the visual background as by a combination of practice with the task and the sequence in which the two visual scenarios were experienced. 
Figure 7
 
Reaction-time responses. Mean reaction-time values (±SEM) in the pictorial (black filled circles) and neutral (gray filled triangles) scenarios were computed separately for the two subject groups that experienced either the pictorial (G1) or the neutral (G2) scenario first.
Discussion
In the present study, we tested the idea that presupposed knowledge of gravity might contribute to the predictive control of eye-tracking movements depending on the naturalness of the visual context. For this purpose, we asked healthy human subjects to track computer-simulated ballistic trajectories with altered-gravity effects and displayed over either a uniform background or a pictorial realistic scenario. 
Effects of the visual background
The visual context in which ballistic trajectories were displayed was one of the significant factors affecting the ocular-tracking parameters. Previous studies have shown that presenting target motion over patterned backgrounds can influence the speed, acceleration, and latency of smooth-pursuit movements, and can fragment eye tracking with a higher number of saccades because of a potential interference between the optokinetic reflex pathway (stimulated by the textured background) and the smooth-pursuit system (Yee, Daniels, Jones, Baloh, & Honrubia, 1983; Collewijn & Tamminga, 1984; Keller & Khan, 1986; Howard & Marton, 1992; Worfolk & Barnes, 1992; Masson, Proteau, & Mestre, 1995; Mohrmann & Thier, 1995; Niemann & Hoffmann, 1997; Lindner, Schwarz, & Ilg, 2001; Spering & Gegenfurtner, 2007; Kreyenmeier, Fooken, & Spering, 2017). In this respect, the lower pursuit gains and higher saccadic frequencies observed in the pictorial scenario are consistent with this earlier evidence. 
In the pictorial scenario, subjects also made larger post-saccadic errors. In particular, saccadic movements placed the eyes farther ahead of the target, a result compatible with previously reported reduced saccadic hypometry in a patterned background compared to complete darkness (Gerardin et al., 2011) and rather suggestive of greater anticipation of the target's future trajectory in the pictorial scenario. This idea is supported by the more negative smooth-pursuit τ values observed in this scenario, which imply larger temporal anticipation of the target trajectory during smooth-pursuit bouts. Overall, these latter two findings appear congruent with other studies suggesting a predominance of anticipatory mechanisms when target motion is tracked in more “realistic” visual environments, which can provide cues that may make the target motion more easily interpretable (Bahill & McDonald, 1983; Kowler, Martins, & Pavel, 1984; Lisberger, Morris, & Tychsen, 1987; Kowler, 1989; Boudet, Bocca, Dollfus, & Denise, 2006; Collins & Barnes, 2009; Kowler, Aitkin, Ross, Santos, & Zhao, 2014). 
A borderline significant effect of the visual context was observed also on reaction-time responses, which were shorter, on average, in the pictorial scenario. The order of magnitude of this effect (∼20 ms) was comparable to that reported in an earlier study where the shortening of reaction-time responses in the pictorial scenario compared to the uniform background failed to reach the statistical level of significance (Miller et al., 2008). In the current study, however, the highly significant Scenario × Scenario order interaction indicated further that reaction-time responses tended to be shorter in the second experimental block, and more so for G2 subjects, who experienced the pictorial scenario in their second block of trials (see also Figure 7)—suggesting that reaction-time responses were influenced not as much by the visual background but predominantly by practice, and perhaps also by idiosyncratic differences between subject groups. 
Internalized gravity information is used primarily if the visual context is congruent
The gravity acceleration imposed on the ball motion after perturbation was another strong factor affecting the ocular-tracking parameters (see Tables 1–3). The main effect of gravity level was accompanied, for most oculomotor parameters, by highly significant Scenario × Gravity level interactions, denoting a strong dependence of the effects of target acceleration on the visual context. In fact, oculomotor parameters showed more similar values across target accelerations when tracking was performed with the uniform background, whereas with the pictorial scenario they were systematically different between accelerated (both 1g and 2g) and constant-velocity targets. Oculomotor differences between 1g and 2g accelerated motion were, however, sporadic and not as consistent. Previous studies have examined manual interceptive and oculomotor responses under similar experimental conditions and reported smaller and more sporadic differences between the responses to accelerated 1g and 2g motion compared to the differences occurring between constant-velocity and accelerated motion (Bosco et al., 2012; Delle Monache et al., 2015). The similar responses to 1g and 2g stimuli observed across these studies could depend on the poor sensitivity of the visual system to retinal acceleration (de Brouwer, Missal, Barnes, & Lefèvre, 2002; Zago et al., 2009), as well as on the fact that the pictorial information provided by the "realistic" visual scene, which was common to all studies, may have been insufficient to properly scale proximal retinal acceleration to distal world acceleration. It should be remarked that in the earlier studies, the smaller response differences between 1g and 2g targets were nevertheless congruent with the larger ones observed between 0g and accelerated targets, in that they both seemed to reflect the use of implicit knowledge of gravity effects on the target motion. 
Based on these considerations, we may interpret the result that ocular tracking of accelerated motion in the pictorial scenario was slightly more accurate than that of constant-velocity targets as being compatible with the idea that a priori knowledge of gravity was integrated in the oculomotor plan depending on whether visual-context information was congruent with a natural setting. This interpretation, moreover, goes along with previous experimental work suggesting that expectations of gravity effects could be enhanced by realistic environments (Miller et al., 2008; Maffei et al., 2010; Zago, La Scaleia, Miller, & Lacquaniti, 2011; Fiori, Candidi, Acciarino, David, & Aglioti, 2015; Jörges & López-Moliner, 2017; for reviews, see Lacquaniti et al., 2014, 2015). 
Both saccadic and smooth-pursuit control use internalized gravity information
Oculomotor parameters derived from pursuit and saccadic movements were generally explained by a similar set of GLMM predictors (see Tables 1–3). This commonality between smooth-pursuit and saccade properties may be consistent with the idea that during ocular tracking, pursuit and saccadic movements represent outcomes of the same internal path (Collewijn & Tamminga, 1984; de Brouwer, Missal, & Lefèvre, 2001; de Brouwer et al., 2002; Blohm, Missal, & Lefèvre, 2003; Bennett & Barnes, 2006b; Orban de Xivry et al., 2006; Kreyenmeier et al., 2017; for a review, see Orban de Xivry & Lefèvre, 2007). In particular, the statistical significance of the main effect of gravity level and of the two-way Scenario × Gravity level interaction for both pursuit and saccadic parameters may imply that both types of eye movements integrate implicit knowledge of gravity. This integration may occur at relatively high hierarchical stages of neural processing, before the information is relayed to the pursuit and saccadic components of the oculomotor system. Among the cortical areas involved in oculomotor control, frontal and supplementary eye fields could represent reasonable candidates, since their neural activity has been related to the extrapolation of occluded target motion in conjunction with other brain areas, such as the lateral intraparietal cortex and the cerebellum (Assad & Maunsell, 1995; Eskandar & Assad, 1999; Fukushima, Yamanobe, Shinmei, & Fukushima, 2002; Barborica & Ferrera, 2003, 2004; Olson, Gatenby, Leung, Skudlarski, & Gore, 2004; Nagel et al., 2006; Xiao, Barborica, & Ferrera, 2007; Fukushima et al., 2008; O'Reilly, Mesulam, & Nobre, 2008; Cerminara, Apps, & Marple-Horvat, 2009; Ferrera & Barborica, 2010; Schmitt, Klingenhoefer, & Bremmer, 2018). 
Moreover, because of their responsiveness to vestibular stimulation, frontal and supplementary eye fields have been associated with the vestibular network, a constellation of cortical and subcortical brain areas receiving vestibular input that neuroimaging and transcranial magnetic stimulation studies have indicated as putative neural correlates of the internal model of gravity (Guldin & Grüsser, 1998; Brandt & Dieterich, 1999; Indovina et al., 2005; Bosco et al., 2008; Miller et al., 2008; Delle Monache et al., 2017; for reviews, see Dieterich & Brandt, 2015). 
Weighted integration of internalized gravity and visual information
Another aspect inherent to the functional significance of the effect of the Scenario × Gravity level interaction concerns the combinatorial nature of the information processed by the oculomotor control centers, at least within the present experimental conditions. This implies that visual-context signals are used by oculomotor control centers to adjust the relative weight between moment-to-moment visual information about the target kinematics processed by visual motion areas and internalized gravity information relayed by vestibular-network areas (Zago et al., 2008; for reviews, see Lacquaniti et al., 2014, 2015). According to this scheme, if visual-context information is congruent with a natural setting, predictive estimates of the target trajectory would reflect mostly internalized knowledge of the effects of natural gravity and, to a lesser extent, incoming visual motion information. This emerged, in part, as a general preference for accelerated motion compared to arbitrary motion in the pictorial scenario, exemplified by lower saccadic errors, lower saccadic frequency and higher pursuit gains during 1g and 2g trials. Similarly, the longer time leads of the eye relative to the ball when pursuing 0g trajectories in the pictorial scenario are consistent with stronger anticipation of the effects of gravity acceleration on the ball motion in the presence of visual cues about the naturalness of the visual scene. As already mentioned, the evidence reported here may be limited by the fact that, contrary to our original hypothesis, subjects' ocular responses did not differentiate between natural-like and enhanced gravity, perhaps because scaling between 1g and 2g accelerated motion in the realistic pictorial scenario was ambiguous. Conversely, visual information about the target kinematics would prevail over internalized gravity information if the visual context does not provide clear information about the causal nature of the target motion. 
Indeed, the similarity of oculomotor parameters across gravity levels we generally observed in the neutral scenario may be consistent with the predominance of visual information in guiding ocular-tracking behavior. 
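The weighted-combination scheme outlined above can be summarized with a simple formalization. This is our illustrative sketch, not a model fitted in the present study; in particular, the weighting function w(C) is an assumption introduced here for clarity:

â = w(C) · g + [1 − w(C)] · a_vis,

where â is the acceleration estimate driving predictive tracking, a_vis is the acceleration extracted from moment-to-moment retinal motion, g is the internalized gravitational acceleration, and w(C) ∈ [0, 1] is a weight that increases with the naturalness of the visual context C. On this view, w(C) would approach its maximum in the pictorial scenario, so that predictions lean on internalized gravity, and its minimum in the neutral scenario, where on-line visual estimates prevail.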
Notably, similar effects of the visual context on behavioral responses to motion either congruent or incongruent with the effects of natural gravity were reported in an earlier study of ours that investigated the issue in the framework of manual interceptive actions (Miller et al., 2008). That study identified the posterior cerebellar vermis and the vestibular nuclei as putative neural structures responsible for extracting information from the visual context that could facilitate the interpretation of the causal nature of the target motion. In a similar vein, future neuroimaging and transcranial magnetic stimulation experiments may help reveal the reweighting of information we hypothesized here, either as differential activity changes between brain areas involved in processing visual motion and internalized gravity information or as brain activity correlated with the effect of the Scenario × Gravity level interaction, as Miller et al. reported for the cerebellar vermis and the vestibular nuclei. With respect to this latter possibility, the oculomotor regions of the posterior cerebellar vermis (McElligott & Keller, 1984; Fujikado & Noda, 1987; Suzuki & Keller, 1988; Krauzlis & Miles, 1998; O'Driscoll et al., 2000; Tanabe, Tregellas, Miller, Ross, & Freedman, 2002; Konen, Kleiser, Seitz, & Bremmer, 2005; Müri, 2006) are potential candidates for modulating the relative weighting between internalized gravity and visual motion information in the predictive processes underlying ocular-tracking control. 
Ocular tracking of visible and occluded trajectories may share common properties
The significant effects of most GLMM predictors, including the two-way Scenario × Gravity level interaction, extended across the two temporal windows (perturbation and occlusion) we defined to examine the ocular-tracking parameters in relation to the target's motion perturbation and disappearance. This finding suggests that the combination of internalized and visual motion information that guided ocular tracking of visible targets during the perturbation window may also have contributed to the visual extrapolation processes that continued to drive the eyes along the invisible trajectory during the occlusion window. This interpretation of our behavioral results accords with previous neuroimaging evidence indicating that the processing of visible and occluded motion can share the same mechanisms (Olson et al., 2004). 
Effects of prior experience with the visual background
Finally, the effects of visual context and ball acceleration on some of the oculomotor parameters depended also on the order in which the visual scenarios were presented (see, in Tables 1–3, the statistical significance of the interaction terms that included scenario order). This result hints that prior experience with one visual scenario may have affected tracking performance in the other. For example, the significant Scenario × Gravity level × Scenario order and Scenario × Scenario order interactions we reported, respectively, for post-saccadic errors and saccadic frequencies during the perturbation window may indicate that subjects who experienced the pictorial scenario first showed more similar oculomotor parameters between the two scenarios, carrying over the scaling information derived from the quasi-realistic pictorial elements into their second block of trials with the neutral scenario. However, the opposite held for other oculomotor parameters, such as the pursuit τ values in the perturbation window and the saccadic frequencies in the occlusion window. These interaction effects of scenario order must therefore be interpreted cautiously, because the two orders of visual scenarios were tested on two distinct groups of subjects. With this experimental design, it is difficult to disentangle genuine effects of scenario order from intergroup differences in ocular-tracking performance, even though the size of the two groups (n = 14 each) could be sufficient to average out individual subjects' idiosyncratic oculomotor behavior. In this respect, further experiments are needed to evaluate in depth whether ocular-tracking performance is influenced by different prior experience with natural and neutral visual contexts. 
Conclusions
This study presented novel evidence that predictive control of eye-tracking movements can take advantage of an internal model of natural gravity effects on external object motion. Moreover, the weight with which internalized gravity information is integrated with sensory information depends critically on the naturalness of the overall visual context. Finally, we reported suggestive evidence that the order in which the realistic and neutral scenarios were experienced might also influence ocular-tracking behavior. 
Acknowledgments
We thank Nuno Alexandre De Sá Teixeira and Riccardo Ingrosso for helping with the data collection in a few experimental sessions. This work was supported by the Italian University Ministry (PRIN Grant 2015HFWRYY_002), the Italian Space Agency (Grant I/006/06/0), the Horizon 2020 Robotics Program (ICT-23-2014 under Grant Agreement 644727-CogIMon), and the University of Tor Vergata (Consolidating the Foundation Grant 2015). 
Commercial relationships: none. 
Corresponding author: Sergio Delle Monache. 
Address: Department of Systems Medicine, Neuroscience Section, University of Rome Tor Vergata, Rome, Italy. 
References
Abrams, R. A., Meyer, D. E., & Kornblum, S. (1990). Eye-hand coordination: Oculomotor control in rapid aimed limb movements. Journal of Experimental Psychology: Human Perception and Performance, 16 (2), 248–267, https://doi.org/10.1037/0096-1523.16.2.248.
André-Deshays, C., Israël, I., Charade, O., Berthoz, A., Popov, K., & Lipshits, M. (1993). Gaze control in microgravity: 1. Saccades, pursuit, eye-head coordination. Journal of Vestibular Research: Equilibrium & Orientation, 3 (3), 331–343.
Angelaki, D. E., Shaikh, A. G., Green, A. M., & Dickman J. D. (2004, July 29). Neurons compute internal models of the physical laws of motion. Nature, 430 (6999), 560–564, https://doi.org/10.1038/nature02754.
Assad, J. A., & Maunsell, J. H. (1995, February 9). Neuronal correlates of inferred motion in primate posterior parietal cortex. Nature, 373 (6514), 518–521, https://doi.org/10.1038/373518a0.
Bahill, A. T., & McDonald, J. D. (1983). Model emulates human smooth pursuit system producing zero-latency target tracking. Biological Cybernetics, 48 (3), 213–222, https://doi.org/10.1007/BF00318089.
Barborica, A., & Ferrera, V. P. (2003). Estimating invisible target speed from neuronal activity in monkey frontal eye field. Nature Neuroscience, 6 (1), 66–74, https://doi.org/10.1038/nn990.
Barborica, A., & Ferrera, V. P. (2004). Modification of saccades evoked by stimulation of frontal eye field during invisible target tracking. The Journal of Neuroscience, 24 (13), 3260–3267, https://doi.org/10.1523/JNEUROSCI.4702-03.2004.
Barnes, G. R., & Collins, C. J. S. (2008). Evidence for a link between the extra-retinal component of random-onset pursuit and the anticipatory pursuit of predictable object motion. Journal of Neurophysiology, 100 (2), 1135–1146, https://doi.org/10.1152/jn.00060.2008.
Becker, W., & Fuchs, A. F. (1985). Prediction in the oculomotor system: Smooth pursuit during transient disappearance of a visual target. Experimental Brain Research, 57, 562–575, https://doi.org/10.1007/BF00237843.
Bennett, S. J., & Barnes, G. R. (2003). Human ocular pursuit during the transient disappearance of a visual target. Journal of Neurophysiology, 90 (4), 2504–2520, https://doi.org/10.1152/jn.01145.2002.
Bennett, S. J., & Barnes, G. R. (2004). Predictive smooth ocular pursuit during the transient disappearance of a visual target. Journal of Neurophysiology, 92 (1), 578–590, https://doi.org/10.1152/jn.01188.2003.
Bennett, S. J., & Barnes, G. R. (2005). Timing the anticipatory recovery in smooth ocular pursuit during the transient disappearance of a visual target. Experimental Brain Research, 163, 198–203, https://doi.org/10.1007/s00221-004-2164-y.
Bennett, S. J., & Barnes, G. R. (2006a). Combined smooth and saccadic ocular pursuit during the transient occlusion of a moving visual object. Experimental Brain Research, 168, 313–321, https://doi.org/10.1007/s00221-005-0101-3.
Bennett, S. J., & Barnes, G. R. (2006b). Smooth ocular pursuit during the transient disappearance of an accelerating visual target: The role of reflexive and voluntary control. Experimental Brain Research, 175, 1–10, https://doi.org/10.1007/s00221-006-0533-4.
Bennett, S. J., Orban de Xivry, J. J., Barnes, G. R., & Lefèvre, P. (2007). Target acceleration can be extracted and represented within the predictive drive to ocular pursuit. Journal of Neurophysiology, 98 (3), 1405–1414, https://doi.org/10.1152/jn.00132.2007.
Bennett, S. J., Orban de Xivry, J. J., Lefèvre, P., & Barnes, G. R. (2010). Oculomotor prediction of accelerative target motion during occlusion: Long-term and short-term effects. Experimental Brain Research, 204 (4), 493–504, https://doi.org/10.1007/s00221-010-2313-4.
Binsted, G., Chua, R., Helsen, W., & Elliott, D. (2001). Eye-hand coordination in goal-directed aiming. Human Movement Science, 20, 563–585, https://doi.org/10.1016/S0167-9457(01)00068-9.
Blohm, G., Missal, M., & Lefèvre, P. (2003). Interaction between smooth anticipation and saccades during ocular orientation in darkness. Journal of Neurophysiology, 89 (3), 1423–1433, https://doi.org/10.1152/jn.00675.2002.
Bosco, G., Carrozzo, M., & Lacquaniti, F. (2008). Contributions of the human temporoparietal junction and MT/V5+ to the timing of interception revealed by transcranial magnetic stimulation. The Journal of Neuroscience, 28 (46), 12071–12084, https://doi.org/10.1523/JNEUROSCI.2869-08.2008.
Bosco, G., Delle Monache, S., Gravano, S., Indovina, I., La Scaleia, B., Maffei, V.,… Lacquaniti, F. (2015). Filling gaps in visual motion for target capture. Frontiers in Integrative Neuroscience, 9 (13), 1–17, https://doi.org/10.3389/fnint.2015.00013.
Bosco, G., Delle Monache, S., & Lacquaniti, F. (2012). Catching what we can't see: Manual interception of occluded fly-ball trajectories. PLoS One, 7 (11), e49381, https://doi.org/10.1371/journal.pone.0049381.
Boudet, C., Bocca, M. L., Dollfus, S., & Denise, P. (2006). The saccadic component of ocular pursuit is influenced by the predictability of the target motion in humans. Experimental Brain Research, 168, 294–297, https://doi.org/10.1007/s00221-005-0181-0.
Bowman, M. C., Johansson, R. S., & Flanagan, J. R. (2009). Eye-hand coordination in a sequential target contact task. Experimental Brain Research, 195, 273–283, https://doi.org/10.1007/s00221-009-1781-x.
Brancazio, P. J. (1985). Looking into Chapman's homer: The physics of judging a fly ball. American Journal of Physics, 53, 849–855, https://doi.org/10.1119/1.14350.
Brandt, T., & Dieterich, M. (1999). The vestibular cortex: Its locations, functions, and disorders. Annals of the New York Academy of Sciences, 871, 293–312, https://doi.org/10.1111/j.1749-6632.1999.tb09193.x.
Brenner, E., & Smeets, J. B. J. (2009). Sources of variability in interceptive movements. Experimental Brain Research, 195, 117–133, https://doi.org/10.1007/s00221-009-1757-x.
Brenner, E., & Smeets, J. B. J. (2011). Continuous visual control of interception. Human Movement Science, 30 (3), 475–494, https://doi.org/10.1016/j.humov.2010.12.007.
Brostek, L., Eggert, T., & Glasauer, S. (2017). Gain control in predictive smooth pursuit eye movements: Evidence for an acceleration-based predictive mechanism. eNeuro, 4 (3), ENEURO.0343-16.2017, https://doi.org/10.1523/ENEURO.0343-16.2017.
Carnahan, H., & Marteniuk, R. G. (1991). The temporal organization of hand, eye, and head movements during reaching and pointing. Journal of Motor Behavior, 23 (2), 109–119, https://doi.org/10.1080/00222895.1991.9942028.
Cerminara, N. L., Apps, R., & Marple-Horvat, D. E. (2009). An internal model of a moving visual target in the lateral cerebellum. The Journal of Physiology, 587 (2), 429–442, https://doi.org/10.1113/jphysiol.2008.163337.
Cesqui, B., Mezzetti, M., Lacquaniti, F., & d'Avella, A. (2015). Gaze behavior in one-handed catching and its relation with interceptive performance: What the eyes can't tell. PLoS One, 10 (3), e0119445, https://doi.org/10.1371/journal.pone.0119445.
Clarke, A. H. (2008). Listing's plane and the otolith-mediated gravity vector. Progress in Brain Research, 171, 291–294, https://doi.org/10.1016/S0079-6123(08)00642-0.
Clarke, A. H., & Haslwanter, T. (2007). The orientation of Listing's plane in microgravity. Vision Research, 47, 3132–3140, https://doi.org/10.1016/j.visres.2007.09.001.
Collewijn, H., & Tamminga, E. P. (1984). Human smooth and saccadic eye movements during voluntary pursuit of different target motions on different backgrounds. The Journal of Physiology, 351, 217–250, https://doi.org/10.1113/jphysiol.1984.sp015242.
Collins, C. J. S., & Barnes, G. R. (2006). The occluded onset pursuit paradigm: Prolonging anticipatory smooth pursuit in the absence of visual feedback. Experimental Brain Research, 175, 11–20, https://doi.org/10.1007/s00221-006-0527-2.
Collins, C. J. S., & Barnes, G. R. (2009). Predicting the unpredictable: Weighted averaging of past stimulus timing facilitates ocular pursuit of randomly timed stimuli. The Journal of Neuroscience, 29 (42), 13302–13014, https://doi.org/10.1523/JNEUROSCI.1636-09.2009.
Crespi, S., Robino, C., Silva, O., & de'Sperati, C. (2012). Spotting expertise in the eyes: Billiards knowledge as revealed by gaze shifts in a dynamic visual prediction task. Journal of Vision, 12 (11): 30, 1–19, https://doi.org/10.1167/12.11.30.
de Brouwer, S., Missal, M., Barnes, G., & Lefèvre, P. (2002). Quantitative analysis of catch-up saccades during sustained pursuit. Journal of Neurophysiology, 87 (4), 1772–1780, https://doi.org/10.1152/jn.00621.2001.
de Brouwer, S., Missal, M., & Lefèvre, P. (2001). Role of retinal slip in the prediction of target motion during smooth and saccadic pursuit. Journal of Neurophysiology, 86 (2), 550–558, https://doi.org/10.1152/jn.2001.86.2.550.
Delle Monache, S., Lacquaniti, F., & Bosco, G. (2015). Eye movements and manual interception of ballistic trajectories: Effects of law of motion perturbations and occlusions. Experimental Brain Research, 233 (2), 359–374, https://doi.org/10.1007/s00221-014-4120-9.
Delle Monache, S., Lacquaniti, F., & Bosco, G. (2017). Differential contributions to the interception of occluded ballistic trajectories by the temporoparietal junction, area hMT/V5+, and the intraparietal cortex. Journal of Neurophysiology, 118 (3), 1809–1823, https://doi.org/10.1152/jn.00068.2017.
Dessing, J. C., Oostwoud Wijdenes, L., Peper, C. E., & Beek, P. J. (2009). Visuomotor transformation for interception: Catching while fixating. Experimental Brain Research, 196 (4), 511–527, https://doi.org/10.1007/s00221-009-1882-6.
Diaz, G., Cooper, J., & Hayhoe, M. (2013). Memory and prediction in natural gaze control. Philosophical Transactions of the Royal Society B: Biological Sciences, 368 (1628), 20130064, https://doi.org/10.1098/rstb.2013.0064.
Diaz, G., Cooper, J., Rothkopf, C., & Hayhoe, M. (2013). Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task. Journal of Vision, 13 (1): 20, 1–14, https://doi.org/10.1167/13.1.20.
Dieterich, M., & Brandt, T. (2015). The bilateral central vestibular system: Its pathways, functions, and disorders. Annals of the New York Academy of Sciences, 1343 (1), 10–26, https://doi.org/10.1111/nyas.12585.
Eskandar, E. N., & Assad, J. A. (1999). Dissociation of visual, motor and predictive signals in parietal cortex during visual guidance. Nature Neuroscience, 2 (1), 88–93, https://doi.org/10.1038/4594.
Ferrera, V. P., & Barborica, A. (2010). Internally generated error signals in monkey frontal eye field during an inferred motion task. The Journal of Neuroscience, 30 (35), 11612–11623, https://doi.org/10.1523/JNEUROSCI.2977-10.2010.
Fiori, F., Candidi, M., Acciarino, A., David, N., & Aglioti, S. M. (2015). The right temporoparietal junction plays a causal role in maintaining the internal representation of verticality. Journal of Neurophysiology, 114 (5), 2983–2990, https://doi.org/10.1152/jn.00289.2015.
Fooken, J., Yeo, S. H., Pai, D. K., & Spering, M. (2016). Eye movement accuracy determines natural interception strategies. Journal of Vision, 16 (14): 1, 1–15, https://doi.org/10.1167/16.14.1.
Fujikado, T., & Noda, H. (1987). Saccadic eye movements evoked by microstimulation of lobule VII of the cerebellar vermis of macaque monkeys. The Journal of Physiology, 394 (1), 573–594, https://doi.org/10.1113/jphysiol.1987.sp016885.
Fukushima, K., Akao, T., Shichinohe, N., Nitta, T., Kurkin, S., & Fukushima, J. (2008). Predictive signals in the pursuit area of the monkey frontal eye fields. Progress in Brain Research, 171, 433–440, https://doi.org/10.1016/S0079-6123(08)00664-X.
Fukushima, K., Yamanobe, T., Shinmei, Y., & Fukushima, J. (2002). Predictive responses of periarcuate pursuit neurons to visual target motion. Experimental Brain Research, 145 (1), 104–120, https://doi.org/10.1007/s00221-002-1088-7.
Gerardin, P., Gaveau, V., Pélisson, D., & Prablanc, C. (2011). Integration of visual information for saccade production. Human Movement Science, 30 (6), 1009–1021, https://doi.org/10.1016/j.humov.2011.01.004.
Gielen, C. C., Dijkstra, T. M., Roozen, I. J., & Welten, J. (2009). Coordination of gaze and hand movements for tracking and tracing in 3D. Cortex, 45 (3), 340–355, https://doi.org/10.1016/j.cortex.2008.02.009.
Green, A. M., & Angelaki, D. E. (2010a). Internal models and neural computation in the vestibular system. Experimental Brain Research, 200 (3–4), 197–222, https://doi.org/10.1007/s00221-009-2054-4.
Green, A. M., & Angelaki, D. E. (2010b). Multisensory integration: Resolving sensory ambiguities to build novel representations. Current Opinion in Neurobiology, 20 (3), 353–360, https://doi.org/10.1016/j.conb.2010.04.009.
Guldin, W. O., & Grüsser, O. J. (1998). Is there a vestibular cortex? Trends in Neurosciences, 21 (6), 254–259, https://doi.org/10.1016/S0166-2236(97)01211-3.
Hardiess, G., Hansmann-Roth, S., & Mallot, H. A. (2013). Gaze movements and spatial working memory in collision avoidance: A traffic intersection task. Frontiers in Behavioral Neuroscience, 7, 62, https://doi.org/10.3389/fnbeh.2013.00062.
Helsen, W. F., Elliott, D., Starkes, J. L., & Ricker, K. L. (2000). Coupling of eye, finger, elbow, and shoulder movements during manual aiming. Journal of Motor Behavior, 32 (3), 241–248, https://doi.org/10.1080/00222890009601375.
Howard, I. P., & Marton, C. (1992). Visual pursuit over textured backgrounds in different depth planes. Experimental Brain Research, 90 (3), 625–629, https://doi.org/10.1007/BF00230947.
Indovina, I., Maffei, V., Bosco, G., Zago, M., Macaluso, E., & Lacquaniti, F. (2005, April 15). Representation of visual gravitational motion in the human vestibular cortex. Science, 308 (5720), 416–419, https://doi.org/10.1126/science.1107961.
Indovina, I., Maffei, V., Pauwels, K., Macaluso, E., Orban, G. A., & Lacquaniti, F. (2013). Simulated self-motion in a visual gravity field: Sensitivity to vertical and horizontal heading in the human brain. NeuroImage, 71, 114–124, https://doi.org/10.1016/j.neuroimage.2013.01.005.
Indovina, I., Mazzarella, E., Maffei, V., Cesqui, B., Passamonti, L., & Lacquaniti, F. (2015). Sound-evoked vestibular stimulation affects the anticipation of gravity effects during visual self-motion. Experimental Brain Research, 233 (8), 2365–2371, https://doi.org/10.1007/s00221-015-4306-9.
Johansson, R. S., Westling, G., Bäckström, A., & Flanagan, J. R. (2001). Eye-hand coordination in object manipulation. The Journal of Neuroscience, 21 (17), 6917–6932, https://doi.org/10.1523/JNEUROSCI.21-17-06917.2001.
Jörges, B., & López-Moliner, J. (2017). Gravity as a strong prior: Implications for perception and action. Frontiers in Human Neuroscience, 11, 1–16, https://doi.org/10.3389/fnhum.2017.00203.
Kattoulas, E., Smyrnis, N., Stefanis, N. C., Avramopoulos, D., Stefanis, C. N., & Evdokimidis, I. (2011). Predictive smooth eye pursuit in a population of young men: I. Effects of age, IQ, oculomotor and cognitive tasks. Experimental Brain Research, 215 (3–4), 207–218, https://doi.org/10.1007/s00221-011-2887-5.
Keller, E. L., & Khan, N. S. (1986). Smooth-pursuit initiation in the presence of a textured background in monkey. Vision Research, 26 (6), 943–955, https://doi.org/10.1016/0042-6989(86)90152-5.
Konen, C. S., Kleiser, R., Seitz, R. J., & Bremmer, F. (2005). An fMRI study of optokinetic nystagmus and smooth-pursuit eye movements in humans. Experimental Brain Research, 165 (2), 203–216, https://doi.org/10.1007/s00221-005-2289-7.
Kowler, E. (1989). Cognitive expectations, not habits, control anticipatory smooth oculomotor pursuit. Vision Research, 29 (9), 1049–1057, https://doi.org/10.1016/0042-6989(89)90052-7.
Kowler, E., Aitkin, C. D., Ross, N. M., Santos, E. M., & Zhao, M. (2014). Davida Teller Award Lecture 2013: The importance of prediction and anticipation in the control of smooth pursuit eye movements. Journal of Vision, 14 (5): 10, 1–16, https://doi.org/10.1167/14.5.10.
Kowler, E., Martins, A. J., & Pavel, M. (1984). The effect of expectations on slow oculomotor control—IV. Anticipatory smooth eye movements depend on prior target motions. Vision Research, 24 (3), 197–210, https://doi.org/10.1016/0042-6989(84)90122-6.
Krauzlis, R. J., & Miles, F. A. (1998). Role of the oculomotor vermis in generating pursuit and saccades: Effects of microstimulation. Journal of Neurophysiology, 80 (4), 2046–2062, https://doi.org/10.1152/jn.1998.80.4.2046.
Kreyenmeier, P., Fooken, J., & Spering, M. (2017). Context effects on smooth pursuit and manual interception of a disappearing target. Journal of Neurophysiology, 118 (1), 404–415, https://doi.org/10.1152/jn.00217.2017.
Lacquaniti, F., Bosco, G., Gravano, S., Indovina, I., La Scaleia, B., Maffei, V., & Zago, M. (2014). Multisensory integration and internal models for sensing gravity effects in primates. BioMed Research International, 2014, 1–10, https://doi.org/10.1155/2014/615854.
Lacquaniti, F., Bosco, G., Gravano, S., Indovina, I., La Scaleia, B., Maffei, V., & Zago, M. (2015). Gravity in the brain as a reference for space and time perception. Multisensory Research, 28 (5–6), 397–426, https://doi.org/10.1163/22134808-00002471.
Land, M. F., & McLeod, P. (2000). From eye movements to actions: How batsmen hit the ball. Nature Neuroscience, 3 (12), 1340–1345, https://doi.org/10.1038/81887.
Land, M., Mennie, N., & Rusted, J. (1999). The roles of vision and eye movements in the control of activities of daily living. Perception, 28 (11), 1311–1328, https://doi.org/10.1068/p2935.
La Scaleia, B., Lacquaniti, F., & Zago, M. (2014). Neural extrapolation of motion for a ball rolling down an inclined plane. PLoS One, 9 (6), e99837, https://doi.org/10.1371/journal.pone.0099837.
La Scaleia, B., Zago, M., & Lacquaniti, F. (2015). Hand interception of occluded motion in humans: A test of model-based vs. on-line control. Journal of Neurophysiology, 114 (3), 1577–1592, https://doi.org/10.1152/jn.00475.2015.
Li, Y., Wang, Y., & Cui, H. (2018). Eye-hand coordination during flexible manual interception of an abruptly appearing, moving target. Journal of Neurophysiology, 119 (1), 221–234, https://doi.org/10.1152/jn.00476.2017.
Lindner, A., Schwarz, U., & Ilg, U. J. (2001). Cancellation of self-induced retinal image motion during smooth pursuit eye movements. Vision Research, 41 (13), 1685–1694, https://doi.org/10.1016/S0042-6989(01)00050-5.
Lisberger, S. G., Morris, E. J., & Tychsen, L. (1987). Visual motion processing and sensory-motor integration for smooth pursuit eye movements. Annual Review of Neuroscience, 10, 97–129, https://doi.org/10.1146/annurev.ne.10.030187.000525.
López-Moliner, J., & Brenner, E. (2016). Flexible timing of eye movements when catching a ball. Journal of Vision, 16 (5): 13, 1–11, https://doi.org/10.1167/16.5.13.
Madelain, L., & Krauzlis, R. J. (2003). Effects of learning on smooth pursuit during transient disappearance of a visual target. Journal of Neurophysiology, 90 (2), 972–982, https://doi.org/10.1152/jn.00869.2002.
Maffei, V., Indovina, I., Macaluso, E., Ivanenko, Y. P., Orban, G. A., & Lacquaniti, F. (2015). Visual gravity cues in the interpretation of biological movements: Neural correlates in humans. NeuroImage, 104, 221–230, https://doi.org/10.1016/j.neuroimage.2014.10.006.
Maffei, V., Macaluso, E., Indovina, I., Orban, G., & Lacquaniti, F. (2010). Processing of targets in smooth or apparent motion along the vertical in the human brain: An fMRI study. Journal of Neurophysiology, 103 (1), 360–370, https://doi.org/10.1152/jn.00892.2009.
Makin, A. D., Poliakoff, E., Chen, J., & Stewart, A. J. (2008). The effect of previously viewed velocities on motion extrapolation. Vision Research, 48 (18), 1884–1893, https://doi.org/10.1016/j.visres.2008.05.023.
Makin, A. D., Poliakoff, E., & El-Deredy, W. (2009). Tracking visible and occluded targets: Changes in event related potentials during motion extrapolation. Neuropsychologia, 47 (4), 1128–1137, https://doi.org/10.1016/j.neuropsychologia.2009.01.010.
Masson, G., Proteau, L., & Mestre, D. R. (1995). Effects of stationary and moving textured backgrounds on the visuo-oculo-manual tracking in humans. Vision Research, 35 (6), 837–852, https://doi.org/10.1016/0042-6989(94)00185-O.
McElligott, J. G., & Keller, E. L. (1984). Cerebellar vermis involvement in monkey saccadic eye movements: Microstimulation. Experimental Neurology, 86 (3), 543–558, https://doi.org/10.1016/0014-4886(84)90088-8.
McIntyre, J., Zago, M., Berthoz, A., & Lacquaniti, F. (2001). Does the brain model Newton's laws? Nature Neuroscience, 4, 693–694, https://doi.org/10.1038/89477.
McLeod, P., Reed, N., & Dienes, Z. (2006). The generalized optic acceleration cancellation theory of catching. Journal of Experimental Psychology: Human Perception and Performance, 32 (1), 139–148, https://doi.org/10.1037/0096-1523.32.1.139.
McLeod, P., Reed, N., Gilson, S., & Glennerster, A. (2008). How soccer players head the ball: A test of optic acceleration cancellation theory with virtual reality. Vision Research, 48 (13), 1479–1487, https://doi.org/10.1016/j.visres.2008.03.016.
Merfeld, D. M., Zupan, L., & Peterka, R. J. (1999, April 15). Humans use internal models to estimate gravity and linear acceleration. Nature, 398 (6728), 615–618, https://doi.org/10.1038/19303.
Miller, W. L., Maffei, V., Bosco, G., Iosa, M., Zago, M., Macaluso, E., & Lacquaniti, F. (2008). Vestibular nuclei and cerebellum put visual gravitational motion in context. Journal of Neurophysiology, 99 (4), 1969–1982, https://doi.org/10.1152/jn.00889.2007.
Mitrani, L., & Dimitrov, G. (1978). Pursuit eye movements of a disappearing moving target. Vision Research, 18 (5), 537–539, https://doi.org/10.1016/0042-6989(78)90199-2.
Mohrmann, H., & Thier, P. (1995). The influence of structured visual backgrounds on smooth-pursuit initiation, steady-state pursuit and smooth-pursuit termination. Biological Cybernetics, 73 (1), 83–93, https://doi.org/10.1007/BF00199058.
Morris, E. J., & Lisberger, S. G. (1987). Different responses to small visual errors during initiation and maintenance of smooth-pursuit eye movements in monkeys. Journal of Neurophysiology, 58, 1351–1369, https://doi.org/10.1152/jn.1987.58.6.1351.
Moscatelli, A., & Lacquaniti, F. (2011). The weight of time: Gravitational force enhances discrimination of visual motion duration. Journal of Vision, 11 (4): 5, 1–17, https://doi.org/10.1167/11.4.5.
Moscatelli, A., Mezzetti, M., & Lacquaniti, F. (2012). Modeling psychophysical data at the population-level: The generalized linear mixed model. Journal of Vision, 12 (11): 26, 1–17, https://doi.org/10.1167/12.11.26.
Mrotek, L. A., & Soechting, J. F. (2007). Predicting curvilinear target motion through an occlusion. Experimental Brain Research, 178 (1), 99–114, https://doi.org/10.1007/s00221-006-0717-y.
Müri, R. M. (2006). MRI and fMRI analysis of oculomotor function. Progress in Brain Research, 151, 503–526, https://doi.org/10.1016/S0079-6123(05)51016-1.
Nagel, M., Sprenger, A., Zapf, S., Erdmann, C., Kömpf, D., Heide, W.,… Lencer, R. (2006). Parametric modulation of cortical activation during smooth pursuit with and without target blanking: An fMRI study. NeuroImage, 29 (4), 1319–1325, https://doi.org/10.1016/j.neuroimage.2005.08.050.
Neggers, S. F., & Bekkering, H. (2000). Ocular gaze is anchored to the target of an ongoing pointing movement. Journal of Neurophysiology, 83 (2), 639–651, https://doi.org/10.1152/jn.2000.83.2.639.
Neggers, S. F., & Bekkering, H. (2001). Gaze anchoring to a pointing target is present during the entire pointing movement and is driven by a non-visual signal. Journal of Neurophysiology, 86 (2), 961–970, https://doi.org/10.1152/jn.2001.86.2.961.
Niemann, T., & Hoffmann, K. P. (1997). The influence of stationary and moving textured backgrounds on smooth-pursuit initiation and steady state pursuit in humans. Experimental Brain Research, 115 (3), 531–540, https://doi.org/10.1007/PL00005723.
Nooij, S. A., Bos, J. E., & Groen, E. L. (2008). Orientation of Listing's plane after hypergravity in humans. Journal of Vestibular Research, 18 (2–3), 97–105.
O'Driscoll, G. A., Wolff, A. L. V., Benkelfat, C., Florencio, P. S., Lal, S., & Evans, A. C. (2000). Functional neuroanatomy of smooth pursuit and predictive saccades. NeuroReport, 11 (6), 1335–1340, https://doi.org/10.1097/00001756-200004270-00037.
Olson, I. R., Gatenby, J. C., Leung, H. C., Skudlarski, P., & Gore, J. C. (2004). Neuronal representation of occluded objects in the human brain. Neuropsychologia, 42 (1), 95–104, https://doi.org/10.1016/S0028-3932(03)00151-9.
Orban de Xivry, J. J., Bennett, S. J., Lefèvre, P., & Barnes, G. R. (2006). Evidence for synergy between saccades and smooth pursuit during transient target disappearance. Journal of Neurophysiology, 95 (1), 418–427, https://doi.org/10.1152/jn.00596.2005.
Orban de Xivry, J. J., & Lefèvre, P. (2007). Saccades and pursuit: Two outcomes of a single sensorimotor process. The Journal of Physiology, 584 (1), 11–23, https://doi.org/10.1113/jphysiol.2007.139881.
Orban de Xivry, J. J., Missal, M., & Lefèvre, P. (2008). A dynamic representation of target motion drives predictive smooth pursuit during target blanking. Journal of Vision, 8 (15): 6, 1–13, https://doi.org/10.1167/8.15.6.
O'Reilly, J. X., Mesulam, M. M., & Nobre, A. C. (2008). The cerebellum predicts the timing of perceptual events. The Journal of Neuroscience, 28 (9), 2252–2260, https://doi.org/10.1523/JNEUROSCI.2742-07.2008.
Pelz, J., Hayhoe, M., & Loeber, R. (2001). The coordination of eye, head, and hand movements in a natural task. Experimental Brain Research, 139 (3), 266–277, https://doi.org/10.1007/s002210100745.
Pola, J., & Wyatt, H. J. (1997). Offset dynamics of human smooth pursuit eye movements: Effects of target presence and subject attention. Vision Research, 37 (18), 2579–2595, https://doi.org/10.1016/S0042-6989(97)00058-8.
Russo, M., Cesqui, B., La Scaleia, B., Ceccarelli, F., Maselli, A., Moscatelli, A.,… D'Avella, A. (2017). Intercepting virtual balls approaching under different gravity conditions: Evidence for spatial prediction. Journal of Neurophysiology, 118 (4), 2421–2434, https://doi.org/10.1152/jn.00025.2017.
Santos, E. M., & Kowler, E. (2017). Anticipatory smooth pursuit eye movements evoked by probabilistic cues. Journal of Vision, 17 (13): 13, 1–16, https://doi.org/10.1167/17.13.13. [PubMed] [Article]
Sarpeshkar, V., Abernethy, B., & Mann, D. L. (2017). Visual strategies underpinning the development of visual-motor expertise when hitting a ball. Journal of Experimental Psychology: Human Perception and Performance, 43 (10), 1744–1772, https://doi.org/10.1037/xhp0000465.
Schmitt, C., Klingenhoefer, S., & Bremmer, F. (2018). Preattentive and predictive processing of visual motion. Scientific Reports, 8 (1), 12399, https://doi.org/10.1038/s41598-018-30832-9.
Senot, P., Zago, M., Lacquaniti, F., & McIntyre, J. (2005). Anticipating the effects of gravity when intercepting moving objects: Differentiating up and down based on nonvisual cues. Journal of Neurophysiology, 94 (6), 4471–4480, https://doi.org/10.1152/jn.00527.2005.
Senot, P., Zago, M., Le Seac'h, A., Zaoui, M., Berthoz, A., Lacquaniti, F., & McIntyre, J. (2012). When up is down in 0g: How gravity sensing affects the timing of interceptive actions. The Journal of Neuroscience, 32 (6), 1969–1973, https://doi.org/10.1523/JNEUROSCI.3886-11.2012.
Shagass, C., Roemer, R. A., & Amadeo, M. (1976). Eye-tracking performance and engagement of attention. Archives of General Psychiatry, 33 (1), 121–125, https://doi.org/10.1001/archpsyc.1976.01770010077015.
Souto, D., & Kerzel, D. (2013). Like a rolling stone: Naturalistic visual kinematics facilitate tracking eye movements. Journal of Vision, 13 (2): 9, 1–12, https://doi.org/10.1167/13.2.9. [PubMed] [Article]
Spering, M., & Gegenfurtner, K. R. (2007). Contextual effects on smooth-pursuit eye movements. Journal of Neurophysiology, 97 (2), 1353–1367, https://doi.org/10.1152/jn.01087.2006.
Spering, M., Schütz, A. C., Braun, D. I., & Gegenfurtner, K. R. (2011). Keep your eyes on the ball: Smooth pursuit eye movements enhance prediction of visual motion. Journal of Neurophysiology, 105 (4), 1756–1767, https://doi.org/10.1152/jn.00344.2010.
Suzuki, D. A., & Keller, E. L. (1988). The role of the posterior vermis of monkey cerebellum in smooth-pursuit eye movement control. II. Target velocity-related Purkinje cell activity. Journal of Neurophysiology, 59 (1), 19–40, https://doi.org/10.1152/jn.1988.59.1.19.
Tanabe, J., Tregellas, J., Miller, D., Ross, R. G., & Freedman, R. (2002). Brain activation during smooth-pursuit eye movements. NeuroImage, 17 (3), 1315–1324, https://doi.org/10.1006/nimg.2002.1263.
Worfolk, R., & Barnes, G. R. (1992). Interaction of active and passive slow eye movement systems. Experimental Brain Research, 90 (3), 589–598, https://doi.org/10.1007/BF00230943.
Xiao, Q., Barborica, A., & Ferrera, V. P. (2007). Modulation of visual responses in macaque frontal eye field during covert tracking of invisible targets. Cerebral Cortex, 17 (4), 918–928, https://doi.org/10.1093/cercor/bhl002.
Yee, R. D., Daniels, S. A., Jones, O. W., Baloh, R. W., & Honrubia, V. (1983). Effects of an optokinetic background on pursuit eye movements. Investigative Ophthalmology & Visual Science, 24 (8), 1115–1122. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/6874275.
Zago, M., Bosco, G., Maffei, V., Iosa, M., Ivanenko, Y. P., & Lacquaniti, F. (2004). Internal models of target motion: Expected dynamics overrides measured kinematics in timing manual interceptions. Journal of Neurophysiology, 91 (4), 1620–1634, https://doi.org/10.1152/jn.00862.2003.
Zago, M., Bosco, G., Maffei, V., Iosa, M., Ivanenko, Y. P., & Lacquaniti, F. (2005). Fast adaptation of the internal model of gravity for manual interceptions: Evidence for event-dependent learning. Journal of Neurophysiology, 93 (2), 1055–1068, https://doi.org/10.1152/jn.00833.2004.
Zago, M., Iosa, M., Maffei, V., & Lacquaniti, F. (2010). Extrapolation of vertical target motion through a brief visual occlusion. Experimental Brain Research, 201 (3), 365–384, https://doi.org/10.1007/s00221-009-2041-9.
Zago, M., La Scaleia, B., Miller, W. L., & Lacquaniti, F. (2011). Coherence of structural visual cues and pictorial gravity paves the way for interceptive actions. Journal of Vision, 11 (10): 13, 1–10, https://doi.org/10.1167/11.10.13. [PubMed] [Article]
Zago, M., McIntyre, J., Senot, P., & Lacquaniti, F. (2008). Internal models and prediction of visual gravitational motion. Vision Research, 48 (14), 1532–1538, https://doi.org/10.1016/j.visres.2008.04.005.
Zago, M., McIntyre, J., Senot, P., & Lacquaniti, F. (2009). Visuo-motor coordination and internal models for object interception. Experimental Brain Research, 192 (4), 571–604, https://doi.org/10.1007/s00221-008-1691-3.
Figure 1
 
Visual scenes for the ocular-tracking task. (A) Pictorial scenario. The scene reproduced a fly-ball play of a baseball game and spanned 42.86° × 26.79° visual angle. Ball motion (white circle, enlarged slightly for illustration purposes) started from the batter at the bottom left of the scene and landed on the right half of the scene following a parabolic path (magenta dotted trace, shown here for illustrative purposes but never appearing on-screen). Stationary graphic elements, such as the baseball field's perimeter, the players, and the landscape, provided perspective view and metric cues. (B) Neutral scenario. The same ball motion as presented in the pictorial scene was projected over a uniform gray background, with average luminance matched to the pictorial scenario (27 cd/m2). The oriented orange rectangle, located at the same pixel coordinates as the batter in the pictorial scene, symbolized a ball launcher.
Figure 2
 
Ball ballistic trajectories. The ascending segment was modeled by accounting for Earth gravity and air-drag effects, scaled to the metrics of the pictorial scene. The descending segment either retained the same level of gravity (unperturbed 1g trajectories, red traces) or was perturbed with simulated micro- (0g, blue traces) or hypergravity (2g, green traces) effects. Crosses and filled circles indicate perturbation and visual-occlusion onsets, respectively. Crosses indicating the perturbation onsets are illustrated also on unperturbed 1g trajectories, since they were used as temporal markers for the eye-movement analyses and to define the onsets of the visual occlusions. Visual occlusions began 550 ms after the temporal markers of the perturbation and lasted either 450 or 650 ms (the open circles and squares mark the end of the 450- and 650-ms intervals, respectively). (A) Trajectories perturbed 1,750 ms before landing. (B) Trajectories perturbed 1,500 ms before landing.
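The trajectory model in this caption can be sketched in a few lines of code. This is an illustrative reconstruction, not the authors' implementation: the ascending segment is integrated under Earth gravity plus linear air drag, and from an assumed perturbation time during the descent the gravitational acceleration is scaled by 0, 1, or 2. Launch speed, launch angle, drag coefficient, perturbation time, and time step are all assumed values chosen for illustration only.

```python
import math

G = 9.81      # Earth gravity (m/s^2)
DRAG = 0.05   # assumed linear drag coefficient (1/s)
DT = 0.001    # integration time step (s)

def simulate(v0=30.0, angle_deg=60.0, t_pert=3.0, g_scale=1.0):
    """Euler-integrate a 2-D ballistic flight. From t_pert onward
    (which falls during the descent for these parameters) gravity is
    scaled by g_scale (0.0, 1.0, or 2.0). Returns x and y samples (m)."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x, y, t = 0.0, 0.0, 0.0
    xs, ys = [x], [y]
    while y >= 0.0 and t < 30.0:           # stop at landing (safety cap on time)
        g = G if t < t_pert else g_scale * G
        vx += -DRAG * vx * DT              # drag decelerates both axes
        vy += (-g - DRAG * vy) * DT        # gravity acts on the vertical axis
        x += vx * DT
        y += vy * DT
        t += DT
        xs.append(x)
        ys.append(y)
    return xs, ys
```

Under these assumed parameters, a 0g descent prolongs the flight and lengthens the range, while a 2g descent shortens both, qualitatively matching the three trajectory families shown in the figure.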
Figure 3
 
Perturbation and occlusion time windows used for eye-movement analysis. Horizontal and vertical eye-position traces (blue) recorded from one subject during tracking of one 0g trajectory (red traces) are illustrated in the top and bottom panels, respectively. Vertical solid, dashed, and dash-dotted lines indicate the ball-trajectory perturbation, occlusion, and reappearance events, respectively. The green transparency delimits the perturbation interval, from 100 ms after the perturbation onset until the ball disappearance. The red transparency corresponds to the occlusion interval, lasting 350 ms from the time 100 ms after the ball disappearance.
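The two analysis windows defined in this caption can be written as a small helper. This is a sketch for clarity only: event times are assumed to be expressed in milliseconds relative to trial onset, and the function name is hypothetical.

```python
# Analysis windows as defined in the Figure 3 caption:
# - perturbation window: from 100 ms after perturbation onset
#   until occlusion onset (ball disappearance)
# - occlusion window: 350 ms long, starting 100 ms after occlusion onset

def analysis_windows(t_perturbation, t_occlusion):
    """Return (start, end) pairs, in ms, for the two analysis intervals."""
    perturbation_win = (t_perturbation + 100, t_occlusion)
    occlusion_win = (t_occlusion + 100, t_occlusion + 100 + 350)
    return perturbation_win, occlusion_win
```

For example, with the perturbation at 0 ms and the occlusion beginning 550 ms later (as in the protocol), the windows are (100, 550) and (650, 1000) ms.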
Figure 4
 
Ocular-tracking parameters during the perturbation interval. (A) In the left panel, mean post-saccadic errors computed among subjects by pooling experimental conditions with the same gravity level are plotted against the gravity level (±SEM). The right panel shows the mean post-saccadic error (±SEM) computed, separately for G1 and G2 subjects, across experimental conditions of a given block with either the neutral or the pictorial scenario. Mean values in the pictorial and neutral scenarios are indicated by black filled circles and gray filled triangles, respectively. (B–D) Same layout as in (A) for, respectively, saccadic frequency, smooth-pursuit τ, and gain values.
Figure 5
 
Ocular-tracking parameters during the occlusion interval. Same layout as Figure 4.
Figure 6
 
Post-reappearance errors. In the left panel, mean post-reappearance errors computed among subjects by pooling experimental conditions with the same gravity level are plotted against the gravity level (±SEM). The right panel shows the mean post-reappearance error (±SEM) computed, separately for G1 and G2 subjects, across experimental conditions of a given block with either the neutral or the pictorial scenario. Mean values in the pictorial and neutral scenarios are indicated by black filled circles and gray filled triangles, respectively.
Figure 7
 
Reaction-time responses. Mean reaction-time values (±SEM) in the pictorial (black filled circles) and neutral (gray filled triangles) scenarios were computed separately for the two subject groups that experienced either the pictorial (G1) or the neutral (G2) scenario first.
Table 1
 
Results of generalized linear mixed model analyses on indexes derived from (A) saccadic movements (post-saccadic error and saccadic frequency) and (B) smooth-pursuit movements (τ and gain) during the perturbation temporal window. Statistically significant fixed effects (p < 0.01) are in bold.
Table 2
 
Results of generalized linear mixed model analyses on indexes derived from (A) saccadic movements (post-saccadic error and saccadic frequency) and (B) smooth-pursuit movements (τ and gain) during the occlusion temporal window. Statistically significant fixed effects (p < 0.01) are in bold.
Table 3
 
Results of generalized linear mixed model analyses on the post-reappearance error. Statistically significant fixed effects (p < 0.01) are in bold.