Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task
Gabriel Diaz, Joseph Cooper, Constantin Rothkopf, Mary Hayhoe
Journal of Vision January 2013, Vol. 13, 20. https://doi.org/10.1167/13.1.20
Abstract
Despite general agreement that prediction is a central aspect of perception, there is relatively little evidence concerning the basis on which visual predictions are made. Although both saccadic and pursuit eye-movements reveal knowledge of the future position of a moving visual target, in many of these studies targets move along simple trajectories through a fronto-parallel plane. Here, using a naturalistic and racquet-based interception task in a virtual environment, we demonstrate that subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. Furthermore, we found that, following a change in ball elasticity, subjects were able to accurately adjust their prebounce predictions of the ball's post-bounce trajectory. This suggests that prediction is guided by experience-based models of how information in the visual image will change over time.

Introduction
The existence of significant sensory-motor delays in the nervous system presents a particular problem to a reactive organism in a dynamically changing environment. It takes approximately 100 ms for movements to be updated on the basis of somatosensory feedback (Flanders & Cordo, 1989) and 150–200 ms for visual feedback (Georgopoulos, Kalaska, & Massey, 1981; Miall, Weir, Wolpert, & Stein, 1993; Saunders & Knill, 2003). How is it that, despite these delays, humans are able to intercept moving targets with a temporal precision of milliseconds and spatial precision of centimeters? One possible explanation is that humans can predict the future state of the environment using an experience-based model of how the current state is likely to change over time. For example, the proprioceptive consequences of a planned movement are predicted ahead of time using stored internal models of the body's dynamics (Mulliken & Andersen, 2009; Shadmehr, Smith, & Krakauer, 2010; Wolpert, Miall, & Kawato, 1998). It is also commonly assumed that prediction is a fundamental aspect of visual perception (Gregory, 1980; Von Helmholtz, 1962). However, the existence of visual prediction and the particular mechanisms underlying such prediction are still unclear (Enns & Lleras, 2008; W. Warren, 2006; Zago, McIntyre, Senot, & Lacquaniti, 2009). 
Some of the best evidence for prediction in vision comes from the oculomotor system. In this case, both smooth pursuit and saccadic eye movements reveal prediction of the future visual stimulus (Becker & Fuchs, 1985; Findlay, 1981; Kowler, Martins, & Pavel, 1984; Shelhamer & Joiner, 2003). For example, subjects attempting to pursue a target that is briefly occluded during movement in the fronto-parallel plane often make anticipatory movements to the point of target reappearance. Pursuit gain typically diminishes towards zero during the occlusion period, but recovers near the time when subjects expect the target to reappear and is scaled to the expected target velocity (Bennett & Barnes, 2003, 2004; Madelain & Krauzlis, 2003; J.-J. Orban de Xivry, Bennett, Lefèvre, & Barnes, 2006). Under certain circumstances, these predictions may also incorporate acceleration information (Bennett, Orban de Xivry, Barnes, & Lefèvre, 2007; Brouwer, Brenner, & Smeets, 2002). Cognitive factors may also play a role. When pursuing an object moving along a track and towards a junction with two branching target trajectories, subjects continue uninterrupted pursuit along the path implied by visible barriers or auditory cues (Kowler, 1989; Kowler & Martins, 1984). Pursuit also incorporates memory for target motion accumulated over previous experience (Barborica & Ferrera, 2003; Barnes & Collins, 2008; Tabata, Miura, & Kawano, 2007). For example, subjects demonstrate accurate pursuit when presented with a succession of targets that move along different trajectories and at different velocities in a predictable sequence that is learned over multiple trials (Barnes, Collins, & Arnold, 2005). Evidence of prediction has also been found in the saccadic system. If a target is briefly occluded, subjects make predictive saccades to the expected point of target reappearance (Barborica & Ferrera, 2003; Bennett & Barnes, 2006). This is also true for targets moving along curvilinear trajectories (Mrotek & Soechting, 2007; J. J. Orban de Xivry, Missal, & Lefèvre, 2008) and when a transiently occluded target reflects off of an angled line during occlusion (Ferrera & Barborica, 2010). 
Although the majority of research on the basis of prediction has focused on pursuit movements, recent evidence suggests that the locus of the information used to guide predictive pursuit and saccadic eye movements is likely shared. For example, the frontal eye fields (FEF), an area associated with predictive smooth pursuit (Fukushima, Akao, Kurkin, Kaneko, & Fukushima, 2006), have also been implicated in the guidance of predictive saccades (Barborica & Ferrera, 2003; Ferrera & Barborica, 2010; Xiao, Barborica, & Ferrera, 2007). Efforts to identify the locus of these predictive mechanisms are complicated by shared reciprocal connections between FEF and the supplementary eye fields (SEF; Huerta, Krubitzer, & Kaas, 1987), an area thought to be involved with both predictive pursuit (Shichinohe et al., 2009; de Hemptinne, Lefèvre, & Missal, 2008) and predictive saccades (Nyffeler, Rivaud-Pechoux, Wattiez, & Gaymard, 2008). Based upon the involvement of both SEF and FEF in pursuit and saccadic eye movements, it has been suggested that the locus of prediction may exist at a higher, supramodal level (Nyffeler et al., 2008).
In summary, both behavioral and physiological evidence suggest a predictive process that is more complicated than the extrapolation of simple target trajectories. However, the majority of published work involves targets that travel along planar trajectories within a visibly defined frame of reference. Only a few studies have examined the more natural setting, in which the reliability of information used for simple extrapolation is affected by the freedom to make head movements and by target motion along realistic trajectories in depth. A notable exception is that athletes playing cricket, table-tennis, and squash have been found to make predictive eye movements to future locations along the ball's trajectory (M. M. Hayhoe, McKinney, Chajka, & Pelz, 2012; Land & Furneaux, 1997; Land & McLeod, 2000). For example, Land and McLeod (2000) found that experienced cricket batsmen made a saccade to the anticipated bounce point of the ball, arriving 100–200 ms before the ball bounced. However, this and other in-situ studies also raised several important questions that remain unaddressed. Most importantly, because the real-world paradigm precludes manipulation of the physical parameters of the ball's flight, little could be done to investigate what information was being used for prediction.
In the present experiment we sought to address these questions and to further understand the informational basis of prediction in a complex interception task under conditions where we could control the path of the ball. In particular, we asked what location subjects target and whether this target depends only on visual information from the current trajectory or also on learnt properties of ball dynamics. To do this, we created an immersive virtual racquetball environment, shown in Figure 1a, that approximates the natural world, affords parametric manipulation of ball trajectories, and allows free movement of both the eyes and body. In order to test whether prediction is a central aspect of normal visual behavior, we examined unskilled players. Subjects used a racquet tracked by motion capture to hit virtual balls that were projected in a parabolic arc consistent with the effects of gravity and that bounced once before their arrival. Balls varied in launch point, in the location of the bounce point on the court, and in the location where they passed the observer. To manipulate the balls' prebounce trajectory, each ball was launched so that the vertical component of its velocity at the time of the bounce took one of three speeds (7.5, 8.25, or 9 m/s; hereafter, bounce speed). To test whether performance reflects experience of ball dynamics, ball elasticity changed once, halfway through the experiment (from 0.58 to 0.73, or vice versa). Analysis focuses on saccades made just before the bounce that were followed by a fixation maintained across the bounce, as shown in Figure 2. Importantly, two balls with the same bounce speed have identical prebounce trajectories; if these balls differ in elasticity, however, they follow different trajectories after the bounce. Therefore, if a subject's prebounce saccades are predictive of the post-bounce trajectory, they should vary with ball elasticity. Indeed, spontaneously and without instruction, subjects made highly precise saccades that accounted for changes due to both ball velocity and elasticity to predict the time and location of the ball after the bounce. This suggests that prediction based on learnt internal models of dynamical properties is an important component both of eye movement targeting and of ordinary interceptive behavior.
Figure 1
 
(a) Subjects viewed a virtual racquetball court in a wide field-of-view head-mounted display. Launched balls were hit with a racquetball racquet that was tracked by a motion-capture system and represented visually in the virtual world. The image of the eye and crosshair depicting gaze-point, shown in the inset, were overlaid post-hoc, and were not visible to the subject. (b) A side view depicting a possible set of distributions of ball trajectories for a single session. Three distributions of prebounce trajectories differ in the vertical component of the ball's speed upon bounce. For each prebounce trajectory, there were two possible post-bounce trajectories depending on the level of elasticity, shown by solid and dotted lines.
Figure 2
 
The time-course of predictive gaze-patterns on a representative trial from the time of launch until the time of interception. The solid green line depicts the pitch of the ball from the plane that was located at eye-height and parallel with the floor. The dotted blue line depicts the pitch of the gaze-in-world vector extending from the eye, with the solid portion reflecting a prebounce saccade and fixation that were predictive of the ball's post-bounce location.
Methods
Participants
Seven males and one female with normal or corrected-to-normal vision participated in the experiment. All subjects were unskilled at racquetball, having either never played or played only an occasional game. Prior to participation, all subjects signed consent forms that had been approved by the institutional review board at the University of Texas at Austin.
Apparatus
Subjects viewed the virtual racquetball court through an NVis SX111 head-mounted display with a field of view of 102° horizontally and 64° vertically. The simulated court spanned 15 m in width and 12 m from the back wall to the front wall. The positions of a racquetball racquet, the VR helmet, and the subject's free hand were monitored using a 14-camera PhaseSpace motion-capture system running at 480 Hz. The total latency between a physical movement of the head or racquet and the corresponding update onscreen was less than 80 ms. The motion of the balls and their collisions with surfaces and with the virtual racquet were simulated using the Open Dynamics Engine (ODE). To ensure accurate collisions, the physics engine updated at a rate of 600 Hz, 10 times the 60 Hz visual refresh rate. Subjects were instructed to use a virtual racquet to hit balls at a target of concentric circles drawn on the front wall of the court. The ball diameter was 5.7 cm, and the height, width, and depth of the visible virtual racquet were 0.6, 0.3, and 0.01 m, respectively. However, to prevent tunneling issues in the detection of ball-to-racquet collisions, the thickness of the virtual racquet used for collision detection was exaggerated to 0.1 m.
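To make the update scheme concrete, the following sketch (Python, under our own assumptions; step_physics and render_frame are hypothetical callables, and the experiment itself drove ODE directly) sub-steps the physics ten times per rendered frame so that a fast-moving ball cannot skip past the thin racquet between collision checks:

RENDER_HZ = 60            # visual refresh rate of the head-mounted display
PHYSICS_SUBSTEPS = 10     # ten physics steps per rendered frame -> 600 Hz
PHYSICS_DT = 1.0 / (RENDER_HZ * PHYSICS_SUBSTEPS)

def run_frame(step_physics, render_frame, state):
    """Advance the simulation by one 60 Hz display frame."""
    for _ in range(PHYSICS_SUBSTEPS):
        # Small time steps reduce the chance that the ball tunnels through
        # the (thickened) collision geometry of the racquet.
        state = step_physics(state, PHYSICS_DT)
    render_frame(state)
    return state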
Procedure
Immediately after in-helmet calibration of the eye tracker, subjects were presented with a view of the virtual racquetball court. Subjects began each trial with their head inside a semi-opaque golden column, 1 m × 0.5 m × 1.7 m in size, that was centered 9 m from the front wall. The column was offset 0.75 m to the left of the room's midline for right-handed subjects and 0.75 m to the right for left-handed subjects. Because all balls passed near the room's midline, this ensured that all swings were forehand swings. Subjects triggered the timing of launches with their free hand: standing inside the golden column, they readied a ball for launch by touching the thumb and middle finger of their gloved hand together for a minimum of 500 ms. The golden column disappeared when the gesture began, and the ball appeared up-court at the location from which its trajectory would begin. The ball launched when the gesture was released, accompanied by auditory feedback. Subjects were informed that each ball would bounce once before its arrival and instructed that their primary goal was to hit the ball as close as possible to the center of a series of concentric circular targets visible on the far wall.
The experiment began with a practice set of nine trials, which was followed by four experimental blocks. On each experimental trial the ball was launched so that its vertical speed at the time of the bounce took one of three values (−7.5, −8.25, or −9 m/s). Each block contained 18 repetitions of each bounce speed, for a total of 54 trials per block and 216 trials over the experiment. The ball's elasticity, which determines the ratio of the ball's post-bounce to prebounce velocity, was changed between the second and third blocks from 0.58 to 0.73, or vice versa, with the direction of change counterbalanced across subjects. Because the floor's mass was set to infinity, the coefficient of restitution was determined solely by the elasticity of the ball, and the two terms may be used interchangeably. Although the change in elasticity occurred without the subject's knowledge, the resulting change in post-bounce trajectories was not subtle and was likely recognized after only a few trials.
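As an illustration of this design (a sketch of one possible construction, not the authors' code; the variable names are ours), the 216-trial schedule could be generated as follows:

import random

BOUNCE_SPEEDS = [-7.5, -8.25, -9.0]      # vertical velocity at the bounce (m/s)

def build_session(first_elasticity=0.58, second_elasticity=0.73):
    """Four blocks of 54 trials; elasticity switches between blocks 2 and 3."""
    trials = []
    for block in range(4):
        elasticity = first_elasticity if block < 2 else second_elasticity
        block_trials = [(speed, elasticity) for speed in BOUNCE_SPEEDS] * 18
        random.shuffle(block_trials)         # 18 repetitions of each speed
        trials.extend(block_trials)
    return trials                            # 216 (bounce_speed, elasticity) pairs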
Ball trajectories
The algorithm for generating ball trajectories was designed to produce three distinct distributions of parabolic prebounce trajectories. Although these trajectories were consistent with an acceleration due to gravity of −9.8 m/s², neither drag due to air resistance nor Magnus forces were incorporated into the simulation. To produce a wide range of ball trajectories, we also randomized several aspects of the balls' initial conditions uniformly across conditions. The bounce point was drawn from a two-dimensional Gaussian distribution with SDs of 0.25 m in width and 0.1 m in depth, truncated at two standard deviations and located 5.75 m from the front wall. The distance of the ball's initial position from the bounce point was randomly selected from a range of 5 to 5.5 m, so that the ball subtended approximately 0.38° of the subject's visual field at the time of launch (SD = 0.01°). To ensure that all balls were within the subjects' reach, the ball's approach angle was selected so that, after bouncing, when the ball passed near the observer (9 m from the front wall), it would be within 10 cm of the room's midline and 0.75 m from the column in which subjects were standing at the time of launch. Initial launch height varied from 1.35 to 2 m. Note that, given a fixed starting location, there always existed two solutions for the initial vertical component of the ball's velocity that would produce the predetermined bounce speed, corresponding to the two roots of a quadratic equation: one trajectory began with the vertical component of velocity in the negative direction and the other with it in the positive direction. To maintain a reasonable level of difficulty for our nonathletic subjects, launches were always defined by the positive solution, which produced more arcing trajectories. Prebounce flight duration was calculated from the known values of the ball's initial height, vertical velocity, and distance from the bounce point. Subsequently, the horizontal (X and Y) components of the ball's initial velocity were chosen so that the ball would collide with the ground plane at the predetermined bounce point. On each practice trial, the ball was launched along a head-on trajectory from a starting height of 2 m and bounced 7.75 m from the front wall with a bounce speed of −8.25 m/s and an elasticity of 0.65.
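The launch computation can be reconstructed from the constraints above. The sketch below is an illustration under those stated assumptions (drag-free flight, bounce at floor height), not the authors' code; it solves v_b^2 = v_0^2 + 2*g*h_0 for the positive root and derives the flight time and horizontal speed from it:

import math

G = 9.8  # magnitude of gravitational acceleration (m/s^2)

def launch_velocity(launch_height, bounce_speed, horizontal_distance):
    """bounce_speed is the (negative, downward) vertical velocity at the bounce."""
    # Two roots exist for the initial vertical velocity; the positive one
    # yields the more arcing trajectory used in the experiment.
    v0_vertical = math.sqrt(bounce_speed**2 - 2.0 * G * launch_height)
    # Time of flight follows from bounce_speed = v0_vertical - G * t.
    flight_time = (v0_vertical - bounce_speed) / G
    # Horizontal speed needed to reach the bounce point at that time.
    v_horizontal = horizontal_distance / flight_time
    return v0_vertical, v_horizontal, flight_time

# Example: a launch from 2 m aiming for a -8.25 m/s vertical speed at the bounce.
v_up, v_fwd, t_flight = launch_velocity(2.0, -8.25, 5.0)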
Gaze analysis
At the beginning of the experiment, the Arrington infrared monocular eye-tracking system was calibrated using a nine-point calibration grid centered in the subject's field of view. Calibration took place at the beginning of the experiment, and its accuracy was verified in the middle of each of the four experimental blocks. If necessary, subjects were recalibrated to ensure accuracy. After calibration, the average distance from the subject's fixation point to the calibration point, measured at each of the four corners of the calibration grid, was 1.11° (SD = 0.33°). During the experiment, the Arrington ViewPoint software reported the x and y pixel coordinates of the subject's point of visual regard based on the orientation of the subject's left eye. Eye-tracking data were initially sampled at 120 Hz. During analysis, each frame of the visual scene (updated at 60 Hz) was paired with the first sample of gaze data to appear after the frame was presented to the subject. This produced a mean temporal offset between scene frame and associated gaze data of 0.91 ms and a mean within-subject standard deviation of 7 ms. Gaze location was reported as pixel values that were passed through a median filter with a width of four frames followed by a two-frame moving-average filter. Subsequently, the point-of-regard data were combined with motion-capture data on eye position and head orientation to identify a single point along the subject's line of sight through the three-dimensional world. The unit vector extending from the subject's left eye towards this point defined the orientation of the subject's gaze within a world-centered reference frame and will hereafter be referred to as gaze-in-world. Because tracking was monocular, vergence angle was not calculated.
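The post-processing steps described above can be sketched as follows (an illustration only, with our own function names; the filter widths match those in the text):

import numpy as np

def median_then_moving_average(signal, median_width=4, avg_width=2):
    """Median filter followed by a short moving average, per gaze coordinate."""
    x = np.asarray(signal, dtype=float)
    med = np.array([np.median(x[max(0, i - median_width + 1):i + 1])
                    for i in range(len(x))])
    return np.convolve(med, np.ones(avg_width) / avg_width, mode="same")

def gaze_in_world(eye_position, point_of_regard_world):
    """Unit vector from the tracked left eye toward the 3-D point of regard."""
    v = np.asarray(point_of_regard_world, float) - np.asarray(eye_position, float)
    return v / np.linalg.norm(v)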
Saccades were identified using a second-order finite impulse response (FIR) filter, similar to the method proposed by Duchowski et al. (2002), which involves filtering the data with a kernel representative of a paradigmatic saccade, e.g., [0 1 2 3 2 1 0], producing a smoothed signal in which saccade amplitude and width are largely unaffected while erroneous signals are diminished. However, we adopted a modified kernel, [−1 0 1 2 3 2 1 0 −1], which had the additional benefit of producing exaggerated valleys in the gaze-velocity signal just before and after the saccade, facilitating their subsequent identification. As a first step, the algorithm identified isolated peaks of gaze velocity greater than 40°/s. To identify the starts and ends of saccades, the filtered velocity signal was differenced to produce a measure of gaze acceleration. Saccades extended from the first frame prior to peak saccade velocity on which the acceleration signal rose above 20°/s² until the frame on which the acceleration signal dropped below 20°/s². Fixations were defined as periods in which gaze velocity was below a threshold of 30°/s for a minimum of four frames (∼66 ms). To compensate for the influence of tracker noise, fixations that were separated temporally by less than three frames (∼50 ms) and spatially by less than 3° were grouped together as a single fixation, and isolated periods of fixation of less than 100 ms in duration were disregarded.
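A simplified reconstruction of this detection scheme (not the authors' analysis code; the kernel is normalized here so that the thresholds remain in deg/s, and the frame-to-frame difference of the filtered velocity stands in for acceleration) might look like this:

import numpy as np

KERNEL = np.array([-1, 0, 1, 2, 3, 2, 1, 0, -1], dtype=float)
KERNEL /= KERNEL.sum()                      # keep the filtered signal in deg/s

def detect_saccades(gaze_velocity, peak_thresh=40.0, accel_thresh=20.0):
    """Return (start, end) frame indices of candidate saccades."""
    smoothed = np.convolve(gaze_velocity, KERNEL, mode="same")
    accel = np.diff(smoothed, prepend=smoothed[0])   # per-frame velocity change
    saccades = []
    for i in range(1, len(smoothed) - 1):
        # Isolated velocity peak above threshold.
        if smoothed[i] > peak_thresh and smoothed[i - 1] <= smoothed[i] > smoothed[i + 1]:
            start = i
            while start > 0 and accel[start] > accel_thresh:
                start -= 1                   # walk back through the sharp rise
            end = i
            while end < len(accel) - 1 and accel[end + 1] < -accel_thresh:
                end += 1                     # walk forward through the sharp fall
            saccades.append((start, end))
    return saccades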
Periods of pursuit were classified on the basis of two criteria. The first criterion required that the angular difference between the subject's gaze vector and the vector extending from the position of the eye in space to the ball be 5° or less (this relatively large permissible error accommodates potential miscalibrations, as well as the somewhat variable performance one might expect in natural behavior). The second criterion concerned pursuit gain, defined as the ratio of the ball's retinal velocity to the component of the gaze-in-world velocity that was in line with the ball's movement vector. Pursuit was restricted to periods in which pursuit gain was between 0.3 and 1.2. Periods of pursuit separated by fewer than four frames were grouped together as a single pursuit, and isolated periods of pursuit less than 100 ms in duration were disregarded.
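The two criteria can be sketched as a per-frame classifier (illustrative only; the projection used for the gain is one reasonable reading of the definition above, and all names are ours):

import numpy as np

def angular_error_deg(gaze_dir, eye_to_ball_dir):
    """Angle between two unit direction vectors, in degrees."""
    cosine = np.clip(np.dot(gaze_dir, eye_to_ball_dir), -1.0, 1.0)
    return np.degrees(np.arccos(cosine))

def is_pursuit_sample(gaze_dir, eye_to_ball_dir, gaze_ang_vel, ball_ang_vel,
                      max_error_deg=5.0, gain_range=(0.3, 1.2)):
    # Criterion 1: gaze within 5 deg of the eye-to-ball direction.
    if angular_error_deg(gaze_dir, eye_to_ball_dir) > max_error_deg:
        return False
    # Criterion 2: pursuit gain, taken here as the gaze angular velocity
    # projected onto the ball's direction of motion, divided by the ball's
    # angular speed.
    ball_speed = np.linalg.norm(ball_ang_vel)
    if ball_speed == 0.0:
        return False
    gain = np.dot(gaze_ang_vel, ball_ang_vel / ball_speed) / ball_speed
    return gain_range[0] <= gain <= gain_range[1]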
Analysis was restricted to those trials containing a prebounce saccade that immediately transitioned into a fixation, with the criterion that the sequence must extend across the time of the bounce, thus ensuring that the predictive movements were not influenced by the post-bounce ball trajectory. Furthermore, we selected only those trials in which there was less than 33 ms between the end of the saccade and the subsequently identified fixation, with the intention of eliminating trials in which the fixation location was adjusted by a small corrective saccade near the time of the bounce. Overall, 83.6% of trials (SD = 12.1%) satisfied these criteria. On the remaining trials, the most common occurrence was a large saccade prior to the bounce, followed by one or more corrective post-bounce saccades that were presumably influenced by post-bounce information. On a small proportion of trials, classification was made difficult by a poor tracker signal or a large vestibulo-ocular component.
Results
Gaze patterns were highly consistent both within and across subjects. An illustration of the typical sequence of movements is shown in Figure 2. Subjects initially looked at the launch point and then tracked the ball with a combination of pursuit and saccades before initiating a large saccade 150–200 ms prior to the bounce. The subsequent fixation was maintained as the ball bounced and approached the gaze location. Subjects then resumed tracking with a combination of smooth pursuit and catch-up saccades until shortly before contact with the racquet. 
The analysis focuses on the prebounce saccades and the subsequent fixation. The time at which the prebounce saccade was made, relative to the bounce, is plotted in Figure 3 for the six conditions. The earliest saccades were initiated approximately 200 ms before the bounce, with earlier saccades for balls with greater bounce speed, F(2, 12) = 3.988, p < 0.048, and for more elastic balls, F(1, 6) = 18.99, p < 0.01. The interaction between bounce speed and elasticity, although suggestive, was not significant (p = 0.064). The standard deviation of the initiation time averaged across subjects was 28 ms (±15 ms).
Figure 3
 
Saccade initiation time relative to the time of the bounce for balls of different vertical ball speeds at bounce (bounce speed) and elasticity. Negative values indicate the time before the bounce. Balls with a greater elasticity are represented as triangles and those with lesser elasticity as circles. Error bars are ±1 standard error of the mean between subjects. For clarity, the data points have been slightly offset along the abscissa.
Subjects did not appear to be targeting the neighborhood of the bounce point but instead directed saccades to a location well above the bounce point, as depicted in Figure 4a for a single subject. Average data for all subjects are shown in Figure 5a, which plots the mean location of the fixation relative to the ball at the time of the bounce (solid lines). Subjects positioned fixation at a higher location when the ball was more elastic, F(1, 6) = 20.72, p < 0.005, and when bounce speed was greater, F(2, 12) = 14.00, p < 0.002. The between-subjects standard error is quite small, indicating that different subjects adopted a similar strategy. The dotted lines in Figure 5a show the location of the ball relative to the fixation at the point in time when the ball came closest to the fixation point after the bounce (for a more detailed description of saccade and fixation identification, see the Gaze analysis subsection of the Methods). The ball passed within about 1.5° of the fixation point, irrespective of the condition. The within-subject standard deviation across trials for a given condition was 0.57° (averaged over conditions and subjects). Thus, subjects accurately adjusted the location targeted by the predictive saccade to compensate for the different trajectories produced by the different combinations of bounce speed and elasticity.
Figure 4
 
Ball position is plotted relative to the fixation location. (a) Circles show ball location relative to the fixation location at the time of the bounce for individual trials in one condition for one subject. Angles of pitch reflect the angle from the eye to a plane located at eye height and parallel to the floor; negative values reflect angles directed towards the floor. Angles of yaw reflect rotation about the vertical axis that is in line with gravity. Crosses show ball locations relative to gaze at the point in time when the ball comes closest to gaze during the post-bounce portion of the fixation. This minimum distance occurred on average 170 ms after the bounce. (b) Ball position from eye height along the room's vertical axis is plotted against gaze position 170 ms after the bounce, for one subject.
Figure 5
 
Solid lines show average distance from the gaze vector to the ball at the time of bounce. The dotted lines show the minimum distance from the subject's gaze vector to the ball during fixation. Circles represent less elastic balls, and triangles represent more elastic balls. Symbol shading reflects bounce speed, consistent with values along the abscissa of Figure 5a. Individual subject data were averaged over trials, and these values were averaged across subjects. Error bars are ±1 SEM between subjects. For clarity, the data points have been slightly offset along the abscissa. (a) The abscissa shows the vertical component of the ball's speed at bounce in meters per second. In (b), the abscissa shows the ball's speed 170 ms after the bounce.
To investigate trial-by-trial correlations with variations of ball trajectory within a single condition, we calculated correlations between the pitch of the gaze-in-world vector and that of the vector extending from the eye to the ball at a fixed time after the bounce, where pitch is defined as the angle of the gaze or eye-to-ball vector from a plane located at eye height and parallel to the floor plane. Measurements of ball pitch were taken 170 ms after the bounce, which is the average time at which the ball reached its minimum distance from the gaze vector (a point that is revisited below). Figure 4b plots this relationship for a single representative subject, for whom gaze position was the most highly correlated with ball position. However, correlations varied widely between subjects, with correlation coefficients ranging from 0.76 to 0.40, an average value of 0.59, and a between-subject standard deviation of 0.13. Thus, despite the uniformly high spatial accuracy in positioning gaze near the ball's post-bounce trajectory on average, not all subjects appear to have been accurately adjusting gaze to the ball's post-bounce trajectory on a trial-by-trial basis.
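The pitch measure and the trial-by-trial correlation can be sketched as follows (illustrative only; the vertical axis is assumed to be the third vector component):

import numpy as np

def pitch_deg(direction):
    """Elevation of a 3-D direction relative to the horizontal plane at eye height."""
    v = np.asarray(direction, dtype=float)
    horizontal = np.linalg.norm(v[:2])
    return np.degrees(np.arctan2(v[2], horizontal))

def gaze_ball_pitch_correlation(gaze_vectors, eye_to_ball_vectors):
    """Pearson correlation of gaze pitch and ball pitch across trials."""
    gaze_pitch = np.array([pitch_deg(v) for v in gaze_vectors])
    ball_pitch = np.array([pitch_deg(v) for v in eye_to_ball_vectors])
    return np.corrcoef(gaze_pitch, ball_pitch)[0, 1]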
Note that there is some potential ambiguity in the calculation of the minimum distance from the fixation location to the ball. Because all balls approached from the general direction of the front wall, the descending portion of the ball's prebounce trajectory, when viewed from the subject's perspective, was often close to the ascending portion of its post-bounce trajectory. Thus, one might argue that fixation locations were directed only at the prebounce portion of the ball's trajectory. However, this claim is inconsistent with the finding that the gaze point varied with elasticity, the effects of which were not visible at the time the saccade was initiated. Furthermore, subjects maintained fixation until the ball passed close to the fixation point, about 79 ms later (SD = 22 ms). This suggests that fixation locations reflect prediction of the ball's post-bounce trajectory.
To further investigate the claim that subjects' fixations were based on a prediction of the ball's post-bounce location, Figure 5b shows the data from Figure 5a plotted as a function of the ball's speed 170 ms after the bounce, which is the average time at which the ball reached its minimum distance from the fixation location. The circles reflect less elastic balls and the triangles reflect more elastic balls. The plot shows that the location of fixation at the time of the bounce is linearly related to post-bounce ball speed, F(5, 30) = 19.7, p < 0.001, indicating that gaze location is guided by a prediction of the ball's post-bounce trajectory. Note that subjects fixated at the same height for the two data points that shared a post-bounce speed of approximately 45°/s. Importantly, subject behavior is similar even though these points had two different prebounce speeds and thus reflect two very different distributions of prebounce trajectories. Conversely, although the conditions indicated by the leftmost circle and the leftmost triangle share identical distributions of prebounce trajectories, differences in elasticity bring about different post-bounce ball speeds and thus different fixation locations for the two conditions. Thus, some time before the bounce, subjects are able to predict where the ball will be after the bounce on the basis of a combination of the prebounce ball speed and elasticity, the former information deriving from the visual field and the latter from prior experience. Despite this scaling of the distance between the fixation location and the bounce point with post-bounce ball speed, the minimum distance between the ball and fixation remained at a very low value of approximately 1.5° and was independent of the ball's properties, as demonstrated by the dotted line in Figure 5b.
Since subjects were not told that the elasticity of the ball would change, they must have adjusted their performance as a consequence of experience with the new ball when it changed halfway through the session. However, attempts to identify changes in gaze strategy immediately after the change in ball elasticity were unsuccessful. Results were likely complicated by rapid learning; in a real-world ball-catching task, subjects adapted fixation locations to a new ball elasticity in as few as three trials (M. Hayhoe, Mennie, Sullivan, & Gorgos, 2005). Analysis was also complicated by small effect sizes; the visual distance between balls 170 ms after the bounce was similar in magnitude to the random variation in gaze position within a single condition. Thus, it is unlikely that trial-by-trial changes due to learning surpassed random fluctuations over such a small number of trials.
In addition to high spatial accuracy, subjects also demonstrated temporal accuracy in predicting the time of the ball's arrival at the fixation point (Figure 6). The within-subject standard deviation of the time of the minimum, calculated across trials for a given condition, was 67 ms (averaged over conditions and subjects). Despite this variability, the average time at which the ball came closest to the gaze point was approximately 170 ms across all conditions. There was a significant interaction between elasticity and bounce speed, F(2, 12) = 4.59, p = 0.033, that was likely caused by a tendency for the minimum to occur sooner for less elastic balls with the lowest bounce speed. Even so, mean values fell within a tight range of approximately 150 to 200 ms. The small range of the mean fixation times and the lack of a strong positive slope suggest that subjects were able to predict both the location and the time of the ball's arrival at the fixation location after the bounce. To better understand why this is a valid claim, it helps to consider the hypothetical situation in which subjects did not raise fixation height with post-bounce ball speed but instead placed the fixation point at a constant height above the bounce point. Simulated ball arrival times for a constant fixation height of 9° above the bounce point (the average value in Figure 5b) are shown in Figure 6b. For a constant fixation height, more elastic balls with a higher bounce speed arrive at the fixation point sooner than less elastic balls with a lower bounce speed. Note that the subject data in Figure 6a differ from the simulated data in two notable ways: the data in Figure 6a lack both the negative slope observed within each value of elasticity and the large absolute difference between trajectories that share a common bounce speed but differ in elasticity. Subjects were able to produce the constant arrival time across all conditions, as seen in Figure 6a, only by scaling the height of the fixation with a prediction of how quickly the ball would be moving after the bounce, as demonstrated by the positive linear relationship seen in Figure 5b.
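The logic of this comparison can be illustrated with a simplified calculation. The sketch below holds gaze at a fixed physical height above the bounce point rather than at the fixed visual angle used for Figure 6b, and asks when a ball with a given post-bounce vertical speed reaches it; the qualitative conclusion (faster, bouncier balls arrive sooner) is the same:

import math

G = 9.8  # m/s^2

def time_to_fixed_height(bounce_speed, elasticity, height_above_bounce=0.5):
    """Time after the bounce at which the ball first reaches a fixed height (m)."""
    vy = elasticity * abs(bounce_speed)              # post-bounce vertical speed
    disc = vy ** 2 - 2.0 * G * height_above_bounce
    if disc < 0:
        return None                                  # ball never gets that high
    return (vy - math.sqrt(disc)) / G                # earlier of the two roots

# A slow, less elastic ball reaches the same height later than a fast,
# more elastic one (~0.14 s vs. ~0.08 s for a 0.5 m fixation height):
t_low = time_to_fixed_height(-7.5, 0.58)
t_high = time_to_fixed_height(-9.0, 0.73)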
Figure 6
 
The duration between the bounce and the time at which the ball arrived at the subject's fixation location. Circles represent less elastic balls, and triangles represent more elastic balls. (a) Mean values for all subjects. Error bars are ±1 SEM between subjects. (b) Simulated values if gaze were held 9° directly above the bounce-point on every trial and if all trials came from a head-on trajectory.
The total time from the initiation of the saccade to the time the ball reaches the closest point is between 300 and 400 ms. The values are a little greater for the more elastic balls, reflecting the tendency for subjects to initiate saccades earlier when the ball was more elastic. Thus the ball's post-bounce location is predicted at least 300 ms ahead of time and accounts for changes in trajectory, speed, and elasticity. Since that value does not take into account the time necessary to program the saccade, prediction presumably occurs even earlier than this. 
Discussion
This experiment was designed to better understand the basis of prediction in vision. As mentioned previously, visual prediction has long been considered a central aspect of perceptual function (Gregory, 1980), but there is relatively little evidence for explicit prediction. We took advantage of anticipatory saccadic eye movements to moving targets as an indicator of prediction, assuming that the information used for targeting is essentially visual. We found that naïve subjects make predictive saccades on almost every trial when attempting to intercept a quickly moving target. Shortly before the bounce, subjects initiated a saccade ahead of the moving ball, to a location above the bounce point through which the ball would pass shortly after the bounce, 300 to 400 ms later. The finding that this behavior was characteristic of all subjects, despite their lack of experience with racquet sports, suggests that such prediction is an intrinsic component of normal interceptive actions and an important feature of normal saccadic programming. It also suggests that prediction of the future visual state is important, just as prediction of somatosensory feedback is important in the control of body movements.
By independently manipulating the prebounce and post-bounce portion of the ball's trajectory, we distinguished between behavior that was reactive to prebounce visual information and behavior that was predictive of the ball's post-bounce position. To manipulate prebounce ball trajectory, balls were launched so that the vertical component of their velocity was one of three values at the time of bounce, thus producing three unique distributions of prebounce ball trajectories. To manipulate the post-bounce portion of the ball's trajectories, we varied ball elasticity halfway through the experiment. Because each prebounce trajectory could have either of two post-bounce trajectories depending on the elasticity of the ball, it was not possible to predict the ball's trajectory after the bounce using the prebounce speed alone—elasticity must also be taken into account. 
The finding that subjects were able to compensate for variations in both ball speed and elasticity to form accurate predictions of where the ball would be after the bounce demonstrates that subjects in our task used information beyond what was available in the visual field to predict the future location of the ball. The vertical component of the ball's velocity at the time of bounce varied from trial to trial, and the location of the fixation moved higher to accommodate the higher trajectory of the faster balls. When ball elasticity was varied between blocks of trials, the location of the predictive saccades again was adjusted to accommodate the different post-bounce trajectory. Subjects were accurate to within about 1.5° of visual angle despite the wide range of ball trajectories that were brought about by varying the approach angle of the ball, the location of the bounce point, and the distance and height of the ball at launch. This behavior was observed in all subjects with only modest variability between subjects. This finding, that both visual and nonvisual sources of information are combined to guide performance, bears similarity to the findings of Battaglia, Schrater, and Kersten (2005) and Lopez-Moliner and Keil (2012) who show that subjects take into account familiar size of objects during manual interceptive behavior. 
The data also suggest that subjects' spatial strategies were closely related to the time at which the ball would arrive at the fixation location. As combinations of bounce speed and elasticity changed the trajectory after the bounce, subjects raised their fixation locations so as to maintain an average duration of about 170 ms between the time of the bounce and the ball's arrival at the gaze point across all conditions. Such a constant-duration strategy requires that the distance between subjects' fixation locations and the bounce point be scaled linearly with the ball's speed after the bounce. This predictive scaling was based on information that was available at least 250 ms before the bounce and at least 400 ms before the time the ball arrived at the predicted location. It is unclear, however, how the duration of prediction was affected by the 80 ms of visuomotor lag that existed between physical movement and the subsequent updating of the visual display. The timing of the prediction also varied with velocity and elasticity. The finding that the saccade was initiated slightly earlier for more elastic balls indicates that the saccade was not simply triggered when the ball's speed became too high to easily pursue just prior to the bounce. A better explanation is that saccades were triggered by subjects' predictions of post-bounce conditions, rather than by the current visual information.
Despite uniform accuracy in positioning gaze near the ball's post-bounce trajectory on average, for a subset of subjects, trial-by-trial variations in gaze position were only moderately correlated with ball position 170 ms after the bounce. One explanation is that subjects with weaker correlations were able to achieve low average error by targeting the average location of the ball at a fixed time after the bounce for each combination of bounce speed and ball elasticity, while ignoring variations within each condition that occur on a trial-by-trial basis. All subjects, however, adjusted their predictions to account for elasticity in addition to velocity, thereby revealing sensitivity to information that was not present in the prebounce trajectory but reflected prior experience.
The precise purpose of the predictive saccades is not yet entirely clear. One reason to saccade ahead of the ball is simply that it is not possible to track the ball through a discontinuity. While this is a reason for an anticipatory saccade of some kind, it does not account for the spatial and temporal precision of the predictions. Since subjects acquired pursuit of the ball within about 50 ms of the ball's passage through the closest point, we might speculate that positioning gaze on the predicted trajectory facilitates subsequent pursuit (M. Hayhoe et al., 2005). Although interceptions do not require pursuit, it is likely to be a good strategy since it has been shown that pursuit movements facilitate prediction of future location (Spering, Schütz, Braun, & Gegenfurtner, 2011). Thus the saccade to position gaze on the predicted path may facilitate the goal of tracking the ball after it bounces, which in turn may facilitate interception. 
Although our findings share similarity with those of Land and McLeod (2000), they found that subjects saccade directly to the bounce point, rather than above it. This may be a consequence of the cricket ball's trajectory, which makes it hard to distinguish between fixations on the bounce point itself and fixations somewhere above it. In other contexts, subjects have also been found to make anticipatory saccades to locations above the bounce point, consistent with our findings: when catching (M. Hayhoe et al., 2005), when playing squash (M. M. Hayhoe et al., 2012), and when playing table-tennis (Land & Furneaux, 1997), subjects also target points beyond the bounce.
This study raises questions about the nature of the memory representations that guided the predictive movements. It is possible that subjects learn physical properties of the outside world, such as the coefficient of restitution that determined the dynamics of the ball's bounce, that subsequently support accurate simulation of the physical or visual environment. This kind of explanation is usually framed as an internal model. An alternative is to learn a visual mapping between prebounce visual information and the predicted post-bounce visual state and, by scaling this mapping with a coefficient proportional to ball elasticity, to accurately compensate for the way the visual transformation is influenced by the dynamics of the ball's bounce. For example, Warren, Kim, and Husney (1987) previously suggested that a coefficient representative of elasticity might be perceived on the basis of relative bounce height, the velocity before and after the bounce, or a comparison of the durations between successive bounces. This information might then be used to predict the amount of change in a visual source of information sampled just before and after the bounce. Candidate sources of visual information include, but are not limited to, the visual angle subtended by the ball, the optical gap between the ball and a fixed location in the environment, their derivatives, or some combination of such variables (for a review of additional information sources, see Zago et al., 2009). However, without a precise specification of what is meant by an internal model or of which optical variables might guide the visual scaling, it is hard to distinguish between these two kinds of explanation. What is clear is that subjects are able to take into account the way balls are affected by gravity and elasticity in order to accurately target the ball's future location a short duration after a bounce.
Although FEF and SEF have been implicated in the encoding of future target location (see Introduction), it seems likely that the information guiding prediction exists at a higher supramodal level (Nyffeler et al., 2008). There are several candidate areas. The parietal cortex shows activity corresponding to occluded target motion (Assad & Maunsell, 1995). Evidence from fMRI in humans implicates dorso-lateral prefrontal cortex (Pierrot-Deseilligny, Müri, Nyffeler, & Milea, 2005) and a distributed cortico-subcortical memory system including prefronto-striatal circuitry (Simó, Krisky, & Sweeney, 2005). Since the cerebellum has long been implicated in internal proprioceptive models, it is also likely that visual signals in the cerebellum play a role (Blakemore, Frith, & Wolpert, 2001; Fautrelle, Pichat, Ricolfi, Peyrin, & Bonnetblanc, 2011; Tseng, Diedrichsen, Krakauer, Shadmehr, & Bastian, 2007). It should be noted that the predictive role of the cerebellum takes advantage of an efference copy signal of the motor command, which is thought to be used by an internal model of the body. An efference copy signal is also thought to be necessary for the transient shifting of receptive fields prior to a saccade in a number of visual areas (Duhamel, Colby, & Goldberg, 1991; Merriam, Genovese, & Colby, 2007), which is presumably involved in the computation of the expected visual consequences of an eye movement (Melcher & Colby, 2008). However, in the absence of an efference copy about the motion of an external object, the prediction in the current experiment must be based on some kind of internal model or learned mapping that relates the current visual state of the quickly moving object to a likely future visual state.
In the present context, movements are based on some combination of the visual information specific to a particular trajectory and a memory-based component. In reaching, there is evidence for the optimal Bayesian integration of current visual information with stored priors reflecting learnt statistics of the visual image (Körding & Wolpert, 2004; Tassinari, Hudson, & Landy, 2006). The present results raise the possibility that a similar combination of information sources occurs with saccadic eye movements. Note that the proposition that the saccadic targeting mechanism computes a weighted combination of visual and memory signals differs from the idea that predictive saccades and visually guided (reactive) saccades derive from separate control strategies (Findlay, 1981; Milner & Goodale, 2008). It should also be noted that the existence of pervasive prediction in eye movement targeting is in marked contrast to image-based models of target selection, such as those based on image salience (Itti & Koch, 2001; Zhang, Tong, Marks, & Cottrell, 2008; Zhao & Koch, 2011). Although the current visual image has an important role to play, predictive eye movements reveal that the observer's best guess at the future state of the environment is based, in part, on representations that reflect learnt statistical properties of dynamic visual environments.
Acknowledgments
This research benefitted from the insightful critique of Brett Fajen, the programming support of John Stone, and the diligent support of James Wyatt Ray III throughout the process of data collection. This research was supported by NIH grants EY05729 and EY019174.
Commercial relationships: none. 
Corresponding author: Gabriel Jacob Diaz. 
Email: gdiaz@mail.cps.utexas.edu 
Address: Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA.
References
Assad J. Maunsell J. (1995). Neuronal correlates of inferred motion in primate posterior parietal cortex. Nature, 373(9), 518–521. [CrossRef] [PubMed]
Barborica A. Ferrera V. P. (2003). Estimating invisible target speed from neuronal activity in monkey frontal eye field. Nature Neuroscience,6(1), 66–74, doi:10.1038/nn990. [PubMed]
Barnes G. R. Collins C. J. S. (2008). Evidence for a link between the extra-retinal component of random-onset pursuit and the anticipatory pursuit of predictable object motion. Journal of Neurophysiology,100(2), 1135–1146, doi:10.1152/jn.00060.2008. [CrossRef] [PubMed]
Barnes G. R. Collins C. J. S. Arnold L. R. (2005). Predicting the duration of ocular pursuit in humans. Experimental Brain Research,160(1), 10–21, doi:10.1007/s00221-004-1981-3. [CrossRef] [PubMed]
Battaglia P. W. Schrater P. R. Kersten D. J. (2005). Auxiliary object knowledge influences visually-guided interception behavior. APGV '05 Proceedings of the 2nd Symposium on Applied Perception in Graphics and Visualization,95, 145–152. doi:10.1145/1080402.1080430.
Becker W. Fuchs A. (1985). Prediction in the oculomotor system: Smooth pursuit during transient disappearance of a visual target. Experimental Brain Research,57, 562–575. [CrossRef] [PubMed]
Bennett S. J. Barnes G. R. (2003). Human ocular pursuit during the transient disappearance of a visual target. Journal of Neurophysiology,90(4), 2504–2520, doi:10.1152/jn.01145.2002. [CrossRef] [PubMed]
Bennett S. J. Barnes G. R. (2004). Predictive smooth ocular pursuit during the transient disappearance of a visual target. Journal of Neurophysiology,92(1), 578–590, doi:10.1152/jn.01188.2003. [CrossRef] [PubMed]
Bennett S. J. Barnes G. R. (2006). Combined smooth and saccadic ocular pursuit during the transient occlusion of a moving visual object. Experimental Brain Research,168(3), 313–321, doi:10.1007/s00221-005-0101-3. [CrossRef] [PubMed]
Bennett S. J. Orban de Xivry J. J. Barnes G. R. Lefèvre P. (2007). Target acceleration can be extracted and represented within the predictive drive to ocular pursuit. Journal of Neurophysiology,98(3), 1405–1414, doi:10.1152/jn.00132.2007. [CrossRef] [PubMed]
Blakemore S. J. Frith C. D. Wolpert D. M. (2001). The cerebellum is involved in predicting the sensory consequences of action. Neuroreport,12(9), 1879–1884. [CrossRef] [PubMed]
Brouwer A. Brenner E. Smeets J. B. J. (2002). Perception of acceleration with short presentation times: Can acceleration be used in interception?Perception & Psychophysics,64(7), 1160–1168. [CrossRef] [PubMed]
de Hemptinne C. Lefèvre P. Missal M. (2008). Neuronal bases of directional expectation and anticipatory pursuit. The Journal of Neuroscience,28(17), 4298–4310, doi:10.1523/JNEUROSCI.5678-07.2008. [CrossRef] [PubMed]
Duchowski A. Medlin E. Cournia N. Murphy H. Gramopadhye A. Nair S. (2002). 3-D eye movement analysis. Behavior Research Methods, Instruments, & Computers: A Journal of the Psychonomic Society, Inc,34(4), 573–591. [CrossRef] [PubMed]
Duhamel J. Colby C. L. Goldberg M. E. (1991). The updating of the representation of visual space in parietal cortex by intended eye movements. Science, 255(1990), 1989–1991.
Enns J. T. Lleras A. (2008). What's next? New evidence for prediction in human vision. Trends in Cognitive Sciences, 12(9), 327–333, doi:10.1016/j.tics.2008.06.001.
Fautrelle L. Pichat C. Ricolfi F. Peyrin C. Bonnetblanc F. (2011). Catching falling objects: The role of the cerebellum in processing sensory-motor errors that may influence updating of feedforward commands. An fMRI study. Neuroscience, 190, 135–144, doi:10.1016/j.neuroscience.2011.06.034.
Ferrera V. P. Barborica A. (2010). Internally generated error signals in monkey frontal eye field during an inferred motion task. Journal of Neuroscience, 30(35), 11612–11623, doi:10.1523/JNEUROSCI.2977-10.2010.
Findlay J. (1981). Spatial and temporal factors in the predictive generation of saccadic eye movements. Vision Research, 21, 347–351.
Flanders M. Cordo P. J. (1989). Kinesthetic and visual control of a bimanual task: Specification of direction and amplitude. The Journal of Neuroscience, 9(2), 447–453.
Fukushima J. Akao T. Kurkin S. Kaneko C. Fukushima K. (2006). The vestibular-related frontal cortex and its role in smooth-pursuit eye movements and vestibular-pursuit interactions. Journal of Vestibular Research, 16, 1–22.
Georgopoulos A. P. Kalaska J. F. Massey J. T. (1981). Spatial trajectories and reaction times of aimed movements: Effects of practice, uncertainty, and change in target location. Journal of Neurophysiology, 46(4), 725–743.
Gregory R. L. (1980). Perceptions as hypotheses. Philosophical Transactions of the Royal Society of London, 290(1038), 181–197.
Hayhoe M. M. McKinney T. Chajka K. Pelz J. B. (2012). Predictive eye movements in natural vision. Experimental Brain Research, 217(1), 125–136, doi:10.1007/s00221-011-2979-2.
Hayhoe M. Mennie N. Sullivan B. Gorgos K. (2005, Fall). The role of internal models and prediction in catching balls. In Castelfranchi C. Balkenius C. Butz M. Ortony A. (Eds.), Technical Report FS-05-05. Proceedings of the AAAI Fall Symposium, Menlo Park, CA, USA.
Huerta M. F. Krubitzer L. A. Kaas J. H. (1987). Frontal eye field as defined by intracortical microstimulation in squirrel monkeys, owl monkeys, and macaque monkeys. II. Cortical connections. Journal of Comparative Neurology, 265(3), 332–361.
Itti L. Koch C. (2001). Computational modelling of visual attention. Nature Reviews Neuroscience, 2(3), 194–203.
Kowler E. (1989). Cognitive expectations, not habits, control anticipatory smooth oculomotor pursuit. Vision Research, 29(9), 1049–1057.
Kowler E. Martins A. (1984). The effect of expectations on slow oculomotor control—IV. Anticipatory smooth eye movements depend on prior target motions. Vision Research, 24(3), 197–210.
Kowler E. Martins A. Pavel M. (1984). The effect of expectations on slow oculomotor control—IV. Anticipatory smooth eye movements depend on prior target motions. Vision Research, 24(3), 197–210.
Körding K. P. Wolpert D. M. (2004). Bayesian integration in sensorimotor learning. Nature, 427(6971), 244–247, doi:10.1038/nature02169.
Land M. F. Furneaux S. (1997). The knowledge base of the oculomotor system. Philosophical Transactions of the Royal Society of London, 352(1358), 1231–1239, doi:10.1098/rstb.1997.0105.
Land M. F. McLeod P. (2000). From eye movements to actions: How batsmen hit the ball. Nature Neuroscience, 3(12), 1340–1345, doi:10.1038/81887.
Lopez-Moliner J. Keil M. S. (2012). People favour imperfect catching by assuming a stable world. PLoS ONE, 7(4).
Madelain L. Krauzlis R. J. (2003). Effects of learning on smooth pursuit during transient disappearance of a visual target. Journal of Neurophysiology, 90(2), 972–982, doi:10.1152/jn.00869.2002.
Melcher D. Colby C. L. (2008). Trans-saccadic perception. Trends in Cognitive Sciences, 12(12), 466–473, doi:10.1016/j.tics.2008.09.003.
Merriam E. P. Genovese C. R. Colby C. L. (2007). Remapping in human visual cortex. Journal of Neurophysiology, 97(2), 1738–1755, doi:10.1152/jn.00189.2006.
Miall R. C. Weir D. J. Wolpert D. M. Stein J. F. (1993). Is the cerebellum a Smith predictor? Journal of Motor Behavior, 25(3), 203–216, doi:10.1080/00222895.1993.9942050.
Milner A. Goodale M. (2008). Two visual systems re-viewed. Neuropsychologia, 46(3), 774–785, doi:10.1016/j.neuropsychologia.2007.10.005.
Mrotek L. A. Soechting J. F. (2007). Predicting curvilinear target motion through an occlusion. Experimental Brain Research, 178(1), 99–114, doi:10.1007/s00221-006-0717-y.
Mulliken G. H. Andersen R. A. (2009). Forward models and state estimation in posterior parietal cortex. In Gazzaniga M. S. (Ed.), The cognitive neurosciences IV (pp. 599–611). Cambridge, MA: MIT Press.
Nyffeler T. Rivaud-Pechoux S. Wattiez N. Gaymard B. (2008). Involvement of the supplementary eye field in oculomotor predictive behavior. Journal of Cognitive Neuroscience, 20(9), 1583–1594, doi:10.1162/jocn.2008.20073.
Orban de Xivry J.-J. Bennett S. J. Lefèvre P. Barnes G. R. (2006). Evidence for synergy between saccades and smooth pursuit during transient target disappearance. Journal of Neurophysiology, 95(1), 418–427.
Orban de Xivry J. J. Missal M. Lefèvre P. (2008). A dynamic representation of target motion drives predictive smooth pursuit during target blanking. Journal of Vision, 8(15):6, 1–13, http://www.journalofvision.org/content/8/15/6, doi:10.1167/8.15.6.
Pierrot-Deseilligny C. Müri R. M. Nyffeler T. Milea D. (2005). The role of the human dorsolateral prefrontal cortex in ocular motor behavior. Annals of the New York Academy of Sciences, 1039, 239–251, doi:10.1196/annals.1325.023.
Saunders J. A. Knill D. C. (2003). Humans use continuous visual feedback from the hand to control fast reaching movements. Experimental Brain Research, 152(3), 341–352, doi:10.1007/s00221-003-1525-2.
Shadmehr R. Smith M. Krakauer J. (2010). Error correction, sensory prediction, and adaptation in motor control. Annual Review of Neuroscience, 33, 89–108, doi:10.1146/annurev-neuro-060909-153135.
Shelhamer M. Joiner W. M. (2003). Saccades exhibit abrupt transition between reactive and predictive; predictive saccade sequences have long-term correlations. Journal of Neurophysiology, 90(4), 2763–2769, doi:10.1152/jn.00478.2003.
Shichinohe N. Akao T. Kurkin S. Fukushima J. Kaneko C. R. S. Fukushima K. (2009). Memory and decision making in the frontal cortex during visual motion processing for smooth pursuit eye movements. Neuron, 62(5), 717–732, doi:10.1016/j.neuron.2009.05.010.
Simó L. S. Krisky C. M. Sweeney J. A. (2005). Functional neuroanatomy of anticipatory behavior: Dissociation between sensory-driven and memory-driven systems. Cerebral Cortex, 15(12), 1982–1991, doi:10.1093/cercor/bhi073.
Spering M. Schütz A. C. Braun D. I. Gegenfurtner K. R. (2011). Keep your eyes on the ball: Smooth pursuit eye movements enhance prediction of visual motion. Journal of Neurophysiology, 105(4), 1756–1767, doi:10.1152/jn.00344.2010.
Tabata H. Miura K. Kawano K. (2007). Preparation for smooth pursuit eye movement based on expectation in humans. Systems and Computers in Japan, 38(6), 1–9, doi:10.1002/scj.20677.
Tassinari H. Hudson T. E. Landy M. S. (2006). Combining priors and noisy visual cues in a rapid pointing task. The Journal of Neuroscience, 26(40), 10154–10163, doi:10.1523/JNEUROSCI.2779-06.2006.
Tseng Y.-W. Diedrichsen J. Krakauer J. W. Shadmehr R. Bastian A. J. (2007). Sensory prediction errors drive cerebellum-dependent adaptation of reaching. Journal of Neurophysiology, 98(1), 54–62, doi:10.1152/jn.00266.2007.
Von Helmholtz H. (1962). Helmholtz's treatise on physiological optics (Southall James P. C., Ed.). New York: Dover Publishing.
Warren W. (2006). The dynamics of perception and action. Psychological Review, 113(2), 358–389, doi:10.1037/0033-295X.113.2.358.
Warren W. H. Kim E. E. Husney R. (1987). The way the ball bounces: Visual and auditory perception of elasticity and control of the bounce pass. Perception, 16, 309–336.
Wolpert D. M. Miall R. C. Kawato M. (1998). Internal models in the cerebellum. Trends in Cognitive Sciences, 2(9), 338–347.
Xiao Q. Barborica A. Ferrera V. P. (2007). Modulation of visual responses in macaque frontal eye field during covert tracking of invisible targets. Cerebral Cortex, 17(4), 918–928, doi:10.1093/cercor/bhl002.
Zago M. McIntyre J. Senot P. Lacquaniti F. (2009). Visuo-motor coordination and internal models for object interception. Experimental Brain Research, 192(4), 571–604, doi:10.1007/s00221-008-1691-3.
Zhang L. Tong M. H. Marks T. K. Cottrell G. W. (2008). SUN: A Bayesian framework for saliency using natural statistics. Journal of Vision, 8(7):32, 1–20, http://www.journalofvision.org/content/8/7/32, doi:10.1167/8.7.32.
Zhao Q. Koch C. (2011). Learning a saliency map using fixated locations in natural scenes. Journal of Vision, 11(3):9, 1–15, http://www.journalofvision.org/content/11/3/9, doi:10.1167/11.3.9.
Figure 1
 
(a) Subjects viewed a virtual racquetball court in a wide field-of-view head-mounted display. Launched balls were hit with a racquetball racquet that was tracked by a motion-capture system and represented visually in the virtual world. The image of the eye and the crosshair depicting the gaze-point, shown in the inset, were overlaid post hoc and were not visible to the subject. (b) A side view depicting a possible set of distributions of ball trajectories for a single session. Three distributions of prebounce trajectories differ in the vertical component of the ball's speed upon bounce. For each prebounce trajectory, there were two possible post-bounce trajectories depending on the level of elasticity, shown by solid and dotted lines.
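The paper does not publish its trajectory-generation code, but the bounce model implied by panel (b), in which one prebounce trajectory splits into two post-bounce trajectories according to elasticity, can be sketched with a simple coefficient-of-restitution rule: the vertical speed just after the bounce is the prebounce vertical speed scaled by the elasticity. The function name and all parameter values below are ours, chosen purely for illustration.

```python
import numpy as np

def post_bounce_trajectory(pre_bounce_vz, horizontal_speed, elasticity,
                           g=9.81, duration=0.6, dt=0.01):
    """Sample a post-bounce ball arc under a coefficient-of-restitution bounce.

    pre_bounce_vz    : downward vertical speed at impact (m/s)
    horizontal_speed : horizontal speed toward the subject (m/s), assumed unchanged by the bounce
    elasticity       : fraction of vertical speed retained after the bounce
    Returns time, horizontal distance travelled since the bounce, and height above the floor.
    """
    t = np.arange(0.0, duration, dt)
    vz0 = elasticity * pre_bounce_vz        # upward speed just after the bounce
    height = vz0 * t - 0.5 * g * t**2       # ballistic rise and fall
    distance = horizontal_speed * t
    return t, distance, height

# One prebounce trajectory, two elasticity levels -> two post-bounce arcs, as in Figure 1b.
for e in (0.7, 0.9):                        # illustrative values, not those used in the study
    t, x, z = post_bounce_trajectory(pre_bounce_vz=4.0, horizontal_speed=6.0, elasticity=e)
```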
Figure 2
 
The time course of predictive gaze patterns on a representative trial, from the time of launch until the time of interception. The solid green line depicts the pitch of the ball relative to the plane located at eye height and parallel with the floor. The dotted blue line depicts the pitch of the gaze-in-world vector extending from the eye, with the solid portion reflecting a prebounce saccade and fixation that were predictive of the ball's post-bounce location.
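The two pitch traces in Figure 2 are elevation angles measured against the horizontal plane at eye height, one for the eye-to-ball vector and one for the gaze-in-world vector. A minimal sketch of that computation follows; the coordinate convention (z vertical) and all names are our assumptions, not taken from the study.

```python
import numpy as np

def pitch_deg(direction):
    """Elevation of a 3-D direction vector relative to the horizontal plane at eye height,
    in degrees; negative values point below eye level. Assumes z is the vertical axis."""
    d = np.asarray(direction, dtype=float)
    horizontal = np.hypot(d[0], d[1])
    return np.degrees(np.arctan2(d[2], horizontal))

def ball_pitch_deg(eye_pos, ball_pos):
    """Pitch of the ball as seen from the eye (the solid trace in Figure 2)."""
    return pitch_deg(np.asarray(ball_pos, float) - np.asarray(eye_pos, float))

# gaze_dir would be the gaze-in-world direction vector reported by the eye tracker.
gaze_dir = [0.9, 0.1, -0.3]
print(pitch_deg(gaze_dir), ball_pitch_deg([0.0, 0.0, 1.7], [3.0, 0.5, 1.2]))
```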
Figure 3
 
Saccade initiation time relative to the time of the bounce for balls of different vertical ball speeds at bounce (bounce speed) and elasticity. Negative values indicate the time before the bounce. Balls with a greater elasticity are represented as triangles and those with lesser elasticity as circles. Error bars are ±1 standard error of the mean between subjects. For clarity, the data points have been slightly offset along the abscissa.
Figure 4
 
Ball position is plotted relative to the fixation location. (a) Circles show ball location relative to the fixation location at the time of the bounce for individual trials in one condition for one subject. Angles of pitch reflect the angle from the eye to a plane that is located at eye height and parallel to the floor. Negative values reflect angles directed towards the floor. Angles of yaw reflect rotation about the vertical axis that is in line with gravity. Crosses show ball locations relative to gaze at the point in time when the ball comes closest to gaze during the post-bounce portion of the fixation. This minimum distance occurred on average 170 ms after the bounce. (b) Ball position from eye height along the room's vertical axis is plotted against gaze position 170 ms after the bounce, for one subject.
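The crosses in Figure 4a require locating, for each trial, the post-bounce frame at which the ball passes closest to gaze. A minimal sketch of that search over per-frame samples is given below; the array names and the use of an angular error measure are our assumptions. Averaging the resulting minima within subjects and then across subjects would yield values comparable in kind to the dotted lines in Figure 5.

```python
import numpy as np

def gaze_ball_angle_deg(gaze_dir, eye_pos, ball_pos):
    """Angle in degrees between the gaze-in-world vector and the eye-to-ball vector."""
    g = np.asarray(gaze_dir, float)
    b = np.asarray(ball_pos, float) - np.asarray(eye_pos, float)
    cos_a = np.dot(g, b) / (np.linalg.norm(g) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def closest_post_bounce_approach(times, gaze_dirs, eye_positions, ball_positions, bounce_time):
    """Time and angular error at which the ball comes closest to gaze after the bounce."""
    times = np.asarray(times, float)
    errors = np.array([gaze_ball_angle_deg(g, e, b)
                       for g, e, b in zip(gaze_dirs, eye_positions, ball_positions)])
    errors = np.where(times >= bounce_time, errors, np.inf)   # ignore prebounce frames
    i = int(np.argmin(errors))
    return times[i], errors[i]
```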
Figure 5
 
Solid lines show average distance from the gaze vector to the ball at the time of bounce. The dotted lines show the minimum distance from the subject's gaze vector to the ball during fixation. Circles represent less elastic balls, and triangles represent more elastic balls. Symbol shading reflects bounce speed, consistent with values along the abscissa of Figure 5a. Individual subject data were averaged over trials, and these values were averaged across subjects. Error bars are ±1 SEM between subjects. For clarity, the data points have been slightly offset along the abscissa. (a) The abscissa shows the vertical component of the ball's speed at bounce in meters per second. In (b), the abscissa shows the ball's speed 170 ms after the bounce.
Figure 6
 
The duration between the bounce and the time at which the ball arrived at the subject's fixation location. Circles represent less elastic balls, and triangles represent more elastic balls. (a) Mean values for all subjects. Error bars are ±1 SEM between subjects. (b) Simulated values if gaze were held 9° directly above the bounce-point on every trial and if all trials came from a head-on trajectory.
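The simulated values in panel (b) follow from elementary ballistics: with gaze held 9° above the bounce point, the ball reaches the fixation direction once it has risen high enough that the eye-to-ball direction is pitched up 9° from the eye-to-bounce-point direction. A rough sketch of that calculation is shown here; the eye height, eye-to-bounce distance, and elasticity values are illustrative assumptions, and the ball's horizontal progress over this brief interval is neglected.

```python
import numpy as np

def arrival_time_at_fixation(bounce_speed, elasticity,
                             eye_height=1.7, eye_to_bounce=2.5,
                             gaze_above_bounce_deg=9.0, g=9.81):
    """Approximate time after the bounce for the ball to rise into a fixation
    held 9 degrees above the bounce point, for a head-on trajectory.

    bounce_speed : vertical speed at impact (m/s); post-bounce speed = elasticity * bounce_speed.
    Geometry values are illustrative, and the ball's horizontal motion is ignored.
    """
    bounce_pitch = np.arctan2(-eye_height, eye_to_bounce)       # direction of the bounce point
    fixation_pitch = bounce_pitch + np.radians(gaze_above_bounce_deg)
    # Height above the floor at which the ball crosses the fixation direction.
    z_cross = eye_height + eye_to_bounce * np.tan(fixation_pitch)

    vz = elasticity * bounce_speed
    # Solve vz*t - 0.5*g*t**2 = z_cross for the earliest positive root.
    disc = vz**2 - 2.0 * g * z_cross
    if disc < 0:
        return None                                             # the ball never rises that high
    return (vz - np.sqrt(disc)) / g

# Less elastic balls rise more slowly, so they reach the fixation direction later.
for e in (0.7, 0.9):                                            # illustrative elasticity values
    print(e, arrival_time_at_fixation(bounce_speed=5.0, elasticity=e))
```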