February 2019, Volume 19, Issue 2
Open Access Article
Decoding go/no-go decisions from eye movements
Author Affiliations
  • Jolande Fooken
    Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada
    Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada
    fooken@cs.ubc.ca
  • Miriam Spering
    Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada
    Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada
    Center for Brain Health, University of British Columbia, Vancouver, Canada
    Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, Canada
    miriam.spering@ubc.ca
Journal of Vision, February 2019, Vol. 19(2):5. https://doi.org/10.1167/19.2.5
Abstract

Neural activity in brain areas involved in the planning and execution of eye movements predicts the outcome of an upcoming perceptual decision. Many real-world decisions, such as whether to swing at a baseball pitch, are accompanied by characteristic eye-movement behavior. Here we ask whether human eye-movement kinematics can sensitively predict decision outcomes in a go/no-go task requiring rapid interceptive hand movements. Observers (n = 45) viewed a moving target that passed through or missed a designated strike box. Critically, the target disappeared briefly after launch, and observers had to predict the target's trajectory, withholding a hand movement if it missed (no-go) or intercepting inside the strike box (go). We found that go/no-go decisions were reflected in distinct eye-movement responses on a trial-by-trial basis: Eye-position error and targeting-saccade dynamics predicted decision outcome with 76% accuracy across conditions. Model prediction accuracy was related to observers' decision accuracy across different levels of task difficulty and sensory-signal strength. Our findings suggest that eye movements provide a sensitive and continuous readout of internal neural decision-making processes and reflect decision-task requirements in human observers.

Introduction
Every baseball fan loves the sound of a hitter's bat connecting with the ball for a home run. Just prior to this magical moment, the batter has to decide whether to swing at the pitch by rapidly decoding and predicting the ball's motion trajectory. Perceptual decisions in such situations rely on a hierarchy of brain areas involved with sensory processing and motor control (Gold & Shadlen, 2007; Hanks & Summerfield, 2017; Heekeren, Marrett, & Ungerleider, 2008; Schall, 2013). Importantly, activity in these brain areas is altered prior to the choice response. Reliable neural signatures, reflecting the outcome of an upcoming perceptual decision, have been observed across different tasks and species (Bennur & Gold, 2011; Crapse, Lau, & Basso, 2018; Ding & Gold, 2013; Gold & Shadlen, 2000; Heinen, Rowland, Lee, & Wade, 2006; Kim, Badler, & Heinen, 2005; Liu & Pleskac, 2011; Pape & Siegel, 2016; Pho, Goard, Woodson, Crawford, & Sur, 2018; Shadlen & Newsome, 1996; Yates, Park, Katz, Pillow, & Huk, 2017). However, the link between decision signals and continuous motor actions such as smooth-pursuit eye movements is less well understood.
Here we ask whether decision outcome in a rapid go/no-go interception task can be reflected in humans' eye-movement responses on a trial-by-trial basis. Many of the brain areas involved in the control of eye movements also carry decision signals. In natural tasks, these decision signals are ultimately linked to the action outcome—for example, batters will only swing at pitches they judge to be hittable. Eye movements closely reflect task requirements and action goals and provide a continuous update of the action space (Brenner & Smeets, 2017; Hayhoe, 2017; Hayhoe, McKinney, Chajka, & Pelz, 2012; Johansson, Westling, Bäckström, & Flanagan, 2001; Land, Mennie, & Rusted, 1999; Smeets, Hayhoe, & Ballard, 1996). Moreover, eye movements are modulated by decision formation even when the eye movements are task irrelevant—for example, when they are not indicating the choice response in a visual-discrimination task (Joo, Katz, & Huk, 2016). Given the close link between neural activity in oculomotor areas and decision formation, and between perceptual decisions and action goals, we propose that eye movements might be a sensitive indicator of decision outcome. 
We developed a rapid interception task called EyeStrike, in which observers had to make a perceptual decision and predict whether a briefly presented moving target would pass or miss a designated strike box (Figure 1A). Similar to ocular baseball, a paradigm developed by Heinen and colleagues (Kim et al., 2005), observers were instructed to withhold an action if they judged the target to be outside the strike box (no-go) and to otherwise initiate an action (go). In contrast to ocular baseball, observers in EyeStrike were asked to track the visual target during decision formation with their eyes and to indicate their choice by withholding or initiating an interceptive hand movement. This allowed us to decode decision making from a continuous natural eye-movement response. We related observers' eye movements to the decision outcome (go vs. no-go). Congruent with decision signatures in neural activity, we found that go/no-go decisions were reflected in distinct eye-movement responses on a trial-by-trial basis, and that eye-movement-based prediction accuracy was related to observers' decision accuracy. Model prediction accuracy was higher for easy versus hard task versions and increased with increasing signal strength, suggesting that eye-movement signatures also reflect sensory-signal accumulation toward a decision threshold. 
Figure 1
Experimental procedure and design. (A) Observers were asked to fixate on a small black Gaussian dot (±15° from screen center). After 0.5–1 s, the target moved along a diagonal linear path and disappeared after being shown briefly (100–300 ms). Observers had to withhold a hand movement if the target missed a strike box (no-go) and intercept the target inside the strike box if it passed through (go). Observers received feedback about their interception position (red disk) in pass trials and about the target's final position (black X) in all trials. (B) Paradigm design. The target launched either upward or downward at one of four angles (5°, 7°, 10°, or 12°). Trajectories that passed or missed close to the corners of the strike box (7° and 10°) were more difficult.
Methods
Observers
We collected data from 45 male observers (26 members of the University of British Columbia male varsity baseball team and 19 age- and gender-matched nonathletes; mean age: 20.6 ± 1.9 years) with normal or corrected-to-normal visual acuity; 39 were right-handed, six were left-handed (dominant hand was defined as the throwing hand). All observers were unaware of the purpose of the experiment. The experimental protocol adhered to the Declaration of Helsinki and was approved by the University of British Columbia Behavioral Research Ethics Board; observers gave written informed consent prior to participation. 
EyeStrike paradigm
Observers were asked to track a moving target, a black Gaussian dot (SD = 0.38°) with a diameter of 2° of visual angle, and to predict whether the target would pass (“go” response required) or miss (“no-go” required) a designated strike box (Figure 1A and 1B). We instructed observers to withhold a hand movement in miss trajectories and to intercept the ball with their index finger while it was in the strike box in pass trajectories. Depending on the target speed, observers had a time window of 150–170 ms to intercept the target inside the box. Each interception started from a table-fixed position and was made with the dominant hand. 
Each trial started with drift correction during fixation on a target presented 15° to the left or right of the screen center. During drift correction, the eye had to be within a 1.4° radius of the fixation target for 0.5–1 s. Stimulus motion was always into the ipsilateral field—that is, for right-handed observers, stimulus motion was from left to right (see example trial in Figure 1A), and vice versa. Then the stimulus followed a linear, diagonal path that either hit or missed a darker gray (31.5 cd/m2) strike box that was 6° × 10° in size and offset by 12° from the center to the side of interception (Figure 1B). Stimulus velocity followed natural forces (gravity, drag force, Magnus effect; Fooken, Yeo, Pai, & Spering, 2016). Easy trajectories clearly passed through (launch angle: ±5°) or clearly missed the strike box (launch angle: ±12°). Difficult trajectories passed (launch angle: ±7°) or missed (launch angle: ±10°) the strike box close to its corners. Importantly, the target disappeared shortly after launch, yielding different degrees of motion-signal strength. A combination of different viewing durations (100–300 ms) and stimulus speeds (36°/s or 41°/s) resulted in visible trajectory lengths ranging from 3.6° (short or weak signal strength) to 12.3° (long or strong signal strength). All conditions were randomized and equally balanced. We instructed observers to track the target with their eyes and to follow its assumed trajectory even after it had disappeared. Each trial ended either when observers intercepted the target or when the target reached the end of the screen (1–1.1 s). At the end of each trial observers received feedback about their performance; target end position was shown, and correct or incorrect decisions were indicated (see Figure 1A). Each observer performed a familiarization session (16 trials; full trajectory visible), followed by 384 experimental trials in which the target disappeared. 
We defined four response types following conventions in the literature (Kim et al., 2005; Yang, Hwang, Ford, & Heinen, 2010). Trials were classified as correct go if observers made an interception (i.e., touched the screen) in response to a pass trajectory and as incorrect go if they moved their hands more than halfway to the screen during a miss trajectory. Trials were classified as correct no-go or incorrect no-go if observers withheld a hand movement or moved their hand less than halfway to the screen in response to a miss or pass trajectory, respectively. Decision accuracy was calculated as the percentage of all correct go and no-go responses. 
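As a concrete illustration (not the authors' analysis code), the four response types reduce to a simple rule over each trial's trajectory type and hand movement. The following MATLAB sketch uses hypothetical per-trial variables: intercepted (screen touched), isPass (trajectory passed the strike box), and maxHandDist/distToScreen for the halfway criterion.

```matlab
function label = classifyResponse(intercepted, isPass, maxHandDist, distToScreen)
% Classify a trial into one of the four response types described above.
% All input names are illustrative; inputs are per-trial logicals/scalars.
movedPastHalfway = maxHandDist > 0.5 * distToScreen;
if isPass
    if intercepted
        label = 'correct go';
    else
        label = 'incorrect no-go';  % pass trajectory, interception withheld
    end
else                                % miss trajectory
    if movedPastHalfway
        label = 'incorrect go';     % hand moved more than halfway to screen
    else
        label = 'correct no-go';
    end
end
end
```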
Visual display and apparatus
The visual target was shown at a luminance of 5.4 cd/m2 on a uniform gray background (35.9 cd/m2). Stimuli were back-projected onto a translucent screen with a PROPixx video projector (VPixx Technologies, Saint-Bruno, Canada; refresh rate: 60 Hz; resolution: 1,280 × 1,024 pixels). The displayed window was 44.5 × 36 cm or 55° × 45° in size. Stimulus display and data collection were controlled by a PC (NVIDIA GeForce GT 430 graphics card), and the experiment was programmed in Matlab 7.1 using Psychtoolbox 3.0.8 (Brainard, 1997; Kleiner et al., 2007; Pelli, 1997). Observers were seated in a dimly lit room at 46 cm distance from the screen with their head supported by a combined chin and forehead rest. 
Eye- and hand-movement recordings and preprocessing
Eye-position signals from the right eye were recorded with a video-based eye tracker (EyeLink 1000 tower mount; SR Research Ltd., Ottawa, Canada) and sampled at 1000 Hz. Eye movements were analyzed off-line using custom-made routines in Matlab. Eye-position and velocity profiles were filtered using low-pass, second-order Butterworth filters with cutoff frequencies of 15 Hz (position) and 30 Hz (velocity). Saccades were detected based on a combined velocity and acceleration criterion: Five consecutive frames had to exceed a fixed velocity criterion of 50°/s; saccade on- and offsets were then determined as acceleration minima and maxima, respectively, and saccades were excluded from pursuit analysis. Pursuit onset was detected in individual traces using a piecewise linear function fitted to the filtered position trace (Fooken et al., 2016).
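A minimal sketch of these preprocessing steps might look as follows, assuming MATLAB's Signal Processing Toolbox; eyePosDeg (a column vector of horizontal eye position in degrees) and the function name detectSaccades are our own illustrative choices, not the authors' routines.

```matlab
function onsets = detectSaccades(eyePosDeg, fs)
% Low-pass filter position (15 Hz) and velocity (30 Hz) with second-order
% Butterworth filters, then flag runs of >= 5 samples exceeding 50 deg/s.
[bp, ap] = butter(2, 15 / (fs / 2));          % position filter
[bv, av] = butter(2, 30 / (fs / 2));          % velocity filter
pos = filtfilt(bp, ap, eyePosDeg);            % zero-phase position filtering
vel = filtfilt(bv, av, [0; diff(pos)] * fs);  % eye velocity (deg/s)
acc = [0; diff(vel)] * fs;                    % acceleration, for on-/offset refinement

fast = abs(vel) > 50;                         % fixed 50 deg/s velocity criterion
onsets = find(conv(double(fast), ones(5, 1), 'valid') == 5);
% In the full analysis, on- and offsets would then be refined at the
% acceleration minima/maxima around each detected run, and saccade samples
% excluded from the pursuit analysis, as described in the text.
end
```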
Finger position was recorded with a magnetic tracker (3D Guidance trakSTAR; Ascension Technology Corp., Shelburne, VT) at a sampling rate of 240 Hz; a lightweight sensor was attached to the index fingertip of the observer's dominant hand with a small Velcro strap. The 2-D finger interception position was recorded in screen-centered x- and y-coordinates. Each trial was manually inspected, and a total of 345 trials (2%) were excluded across all observers due to eye- or hand-tracker signal loss. 
Eye-movement data analyses
The stimulus characteristics in this paradigm triggered tracking behavior that most closely resembled short periods of smooth pursuit and catch-up saccades (de Brouwer, Yuksel, Blohm, Missal, & Lefèvre, 2002; Fooken et al., 2016). To evaluate this tracking behavior we analyzed eye-movement position and velocity relative to target position and velocity and extracted the following pursuit measures: horizontal eye-position error, defined as the mean deviation across the entire trial (from eye-movement onset to stimulus offset) of horizontal eye position relative to horizontal target position; relative eye velocity (gain) during the closed-loop phase (140 ms after pursuit onset until interception); pursuit latency; and open-loop velocity (pursuit onset until the start of the closed-loop phase). We further extracted the number of catch-up saccades, the latency of the initiation saccade and each trial's targeting saccade (i.e., in pass trials this is the saccade into the strike box), and average saccade amplitude. 
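For example, the first two pursuit measures reduce to averages over aligned eye and target samples. This is a hedged sketch with hypothetical names (eyeX, targetX in degrees; eyeVel, targetVel in deg/s; idxTrial and idxClosedLoop index the analysis windows defined above):

```matlab
% Horizontal eye-position error: mean horizontal deviation of the eye from
% the target between eye-movement onset and stimulus offset.
posError = mean(eyeX(idxTrial) - targetX(idxTrial));             % deg

% Closed-loop pursuit gain: eye velocity relative to target velocity,
% from 140 ms after pursuit onset until interception.
gain = mean(eyeVel(idxClosedLoop) ./ targetVel(idxClosedLoop));
```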
Statistical analysis
Varsity baseball players and nonathletes did not differ significantly in overall eye- and hand-movement accuracy. These results were therefore averaged across groups. However, group differences are reported for task-related performance in EyeStrike. Effects of task difficulty and signal strength on decision-making accuracy were examined using repeated-measures analysis of variance with the between-subjects factor of player (baseball player vs. nonathlete) and within-subjects factors of difficulty and strength. Differences between conditions (e.g., easy vs. hard) were evaluated using Welch's two-sample t tests. All statistical analyses were performed in R. To identify the pursuit and saccade measures that predicted decision outcome best, we ran a logistic regression model: Trial-by-trial data were fitted to a binomial categorization (go vs. no-go) using a generalized linear model implemented with the caret package in R (Kuhn, 2008). Variable importance was evaluated using the caret function varImp. To evaluate the relationship between single eye-movement predictors (position error and targeting-saccade latency) and behavior (go vs. no-go), we report the accuracy of all cross-validation iterations and Cohen's unweighted kappa, a measure of agreement for categorical prediction (Kuhn, 2008).
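The published model was fitted with caret in R; as a rough MATLAB analogue of the same idea (fitglm, Statistics and Machine Learning Toolbox; all variable names hypothetical), the core fit is a binomial generalized linear model over per-trial eye-movement measures:

```matlab
% posError, sacLatency, and nSaccades are hypothetical per-trial column
% vectors; isGo is a logical vector marking observed go responses.
tbl = table(posError, sacLatency, nSaccades, isGo);
mdl = fitglm(tbl, 'isGo ~ posError + sacLatency + nSaccades', ...
             'Distribution', 'binomial');  % logistic regression, go vs. no-go
pGo = predict(mdl, tbl);                   % fitted per-trial go probability
```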
Choice probability
To simulate decision outcomes based on eye-movement behavior in EyeStrike, we adopted a method based on signal-detection theory (Green & Swets, 1966). In this framework, the paradigm can be viewed as a two-alternative choice task in which hand movement (go vs. no-go) is the decision outcome and eye-movement behavior is the measure used to predict the response. To evaluate the validity of eye movements as a decision predictor, we calculated receiver operating characteristic (ROC) curves. The method to calculate the ROC curve of a given eye-movement measure was as follows: We divided the range between the minimum and maximum measured value into 100 equal steps. We then formulated a set of decision rules: We predicted that observers would move their hands based on a continuous criterion starting with the maximum value (100% go prediction) and decreasing by equal step sizes until the minimum value was reached (100% no-go prediction). Choice probability can then be evaluated by calculating the area under the curve (AUC), which yields an estimate of the probability that the observer's behavior (go vs. no-go) has been predicted correctly (Bamber, 1975; Kang & Maunsell, 2012). The AUC was calculated by trapezoidal numerical integration with the Matlab function trapz. Permutation tests were used to assess the significance of choice-probability data (Mayo & Sommer, 2013).
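Under these definitions, each observer's ROC curve and AUC can be computed in a few lines. The sketch below assumes x holds the per-trial value of one eye-movement measure and isGo marks observed go responses (both hypothetical names), and that larger values of the measure signal "go"; for measures where smaller values signal "go" (e.g., targeting-saccade latency), the comparison would be reversed.

```matlab
% Sweep a decision criterion in 100 equal steps between the maximum and
% minimum measured value; at each step, predict "go" for trials at or above
% the criterion and score the prediction against observed behavior.
crit    = linspace(max(x), min(x), 100);
hitRate = arrayfun(@(c) mean(x(isGo)  >= c), crit);  % correctly predicted go
faRate  = arrayfun(@(c) mean(x(~isGo) >= c), crit);  % go predicted on no-go trials
auc     = trapz(faRate, hitRate);  % area under the ROC curve (choice probability)
```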
Results
We related eye movements to rapid go/no-go decisions using our EyeStrike paradigm, in which a visual stimulus passed or missed a designated strike box. We assessed interceptive hand movements (decision outcome) and eye movements under different levels of uncertainty and motion-signal strength. Uncertainty was manipulated by varying task difficulty; different target launch angles resulted in trajectories that either clearly or closely passed or missed the strike box, resulting in easy or hard trajectories (Figure 1B). Motion-signal strength was manipulated by limiting target-viewing duration. Different combinations of viewing durations and target speeds resulted in visible trajectory lengths ranging from short (3.6°), producing a noisy or weak motion estimate, to long (12.3°), producing a less noisy, or strong, motion-signal estimate. 
Decision-making accuracy
We defined decision-making accuracy as the ratio of correct go and no-go decisions across all trials for each observer. Observers across groups performed well in EyeStrike, with 82.3% (SD = 1.2%) average decision accuracy. Overall, decision accuracy was higher for pass trajectories (M = 90.4% ± 0.8%) than for miss trajectories (M = 74.1% ± 2.0%), t(44) = 6.86, p < 0.001 (compare data points in light gray vs. dark gray shaded areas in Figure 2A). Observers made more accurate decisions for easy trajectories (M = 90.0% ± 1.3%) compared to hard ones (M = 74.7% ± 1.3%; Figure 2B), reflected in a significant main effect of difficulty, F(1, 43) = 466.0, p < 0.001. Observers also made more accurate decisions with increasing visible trajectory length: Decision accuracy varied between 76.3% ± 1.5% for the shortest trajectory and 83.5% ± 1.4% for the longest trajectory, reflected in a main effect of strength, F(5, 215) = 20.8, p < 0.001 (Figure 2C). This improvement with increasing motion-signal strength was stronger in easy than in difficult trials, reflected in a significant Difficulty × Strength interaction, F(5, 215) = 9.4, p < 0.001. 
Figure 2
Decision-making accuracy in EyeStrike. (A) Decision accuracy (ratio of all correct go and no-go responses to all trials) and incorrect responses for each launch angle; each open circle reflects the average for one observer; horizontal lines represent group averages (N = 45). (B) Decision accuracy for easy compared to hard decisions. Open circles reflect the observer average, separated by baseball players (black) and nonathletes (purple). (C) Decision accuracy for short, medium, and long target presentations. Averages of all observers are indicated by bar height.
Eye-movement signatures in pass and miss trials
Briefly presented moving targets in EyeStrike reliably elicited a combination of smooth-pursuit and saccadic eye movements. In response to target motion onset, observers either initiated smooth pursuit (73% of all trials) or maintained fixation and then initiated a saccade toward the target. Observers strongly anticipated the target's motion direction and initiated pursuit rapidly (mean pursuit latency: 29 ± 4 ms), thus maximizing pursuit in the presence of ultrashort presentation durations. The target disappeared after 100–300 ms and traveled for approximately 800 ms before entering the strike box (see time markers in Figure 3A). For go responses (Figure 3A, upper panels), observers tended to follow the target closely with their eyes until making a targeting saccade into the strike box. For no-go responses, observers' eye movements followed the trajectory toward the strike-box corner where they assumed the target would miss, using a combination of pursuit and saccades (Figure 3A, lower panels). In interception trials (correct/incorrect go), observers initiated their hand movement with a latency of 416 ms (SD = 8 ms) on average, and they intercepted close to the final eye position (Euclidean distance between eye and finger at time of interception: 2.5° ± 0.5°). Observers' interception positions were clustered around the actual target position, even though they were instructed to hit anywhere inside the strike box (Figure 3B).
Figure 3
Eye- and hand-movement behavior in EyeStrike. (A) 2-D eye position from a single representative observer for four possible trial outcomes, showing tracking of the target (gray solid line) with a combination of smooth pursuit (solid colored lines) and saccades (dashed colored lines). The time course of the trial is indicated by 100-ms time stamps. Each circle marks the end of a 100-ms interval (filled black circles: target visible; filled gray circles: target has disappeared). In go responses (upper panels), observers moved their hand (gray trajectory) to intercept (red disk) the target inside the strike box. In no-go responses (lower panels), observers withheld a hand movement. Colors indicate correct go (green), correct no-go (blue), and incorrect decisions (red). (B) Heat map of all interception trials across observers. Total number of interceptions in each square was counted and is indicated by color. Observers' hand movements naturally curved toward the lower half of the strike box.
Decoding decisions from eye movements
In order to decide whether or not to initiate a hand movement, observers had to discriminate trajectories as either pass or miss. Figure 4 shows absolute eye position, plotted relative to the target trajectory, for two representative observers (Figure 4A and 4B) and averaged across all observers (Figure 4C). Eye position differed clearly between go and no-go responses (compare green and blue traces). In incorrect trials (in which observers either moved their hand to intercept a target that missed the strike box or withheld a hand movement when the target passed the strike box), eye positions followed a path in between pass and miss trajectories, heading toward the corners of the strike box (see average final vertical eye positions in Figure 4C). In incorrect trials, eye movements may therefore reflect observers' indecision as to whether the target would pass or miss the strike box.
Figure 4
2-D eye position relative to target trajectories. (A) Eye position for Subject 4 (baseball player) and (B) eye position for Subject 22 (nonathlete). In both panels, each line represents a single trial (384 per observer). Eye position followed the target trajectories (thin gray lines) for go (green lines) compared to no-go responses (blue lines). Eye position in incorrect trials (go and no-go) falls between pass and miss trajectories (red lines). (C) Eye position during the first 750 ms of each trial averaged across all observers (N = 45). Filled circles indicate final vertical eye position for all correct and incorrect decision outcomes.
Next, we investigated which eye-movement parameters best captured the observed differences between pass and miss trials and might therefore indicate decision outcome. We analyzed standard smooth-pursuit measures (relative velocity, or gain; absolute and relative eye-position error) and saccade measures (number, latency, and amplitude of the initial and targeting saccades). To select the eye-movement parameters that best reflected decision outcome, we included all extracted pursuit and saccade measures in a logistic regression model. The model identified relative (horizontal) eye-position error (κ = 0.25) and latency of the targeting saccade (κ = 0.28) as the best predictive measures for go/no-go responses. Both these measures are related to the timing of the eye movement rather than the absolute spatial position of the target.
Horizontal eye-position error across all trials was significantly more positive during go (M = 1.55 ± 0.22) compared to no-go responses (M = 0.01 ± 0.19), t(44) = 8.00, p < 0.001 (Figure 5A), indicating that the eye tended to be ahead of the target when a go decision was made. Observers made fewer saccades overall in trials in which they decided to go (go: M = 2.46 ± 0.06 vs. no-go: M = 2.89 ± 0.07), t(44) = 8.96, p < 0.001, indicating smoother tracking, and initiated the targeting saccade earlier than for no-go responses (go: M = 558 ± 9 ms vs. no-go: M = 700 ± 11 ms), t(44) = 14.2, p < 0.001 (Figure 5B). The observed time-binned frequency of targeting saccades indicates that this eye-movement measure differentiated between go and no-go responses starting at 300–350 ms after target onset (Figure 5B). At the 450-ms time point, a targeting saccade was approximately four times more likely to have occurred in a go trial than in a no-go trial, indicating clearly different saccade-pattern signatures for different decision outcomes.
Figure 5
Eye-movement measures during go (green) and no-go (blue) responses. (A) Frequency of average position error across all trials and observers. Vertical lines indicate the group average for go and no-go responses. (B) Frequency of targeting saccades initiated at a given time with respect to stimulus onset. Both panels are for N = 45.
Accuracy of eye-movement-based decision indicators
The observed differences in eye movements between go and no-go responses might allow us to read out decision outcomes based on either of the two eye-movement parameters identified as best predictors by the regression model. We applied a method adopted from signal-detection theory (Green & Swets, 1966), which has also been used to decode decision outcomes from single-neuron activity in visual-discrimination tasks (Britten, Shadlen, Newsome, & Movshon 1992; Celebrini & Newsome, 1994; Kang & Maunsell, 2012; Yang et al., 2010). Following this method, EyeStrike can be viewed as a two-alternative forced-choice task in which hand movement (go vs. no-go) is the decision outcome and eye-movement behavior is the measure used to detect the response. For each trial, the model can either correctly predict a go (hit) or no-go response (correct rejection) or incorrectly predict a go (false alarm) or no-go response (miss). The configuration of correct versus incorrect predictions depends on the chosen decision criterion. With a conservative criterion we predict a no-go response in most trials. This will yield a low number of incorrect go predictions (false alarms), but it will also lead to relatively few correct go predictions (hits). Conversely, a liberal criterion will lead to a high hit rate, but also to many false alarms. With a continuously changing decision criterion we can calculate ROC curves (Green & Swets, 1966) for each observer reflecting the trade-off between prediction success (hit) and error (false alarm) on a trial-by-trial basis. We calculated the AUC for each observer's ROC curve to obtain an estimate of the goodness of the model's prediction of individual observers' go/no-go responses. An AUC of 100% indicates that eye movements perfectly reflect go/no-go decisions; an AUC of 50% is equivalent to a random prediction or chance. 
Figure 6A and 6B shows individual ROC curves for Subjects 16 (baseball player) and 45 (nonathlete), separated by task difficulty. Subject 16's go/no-go response could be predicted with 98% accuracy using eye-position error as the decision criterion, whereas targeting-saccade latency as the criterion yielded predictions of ≥85% accuracy. Conversely, Subject 45's decision outcome was best predicted by targeting-saccade latency (82% accuracy). These representative examples illustrate that eye-position error was the better predictor for some observers (n = 23), whereas targeting-saccade latency was the more sensitive predictor for others (n = 22). We formed two subgroups of observers based on which of the two predictors was more sensitive, and calculated ROC curves across observers within each group while taking task difficulty into account (Figure 6C). For both groups, go/no-go responses could be predicted well above chance (mean AUC: 76%). Importantly, predictions were above chance for each of the tested 45 observers (range: 60%–98% in measured data vs. 46%–54% in permutation test), t(45.6) = 25.77, p < 0.001. Predictions were overall more accurate for easy (mean AUC: 77%) compared to hard trajectories (mean AUC: 74%), t(44) = 3.23, p = 0.002. Predictions were also more accurate the more reliable the target's motion signal was (Figure 6D), increasing from 74% for the shortest trajectory to 80% for the longest trajectory, t(44) = 5.36, p < 0.001. The finding that our decision prediction based on observers' eye movements increases with motion-signal strength indicates that eye movements reflect the accumulation of sensory evidence over time. Next we related each observer's AUC to his decision accuracy. We observed a strong positive relationship between AUC and decision accuracy across different levels of task difficulty (Figure 6E) and signal strength (Figure 6F). Taken together, these results suggest that eye movements are sensitive indicators of decision outcome and differentiate between decisions based on task difficulty and signal strength. 
Figure 6
Decoding decision outcome from eye-movement parameters. (A) Receiver operating characteristic (ROC) curve for a representative observer (baseball player) for whom decision outcome was modeled more accurately by eye-position error (orange). (B) ROC curve for another representative observer (nonathlete) for whom final saccade latency was the better decision predictor (black). (C) Averaged ROC curves across observers whose decision outcome was better predicted by eye-position error (n = 23; orange) versus final saccade latency (n = 22; black). Curves are shown separately for easy (dashed) and hard (solid) trajectories. (D) Averaged group ROC curves separated for long (dashed) and short (solid) target presentations. (E) Relationship between decision accuracy and each observer's area under the curve separated by easy (open circles; dashed regression fit) and hard (filled circles; solid regression fit) target trajectories. Each data point depicts the per-observer average. (F) Relationship between decision accuracy and each observer's area under the curve separated by long (open circles; dashed regression fit) and short (filled circles; solid regression fit) target presentations.
Decision making in varsity baseball players versus nonathletes
We tested two populations of observers, college varsity-level baseball players and nonathletes. Both groups were similar in terms of general eye-movement accuracy (no significant main effect of player on any of the reported eye measures). However, decision accuracy was significantly higher for varsity baseball players (M = 85.5% ± 1.0%) than for nonathletes (M = 78.0% ± 2.1%; compare black with purple data points in Figure 2B and 2C). This result was reflected in a significant main effect of player, F(1, 43) = 12.0, p = 0.001, in a repeated-measures analysis of variance. Correspondingly, decision predictions using the ROC model were higher for baseball players (mean AUC: 82%) than for nonathletes (mean AUC: 75%), t(33.95) = 3.05, p = 0.004. Decision-prediction accuracy in both groups was equally affected by difficulty and signal strength (Table 1). 
Table 1
Decision-prediction accuracy (area under the curve; group average ± SD) for baseball players and nonathletes separated by difficulty (easy vs. hard) and signal strength (strong vs. weak).
Discussion
We developed a rapid interception task that allowed us to systematically evaluate eye movements during go/no-go decisions. Our key findings are that eye movements systematically differed between go and no-go responses, and that these differences could be read out prior to the choice response, thus predicting decision outcome. Prediction accuracy was related to observers' decision accuracy under different levels of task difficulty and motion-signal strength. These results go beyond merely predicting whether or not the hand will move and suggest that human eye movements can be used to sensitively decode and predict decision outcome under different sensory and task constraints. 
In EyeStrike, observers naturally viewed a visual target that followed either a pass or miss trajectory (stimulus space) and indicated their choice by initiating or withholding a hand movement (decision outcome). Stimulus space and decision outcome are linked by an internal machinery that processes sensory information and forms an associated motor command (Gold & Shadlen, 2007; Heekeren et al., 2008; Platt, 2002). In EyeStrike there are two possible choices—go and no-go—which could be reflected in two distinct internal states. However, if the choice is difficult or less reliable—for example, the ball passes or misses close to the corner of the strike box or is visible for a very short time—the two internal states may overlap, potentially causing decision errors. We found that eye movements in incorrect decision trials followed a path in between pass and miss trajectories, and in between eye movements made during correct go and no-go choices (Figure 4C). These results indicate that eye movements not only reflect the decision outcome but might also indicate an observer's internal decision state and the confidence with which a decision is reached. 
Decision accuracy in behavioral visual-discrimination tasks is typically related to task difficulty and signal strength (or noise level); for example, motion-discrimination performance scales with motion coherence (Britten et al., 1992; Lappin & Bell, 1976). Congruently, task difficulty shapes neural activity during decision making. Single-unit recordings in macaque monkeys have shown that neural sensitivity in the middle temporal visual area (Britten et al., 1992) and superior colliculus (Basso & Wurtz, 1997; Horwitz & Newsome, 2001) is closely related to perceptual-discrimination performance. Interestingly, subsets of neurons in the supplementary eye field and frontal eye field take longer to decode more difficult perceptual decisions (300–475 ms) compared to easy decisions (175–190 ms) but reflect decision outcome sensitively regardless of level of difficulty (Yang et al., 2010; Yang & Heinen, 2014). Importantly, the accuracy of predicting decision outcomes based on neural recordings increases with increasing motion-signal strength (Britten et al., 1992; Horwitz & Newsome, 2001) and decreasing task difficulty (Yang et al., 2010). Moreover, saccades evoked by frontal-eye-field microstimulation during perceptual decision making deviate toward the stimulus motion direction. These deviations scale with stimulus signal strength, indicating shared processing of decision formation and oculomotor response (Gold & Shadlen, 2000, 2003).
Similarly, studies in humans have found that easy compared to difficult visual-categorization decisions elicited a greater blood-oxygen-level-dependent response in left dorsolateral prefrontal cortex (Heekeren, Marrett, Bandettini, & Ungerleider, 2004). Single-trial electroencephalographic analysis has revealed a decision-difficulty component evolving at around 220 ms after stimulus presentation for easy compared to difficult visual-categorization decisions (Philiastides, 2006). The present results, obtained in a large sample of human observers, suggest that eye movements might sensitively reflect task difficulty and signal strength as well: Model predictions (AUC) were more accurate for easy compared to hard trajectories and for targets that were visible for a longer period of time (stronger signal). An increase in motion-signal strength (i.e., higher coherence, higher contrast, or longer visibility) generally boosts the decision signal, hence potentially strengthening the predictive accuracy of the eye-movement signature.
Our findings are also closely related to evidence showing that eye movements can be modulated by decision formation, and that decision making and motor output are closely related. For example, neural-population activity in the motor cortex measured using magnetoencephalography has been shown to gradually build up several seconds before execution of a choice response, and to be usable to read out and predict observers' choices in a yes/no motion-detection task (Donner, Siegel, Fries, & Engel, 2009; Pape & Siegel, 2016). Decision-related modulation has also been found during motor execution. In an earlier study, when a hand movement was perturbed just prior to the choice response, the muscular reflex gain of the perturbed arm was modulated by motion-coherence strength, reflecting ongoing decision formation (Selen, Shadlen, & Wolpert, 2012). Similarly, saccades indicating choice in a direction-discrimination task have been shown to be initiated earlier and to deviate farther away from the nonselected target with increasing levels of motion coherence—that is, stronger decision signals (McSorley & McCloy, 2009). In another recent study, task-unrelated visually guided saccades, performed in between a visual discrimination and a button-press response, were initiated earlier and faster in the direction congruent with the decision, but they were not modulated if observers viewed the moving stimulus passively, thus directly linking them to the decision (Joo et al., 2016). Taken together, these findings suggest that decision-related processes continuously interact with motor planning and execution. 
Eye movements in natural behavior are characterized by task demands and action goals. Many studies have shown convincingly that the eye leads the hand in tasks related to pointing, hitting, catching, or any kind of object-handling behavior (Bekkering, Adam, Kingma, Huson, & Whiting, 1994; Belardinelli, Stepper, & Butz, 2016; Johansson et al., 2001; Land et al., 1999; Mrotek & Soechting, 2007). Congruently, there is strong behavioral (Chen, Valsecchi, & Gegenfurtner, 2016; Danion & Flanagan, 2018; Fooken et al., 2016; Leclercq, Blohm, & Lefèvre, 2013) and neurophysiological (Andersen & Cui, 2009; Crawford, Medendorp, & Marotta, 2004; Dean, Hagan, & Pesaran, 2012; Hwang, Hauschild, Wilke, & Andersen, 2014; Snyder, Calton, Dickinson, & Lawrence, 2002) evidence for interdependency between eye and hand movements, via either common control or a parallel and coordinated mechanism. Our time-critical decision task reveals different eye-movement dynamics in go versus no-go responses with regard to each trial's targeting saccade. In go responses, this saccade occurred significantly earlier, thus allowing the necessary time for planning an accurate manual interception. In no-go responses, in which the hand movement had to be inhibited, the targeting saccade was commonly directed at the corner of the strike box. This saccade had no role in guiding the hand but might have provided important visual information confirming observers' perceptual decision. Eye movements therefore directly reflect the behavioral consequences of a perceptual decision.
Conclusion
Previous research has shown that decision-related neural responses can be used to read out an observer's intention even before a choice response is made. Here we show that eye movements carry a decision signature that is sensitive to task difficulty and sensory-signal strength and relates to observers' decision accuracy. Eye movements can be viewed as a continuous readout of ongoing sensorimotor processes and can be studied to further our understanding of perception and cognition in naturalistic tasks (Huk, Bonnen, & He, 2018). Even though our results were obtained using a head-restrained paradigm, equivalent eye-movement behavior (i.e., initial tracking followed by a predictive saccade) is commonly observed in head-unrestrained virtual-reality or real-world settings (Bahill & LaRitz, 1984; Hayhoe, 2017; Land & McLeod, 2000). Our paradigm introduces ecological validity by allowing unrestricted eye movements and by using a natural hand movement to indicate the choice response. The findings presented here might therefore generalize to decision making in the real world, such as batting in cricket or baseball. Understanding how humans make decisions in real-world tasks can thus be significantly aided by evaluating eye-movement responses. Our findings provide a direct link between neural decision signatures and continuous eye-movement responses, demonstrating that eye movements can serve as sensitive indicators of neural function without directly recording brain activity.
Acknowledgments
This work was supported by NSERC Discovery and Accelerator Grants (RGPIN 418493) and a Canada Foundation for Innovation John R. Evans Leaders Fund equipment grant to MS. The authors thank Rose Shannon for help with data collection and preprocessing, and Anna Montagnini, Patrick Mayo, Alexander Goettker, Dinesh Pai, and members of the Spering lab for comments on the manuscript. 
Commercial relationships: none. 
Corresponding author: Jolande Fooken. 
Address: Visual Performance and Oculomotor Mobility Lab, Vancouver, Canada. 
References
Andersen, R. A., & Cui, H. (2009). Intention, action planning, and decision making in parietal-frontal circuits. Neuron, 63 (5), 568–583.
Bahill, A. T., & LaRitz, T. (1984). Why can't batters keep their eyes on the ball? American Scientist, 72, 249–253.
Bamber, D. (1975). The area above the ordinal dominance graph and the area below the receiver operating characteristic graph. Journal of Mathematical Psychology, 12 (4), 387–415.
Basso, M. A., & Wurtz, R. H. (1997). Modulation of neuronal activity by target uncertainty. Nature, 389 (6646), 66–69.
Bekkering, H., Adam, J. J., Kingma, H., Huson, A., & Whiting, H. T. (1994). Reaction time latencies of eye and hand movements in single- and dual-task conditions. Experimental Brain Research, 97 (3), 471–476.
Belardinelli, A., Stepper, M. Y., & Butz, M. V. (2016). It's in the eyes: Planning precise manual actions before execution. Journal of Vision, 16 (1): 18, 1–18, https://doi.org/10.1167/16.1.18. [PubMed] [Article]
Bennur, S., & Gold, J. I. (2011). Distinct representations of a perceptual decision and the associated oculomotor plan in the monkey lateral intraparietal area. The Journal of Neuroscience, 31 (3), 913–921.
Brainard, D. H. (1997). The psychophysics toolbox. Spatial Vision, 10, 433–436.
Brenner, E., & Smeets, J. B. J. (2017). Accumulating visual information for action. Progress in Brain Research, 236, 75–95.
Britten, K. H., Shadlen, M. N., Newsome, W. T., & Movshon, J. A. (1992). The analysis of visual motion: A comparison of neuronal and psychophysical performance. The Journal of Neuroscience, 12 (12), 4745–4765.
Celebrini, S., & Newsome, W. T. (1994). Neuronal and psychophysical sensitivity to motion signals in extrastriate area MST of the macaque monkey. The Journal of Neuroscience, 14 (7), 4109–4124.
Chen, J., Valsecchi, M., & Gegenfurtner, K. R. (2016). Role of motor execution in the ocular tracking of self-generated movements. Journal of Neurophysiology, 116 (6), 2586–2593.
Crapse, T. B., Lau, H., & Basso, M. A. (2018). A role for the superior colliculus in decision criteria. Neuron, 97 (1), 181–194.
Crawford, J. D., Medendorp, W. P., & Marotta, J. J. (2004). Spatial transformations for eye-hand coordination. Journal of Neurophysiology, 92 (1), 10–19.
Danion, F. R., & Flanagan, J. R. (2018). Different gaze strategies during eye versus hand tracking of a moving target. Scientific Reports, 8 (1), 1–9.
Dean, H. L., Hagan, M. A., & Pesaran, B. (2012). Only coherent spiking in posterior parietal cortex coordinates looking and reaching. Neuron, 73 (4), 829–841.
de Brouwer, S., Yuksel, D., Blohm, G., Missal, M., & Lefèvre, P. (2002). What triggers catch-up saccades during visual tracking? Journal of Neurophysiology, 87 (3), 1646–1650.
Ding, L., & Gold, J. (2013). The basal ganglia's contributions to perceptual decision making. Neuron, 79 (4), 640–649.
Donner, T. H., Siegel, M., Fries, P., & Engel, A. K. (2009). Buildup of choice-predictive activity in human motor cortex during perceptual decision making. Current Biology, 19 (18), 1581–1585.
Fooken, J., Yeo, S.-H., Pai, D. K., & Spering, M. (2016). Eye movement accuracy determines natural interception strategies. Journal of Vision, 16 (14): 1, 1–15, https://doi.org/10.1167/16.14.1. [PubMed] [Article]
Gold, J. I., & Shadlen, M. N. (2000). Representation of a perceptual decision in developing oculomotor commands. Nature, 404 (6776), 390–394.
Gold, J. I., & Shadlen, M. N. (2003). The influence of behavioral context on the representation of a perceptual decision in developing oculomotor commands. The Journal of Neuroscience, 23 (2), 632–651.
Gold, J. I., & Shadlen, M. N. (2007). The neural basis of decision making. Annual Review of Neuroscience, 30, 535–574.
Green, D., & Swets, J. (1966). Signal detection theory and psychophysics. New York: Wiley.
Hanks, T. D., & Summerfield, C. (2017). Perceptual decision making in rodents, monkeys, and humans. Neuron, 93 (1), 15–31.
Hayhoe, M. M. (2017). Vision and action. Annual Review of Vision Science, 3, 1–25.
Hayhoe, M. M., McKinney, T., Chajka, K., & Pelz, J. B. (2012). Predictive eye movements in natural vision. Experimental Brain Research, 217 (1), 125–136.
Heekeren, H. R., Marrett, S., Bandettini, P. A., & Ungerleider, L. G. (2004). A general mechanism for perceptual decision-making in the human brain. Nature, 431 (7010), 859–862.
Heekeren, H. R., Marrett, S., & Ungerleider, L. G. (2008). The neural systems that mediate human perceptual decision making. Nature Reviews Neuroscience, 9 (6), 467–479.
Heinen, S. J., Rowland, J., Lee, B.-T., & Wade, A. R. (2006). An oculomotor decision process revealed by functional magnetic resonance imaging. The Journal of Neuroscience, 26 (52), 13515–13522.
Horwitz, G. D., & Newsome, W. T. (2001). Target selection for saccadic eye movements: Prelude activity in the superior colliculus during a direction-discrimination task. Journal of Neurophysiology, 86 (5), 2543–2558.
Huk, A., Bonnen, K., & He, B. J. (2018). Beyond trial-based paradigms: Continuous behavior, ongoing neural activity, and natural stimuli. The Journal of Neuroscience, 38 (35), 7551–7558.
Hwang, E. J., Hauschild, M., Wilke, M., & Andersen, R. A. (2014). Spatial and temporal eye-hand coordination relies on the parietal reach region. The Journal of Neuroscience, 34 (38), 12884–12892.
Johansson, R. S., Westling, G., Bäckström, A., & Flanagan, J. R. (2001). Eye-hand coordination in object manipulation. The Journal of Neuroscience, 21 (17), 6917–6932.
Joo, S. J., Katz, L. N., & Huk, A. C. (2016). Decision-related perturbations of decision-irrelevant eye movements. Proceedings of the National Academy of Sciences, USA, 113 (7), 1925–1930.
Kang, I., & Maunsell, J. H. R. (2012). Potential confounds in estimating trial-to-trial correlations between neuronal response and behavior using choice probabilities. Journal of Neurophysiology, 108 (12), 3403–3415.
Kim, Y.-G., Badler, J. B., & Heinen, S. J. (2005). Trajectory interpretation by supplementary eye field neurons during ocular baseball. Journal of Neurophysiology, 94 (2), 1385–1391.
Kleiner, M., Brainard, D., Pelli, D., Ingling, A., Murray, R., & Broussard, C. (2007). What's new in Psychtoolbox-3. Perception, 36, 1–16.
Kuhn, M. (2008). Building predictive models in R using the caret package. Journal of Statistical Software, 28 (5), 1–26.
Land, M. F., & McLeod, P. (2000). From eye movements to actions: How batsmen hit the ball. Nature Neuroscience, 3 (12), 1340–1345.
Land, M., Mennie, N., & Rusted, J. (1999). The roles of vision and eye movements in the control of activities of daily living. Perception, 28 (11), 1311–1328.
Lappin, J. S., & Bell, H. H. (1976). The detection of coherence in moving random-dot patterns. Vision Research, 16 (2), 161–168.
Leclercq, G., Blohm, G., & Lefèvre, P. (2013). Accounting for direction and speed of eye motion in planning visually guided manual tracking. Journal of Neurophysiology, 110 (8), 1945–1957.
Liu, T., & Pleskac, T. J. (2011). Neural correlates of evidence accumulation in a perceptual decision task. Journal of Neurophysiology, 106 (5), 2383–2398.
Mayo, J. P., & Sommer, M. A. (2013). Neuronal correlates of visual time perception at brief timescales. Proceedings of the National Academy of Sciences, USA, 110 (4), 1506–1511.
McSorley, E., & McCloy, R. (2009). Saccadic eye movements as an index of perceptual decision-making. Experimental Brain Research, 198 (4), 513–520.
Mrotek, L. A., & Soechting, J. F. (2007). Target interception: Hand-eye coordination and strategies. The Journal of Neuroscience, 27 (27), 7297–7309.
Pape, A.-A., & Siegel, M. (2016). Motor cortex activity predicts response alternation during sensorimotor decisions. Nature Communications, 7: 13098, 1–10.
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442.
Philiastides, M. G. (2006). Neural representation of task difficulty and decision making during perceptual categorization: A timing diagram. The Journal of Neuroscience, 26 (35), 8965–8975.
Pho, G. N., Goard, M. J., Woodson, J., Crawford, B., & Sur, M. (2018). Task-dependent representations of stimulus and choice in mouse parietal cortex. Nature Communications, 9: 2596, 1–16.
Platt, M. L. (2002). Neural correlates of decisions. Current Opinion in Neurobiology, 12 (2), 141–148.
Schall, J. D. (2013). Macrocircuits: Decision networks. Current Opinion in Neurobiology, 23 (2), 269–274.
Selen, L. P. J., Shadlen, M. N., & Wolpert, D. M. (2012). Deliberation in the motor system: Reflex gains track evolving evidence leading to a decision. The Journal of Neuroscience, 32 (7), 2276–2286.
Shadlen, M. N., & Newsome, W. T. (1996). Motion perception: Seeing and deciding. Proceedings of the National Academy of Sciences, USA, 93 (2), 628–633.
Smeets, J. B., Hayhoe, M. M., & Ballard, D. H. (1996). Goal-directed arm movements change eye-head coordination. Experimental Brain Research, 109 (3), 434–440.
Snyder, L. H., Calton, J. L., Dickinson, A. R., & Lawrence, B. M. (2002). Eye-hand coordination: Saccades are faster when accompanied by a coordinated arm movement. Journal of Neurophysiology, 87 (5), 2279–2286.
Yang, S., & Heinen, S. (2014). Contrasting the roles of the supplementary and frontal eye fields in ocular decision making. Journal of Neurophysiology, 111 (12), 2644–2655.
Yang, S., Hwang, H., Ford, J., & Heinen, S. (2010). Supplementary eye field activity reflects a decision rule governing smooth pursuit but not the decision. Journal of Neurophysiology, 103 (5), 2458–2469.
Yates, J. L., Park, I. M., Katz, L. N., Pillow, J. W., & Huk, A. C. (2017). Functional dissection of signal and noise in MT and LIP during decision-making. Nature Neuroscience, 20 (9), 1285–1292.
Figure 1
 
Experimental procedure and design. (A) Observers were asked to fixate on a small black Gaussian dot (±15° from screen center). After 0.5–1 s, the target moved along a diagonal linear path and disappeared after being shown briefly (100–300 ms). Observers had to withhold a hand movement if the target missed a strike box (no-go) and intercept the target inside the strike box if it passed through (go). Observers received feedback about their interception position (red disk) in pass trials and about the target's final position (black X) in all trials. (B) Paradigm design. The target launched either upward or downward at one of four angles (5°, 7°, 10°, or 12°). Trajectories that passed or missed close to the corners of the strike box (7° and 10°) were more difficult.
Figure 2. Decision-making accuracy in EyeStrike. (A) Decision accuracy (ratio of all correct go and no-go responses to all trials) and incorrect responses for each launch angle; each open circle reflects the average for one observer; horizontal lines represent group averages (N = 45). (B) Decision accuracy for easy compared to hard decisions. Open circles reflect observer averages, separated into baseball players (black) and nonathletes (purple). (C) Decision accuracy for short, medium, and long target presentations. Bar height indicates the average across all observers.
Figure 3. Eye- and hand-movement behavior in EyeStrike. (A) 2-D eye position from a single representative observer for four possible trial outcomes, showing tracking of the target (gray solid line) with a combination of smooth pursuit (solid colored lines) and saccades (dashed colored lines). The time course of the trial is indicated by 100-ms time stamps. Each circle marks the end of a 100-ms interval (filled black circles: target visible; filled gray circles: target has disappeared). In go responses (upper panels), observers moved their hand (gray trajectory) to intercept (red disk) the target inside the strike box. In no-go responses (lower panels), observers withheld a hand movement. Colors indicate correct go (green), correct no-go (blue), and incorrect decisions (red). (B) Heat map of all interception trials across observers. Total number of interceptions in each square was counted and is indicated by color. Observers' hand movements naturally curved toward the lower half of the strike box.
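The heat map in panel B is, in effect, a two-dimensional histogram of interception endpoints. The following minimal sketch shows how such a map can be computed; the bin count and spatial extent are arbitrary choices, and the random data merely stand in for the real pooled endpoints.

import numpy as np

# Placeholder endpoint coordinates (deg) with a slight downward bias,
# standing in for the pooled interception positions across observers.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=1000)
y = rng.normal(-0.5, 1.0, size=1000)

# Count interceptions per square over an assumed strike-box extent (deg).
counts, x_edges, y_edges = np.histogram2d(x, y, bins=20,
                                          range=[[-3.0, 3.0], [-3.0, 3.0]])
# counts[i, j] holds the number of interceptions in square (i, j); rendering
# the array as an image with a color scale yields the heat map.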
Figure 4. 2-D eye position relative to target trajectories. (A) Eye position for Subject 4 (baseball player) and (B) eye position for Subject 22 (nonathlete). In both panels, each line represents a single trial (384 per observer). Eye position followed the target trajectories (thin gray lines) more closely in go responses (green lines) than in no-go responses (blue lines). Eye position in incorrect trials (both go and no-go) fell between the pass and miss trajectories (red lines). (C) Eye position during the first 750 ms of each trial, averaged across all observers (N = 45). Filled circles indicate final vertical eye position for all correct and incorrect decision outcomes.
Figure 5. Eye-movement measures during go (green) and no-go (blue) responses. (A) Frequency of average position error across all trials and observers. Vertical lines indicate the group average for go and no-go responses. (B) Frequency of targeting saccades initiated at a given time with respect to stimulus onset. Both panels show data from all observers (N = 45).
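Panel A's measure can be illustrated directly: if average position error is taken as the mean two-dimensional distance between eye and target over a trial's analysis interval, it reduces to a single numpy expression per trial. The array names and shapes below are assumptions for illustration.

import numpy as np

def average_position_error(eye_xy, target_xy):
    # eye_xy and target_xy: arrays of shape (n_samples, 2) holding 2-D eye
    # and target positions (deg) sampled over one trial's analysis interval.
    return np.linalg.norm(eye_xy - target_xy, axis=1).mean()

Computed once per trial, these values yield the go and no-go distributions whose frequencies are plotted in panel A.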
Figure 6. Decoding decision outcome from eye-movement parameters. (A) Receiver operating characteristic (ROC) curve for a representative observer (baseball player) for whom decision outcome was modeled more accurately by eye-position error (orange). (B) ROC curve for another representative observer (nonathlete) for whom final saccade latency was the better decision predictor (black). (C) Averaged ROC curves across observers whose decision outcome was better predicted by eye-position error (n = 23; orange) versus final saccade latency (n = 22; black). Curves are shown separately for easy (dashed) and hard (solid) trajectories. (D) Averaged group ROC curves separated for long (dashed) and short (solid) target presentations. (E) Relationship between decision accuracy and each observer's area under the curve separated by easy (open circles; dashed regression fit) and hard (filled circles; solid regression fit) target trajectories. Each data point depicts the per-observer average. (F) Relationship between decision accuracy and each observer's area under the curve separated by long (open circles; dashed regression fit) and short (filled circles; solid regression fit) target presentations.
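The ROC curves in this figure treat a single eye-movement measure, eye-position error or final saccade latency, as a graded predictor of the binary go/no-go response: sweeping a criterion across the predictor traces out hit rate against false-alarm rate, and the area under the resulting curve (AUC) summarizes prediction accuracy. The sketch below shows this generic threshold-sweep construction on simulated data; it is not the authors' exact decoding pipeline, and the variable names and effect size are assumptions.

import numpy as np

def roc_curve(scores, labels):
    # Sweep a criterion over the predictor in descending order;
    # labels are 1 for go and 0 for no-go trials.
    order = np.argsort(-scores)
    sorted_labels = labels[order]
    hits = np.cumsum(sorted_labels) / sorted_labels.sum()
    false_alarms = np.cumsum(1 - sorted_labels) / (1 - sorted_labels).sum()
    return (np.concatenate(([0.0], false_alarms)),
            np.concatenate(([0.0], hits)))

# Simulated predictor whose go-trial distribution is shifted by d' = 1
# relative to no-go trials (an arbitrary, illustrative separation).
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=400).astype(float)
scores = np.where(labels == 1,
                  rng.normal(1.0, 1.0, size=400),
                  rng.normal(0.0, 1.0, size=400))

false_alarms, hits = roc_curve(scores, labels)
# Area under the curve via the trapezoid rule.
auc = np.sum(np.diff(false_alarms) * (hits[1:] + hits[:-1]) / 2.0)
print(f"AUC = {auc:.2f}")

For a d′ of 1 between the two distributions, the expected AUC is Φ(1/√2) ≈ 0.76, so the printed value should fall near that level.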
Table 1. Decision-prediction accuracy (area under the curve; group average ± SD) for baseball players and nonathletes separated by difficulty (easy vs. hard) and signal strength (strong vs. weak).