Oculometric assessment of dynamic visual processing
Dorion B. Liston, Leland S. Stone
Journal of Vision, December 2014, Vol. 14(14):12. doi:10.1167/14.14.12
Abstract

Eye movements are the most frequent (∼3/s), shortest-latency (∼150–250 ms), and biomechanically simplest (one joint, no inertial complexities) voluntary motor behavior in primates, providing a model system to assess sensorimotor disturbances arising from trauma, fatigue, aging, or disease states. We have developed a 15-min behavioral tracking protocol consisting of randomized Rashbass (1961) step-ramp radial target motion to assess several aspects of the behavioral response to visual motion, including pursuit initiation, steady-state tracking, direction tuning, and speed tuning. We show how oculomotor data can be converted into direction- and speed-tuning oculometric functions, with large increases in efficiency over traditional button-press psychophysics. We also show how the latter two can be converted into standard visual psychometric thresholds. To assess our paradigm, we first tested for the psychometric criterion of repeatability, and report that our metrics are reliable across repeated sessions. Second, we tested for the psychometric criterion of validity, and report that our metrics show the anticipated changes as the motion stimulus degrades due to spatiotemporal undersampling. Third, we documented the distribution of these metrics across a population of 41 normal observers to provide a thorough quantitative picture of normal human ocular tracking performance, with practice and expectation effects minimized. Our method computes 10 metrics that quantify various aspects of the eye-movement response during a simple 15-min clinical test, which could be used as a screening or assessment tool for disorders affecting sensorimotor processing, including degenerative retinal disease; developmental, neurological or psychiatric disorders; strokes; and traumatic brain injury.

Introduction
Dynamic and peripheral visual processing remains more difficult to assess clinically than standard static foveal processing, due at least in part to a lack of a standard, quantitative, reliable, and efficient screening technique. Impairment in dynamic visual processing and smooth-pursuit tracking can stem from myriad causes, including lesions of extrastriate visual cortex (Blanke, Landis, Mermoud, Spinelli, & Safran, 2003; Dursteler, Wurtz, & Newsome, 1987; Newsome, Wurtz, Dursteler, & Mikami, 1985; Thurston, Leigh, Crawford, Thompson, & Kennard, 1988; Zihl, von Cramon, & Mai, 1983), cerebellar or brainstem damage (Handel, Thier, & Haarmeier, 2009; Nawrot & Rizzo, 1995, 1998; Thier, Bachor, Faiss, Dichgans, & Koenig, 1991), traumatic brain injury (Pelak & Hoyt, 2005; Suh et al., 2006), autism (Takarae, Minshew, Luna, Krisky, & Sweeney, 2004), Alzheimer's disease (Pelak & Hoyt, 2005), schizophrenia (Levin et al., 1988; Levy, Sereno, Gooding, & O'Driscoll, 2010), degenerative retinal disease (Turano & Wang, 1992), or pharmacological toxicity (Horton & Trobe, 1999; Rashbass, 1961; Winn, Liao, & Horton, 2007). The need for a readily available method to assess dynamic visual processing under clinical conditions has been noted (Pelak & Hoyt, 2005). The goal of this paper is to describe an eye–movement-based methodology that can quantify many aspects of human dynamic visual processing using a simple 15-min oculomotor task, noninvasive video-based eye tracking, and validated oculometric analysis techniques (Beutter & Stone, 1998; Kowler & McKee, 1987; Krukowski & Stone, 2005; Stone, Beutter, Eckstein, & Liston, 2009; Stone & Krauzlis, 2003). 
We describe here the 10 quantitative oculometrics derived from our test. By examining the eye-movement responses to a modified Rashbass (1961) step-ramp pursuit-tracking task, we can generate distinct performance measurements associated with pursuit initiation (latency and open-loop pursuit acceleration), steady-state tracking (gain, catch-up saccade amplitude, and the proportion of the steady-state response consisting of smooth movement), direction tuning (oblique effect amplitude, horizontal-vertical asymmetry, and direction noise), and speed tuning (speed responsiveness and noise). Our metrics for pursuit initiation are standard measures that quantify the vigor of movement onset (Leigh & Zee, 2006; Lisberger & Westbrook, 1985). Our steady-state tracking metrics include both standard measures of pursuit gain (Leigh & Zee, 2006; Robinson, 1965), as well as catch-up saccade amplitude (de Brouwer, Yuksel, Blohm, Missal, & Lefevre, 2002; Schalen, 1980) and the proportion of eye displacement consisting of smooth tracking. The other two sets of direction-tuning (Krukowski, Pirog, Beutter, Brooks, & Stone, 2003; Krukowski & Stone, 2005) and speed-tuning metrics can be converted directly to psychometric thresholds (Beutter & Stone, 1998; Kowler & McKee, 1987), without the need to perform a time-consuming motion-discrimination psychophysical task. The goals of the present paper are first, to determine test–retest reliability (i.e., are the measured oculometrics stable across sessions?), second, to demonstrate validity (i.e., do the oculometrics degrade as the motion energy in the visual stimulus degrades?), and third, to determine the extent to which the various oculometrics represent distinct pieces of information about neural function and to compare their median values from a baseline population of 41 normal observers with psychophysical data previously reported in the literature. 
Methods
Rashbass task
For all tasks, we used tailored variants of the classic Rashbass (1961) step-ramp paradigm. Subjects began each trial by fixating the target (a small red spot) in primary position and pressing a mouse button when ready (the test was self-paced). Subjects then fixated the red spot for a randomized duration drawn from a truncated exponential distribution (Luce, 1986; Palmer, Huk, & Shadlen, 2005) with a mean of 700 ms (minimum: 200 ms, maximum: 5000 ms) to defeat possible response strategies based on temporal expectation of motion onset (Luce, 1986; Nickerson & Burnham, 1969; Oswal, Ogden, & Carpenter, 2007). After the randomized delay interval had elapsed, the tracking target made a small step in a particular direction and immediately began moving back toward the initial fixation location. The step size was set such that the target crossed its original fixation location 200 ms after motion onset, thereby reducing the likelihood of an initial catch-up saccade (Rashbass, 1961). In all experiments, each session consisted of a total of only 180 trials to maintain a high level of alertness (Leigh & Zee, 2006) and to complete the test in a clinically reasonable amount of time (∼15 min). The distributions of possible directions and speeds differed across the three experiments presented here, so additional experiment-specific details are given below. All observers provided informed consent under protocols approved by the NASA Ames Research Center Human Research Institutional Review Board, and our methods adhered to the Declaration of Helsinki. For clarity, we use the degree symbol (°) to indicate motion direction (target or eye) along the fronto-parallel plane, and the abbreviation “deg” to indicate degrees of visual angle to quantify angular position and motion on the display screen. 
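As a concrete illustration of the step-ramp geometry (a sketch with our own function and variable names, not the authors' stimulus code), the initial step is simply the distance the target covers in 200 ms, placed opposite the motion direction so that the ramp carries the target back through fixation:

```python
import numpy as np

def rashbass_step(speed_deg_s, direction_deg, crossing_time_s=0.2):
    """Return the (x, y) step (deg) for a Rashbass step-ramp trial.

    The target steps opposite to its motion direction by the distance it
    will travel in `crossing_time_s`, so it re-crosses the original
    fixation location 200 ms after motion onset, discouraging an initial
    catch-up saccade.
    """
    step_size = speed_deg_s * crossing_time_s   # e.g., 20 deg/s * 0.2 s = 4 deg
    theta = np.deg2rad(direction_deg)           # motion direction in the fronto-parallel plane
    return -step_size * np.cos(theta), -step_size * np.sin(theta)

# Example: a 20 deg/s ramp moving rightward (0 deg) starts 4 deg to the left of fixation.
print(rashbass_step(20.0, 0.0))  # (-4.0, -0.0)
```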
Eye-movement recording
We sampled eye position at 240 Hz with an ISCAN video-based eye-tracker (ISCAN Inc., Burlington, MA). The eye-position traces were calibrated with six parameters (Beutter & Stone, 1998) fit to the raw digital values for fixations at nine screen locations within a Cartesian grid. This yielded an average precision of 0.32 deg (standard deviation of eye position during fixation across our 41-subject population), which provides an upper limit on the tracker noise that may have perturbed the measured values of our metrics. However, the shared variance across subjects between our 10 metrics and the eye-tracker precision was 4% on average, with the two noise metrics showing the highest proportion of shared variance, as expected (direction noise: 14%, speed noise: 7%). Thus, tracker noise only weakly impacted our results. Saccades were detected and removed from the raw eye-movement data by first applying a nonlinear median filter to strip the low-frequency components of the eye-velocity trace due to smooth tracking, and then correlating a saccade-shaped velocity template with the residual trace to find and delete saccadic movements of 0.2 deg or larger (Liston, Krukowski, & Stone, 2013). 
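A minimal sketch of this two-stage idea (our own implementation; the window length and velocity threshold are illustrative, and a simple amplitude threshold stands in for the saccade-shaped template correlation of the published algorithm):

```python
import numpy as np
from scipy.signal import medfilt

def remove_saccades(eye_vel, fs=240.0, medfilt_ms=75, vel_thresh=30.0):
    """Crude saccade removal from a 1-D eye-velocity trace (deg/s).

    1. A nonlinear median filter estimates the slow (smooth-tracking)
       component of eye velocity.
    2. Samples where the residual exceeds a velocity threshold are treated
       as saccadic and replaced by the smooth estimate.
    Thresholds here are illustrative, not the published values.
    """
    k = int(medfilt_ms / 1000.0 * fs) | 1      # odd kernel length in samples
    smooth = medfilt(eye_vel, kernel_size=k)   # low-frequency pursuit component
    residual = eye_vel - smooth                # high-frequency component (saccades + noise)
    saccadic = np.abs(residual) > vel_thresh
    cleaned = np.where(saccadic, smooth, eye_vel)
    return cleaned, saccadic
```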
Reliability experiment
Our first experiment measured across-session variability in six subjects for all metrics except the two speed-tuning metrics. Each session consisted of 180 tracking trials of the Rashbass (1961) step-ramp stimulus, corresponding to 180 directions sampled without replacement from 0° to 358° (in 2° increments) at a fixed speed of 20 deg/s. Stimuli were displayed on an Eizo FlexScan T966 60 Hz CRT monitor (Eizo Corporation, Hakusan, Japan) with a resolution of 1024 × 768 (at our viewing distance of 470 mm, pixels were 0.04 by 0.04 deg). Each subject completed five repetitions of the 15-min task over a period of less than three weeks, with the exception of one observer who completed only four repetitions. 
Validation experiment
Our second experiment tested whether our set of metrics could detect degradations of stimulus motion due to coarse spatiotemporal sampling of the motion trajectory. This experiment highlights another potential use of our methodology, assessing variability across stimulus conditions due to differences in display fidelity, as opposed to assessing variability across sessions or observers due to differences in human performance. In this experiment, we used the sensitivity of the oculomotor system to assay the perceived quality of a sampled motion stimulus (Kuroki, Nishi, Kobayashi, Oyaizu, & Yoshimura, 2006; Sweet, Stone, Liston, & Hebert, 2008). 
To provide well-controlled sampled motion, we used a laser galvanometer system (Krukowski & Stone, 2005) that back-projected a spot on a translucent screen with its trajectory sampled at one of seven temporal frequencies (30, 60, 80, 96, 120, 240, and 960 Hz) in one of two sampling conditions. Our sample-and-hold condition simulated the sampling properties of an LCD display, with the laser spot illuminated continuously as it stepped through the trajectory. Our sample-and-blank condition simulated the sampling properties of a CRT display, with the laser spot illuminated for only the first half of each sample. We adjusted intensity of the laser spot to match the temporal average luminance in both sampling conditions. This experiment used a standard Rashbass (1961) step-ramp stimulus moving horizontally to either the left or the right at one of five possible speeds (10, 20, 40, 60, and 80 deg/s), drawn randomly on each trial. 
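To make the two sampling regimes concrete, here is a toy simulation of the spot position under sample-and-hold versus sample-and-blank presentation (our own sketch; the actual stimuli were generated with a laser galvanometer, not software):

```python
import numpy as np

def sampled_trajectory(speed_deg_s, sample_hz, sim_hz=9600, duration_s=0.5, blank=False):
    """Position of a spot moving at constant speed, redrawn at `sample_hz`.

    sample-and-hold (blank=False): the spot stays lit at the last sampled
    position until the next sample (LCD-like).
    sample-and-blank (blank=True): the spot is lit only for the first half
    of each sample period (CRT-like); unlit samples are returned as NaN.
    """
    t = np.arange(0, duration_s, 1.0 / sim_hz)
    sample_times = np.floor(t * sample_hz) / sample_hz  # quantize time to the display sample
    pos = speed_deg_s * sample_times                    # position held within each sample
    if blank:
        phase = (t - sample_times) * sample_hz          # fraction of the current sample period
        pos = np.where(phase < 0.5, pos, np.nan)        # lit only for the first half
    return t, pos
```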
Six subjects (five naive) ran four to five experimental sessions consisting of two 180-trial blocks, one block for each sampling condition. For each subject, we collapsed the data across all sessions for each sampling condition, then computed the five metrics quantifying the vigor of pursuit initiation (latency and acceleration) and the quality of steady-state tracking (gain, saccade amplitude, and proportion smooth). The stimuli we used in the sampled motion experiment did not vary in direction (other than randomly left or right); thus, the validity of the three direction-tuning metrics was not examined in this experiment. For each subject, we normalized the data by subtracting out the subject's mean value to isolate the performance changes resulting from the stimulus differences. This allowed us to average across subjects while minimizing the variability caused by stimulus-independent intersubject differences in overall performance. 
Population baseline experiment
Our third experiment catalogued the full set of metrics for a baseline population of 41 subjects (19 female, age range: 20–56 years, median: 27; 35 of our 41 subjects had little or no prior experience as subjects in smooth-pursuit experiments). This experiment was designed to provide subjects with no prior information about the timing of motion onset, the direction of motion, or the speed of motion, to ensure that the oculomotor behavior was driven as much as possible by the visual stimulus properties of the moving target rather than by cognitive expectations (Kowler & McKee, 1987). On each trial, the target speed was randomly either 16, 18, 20, 22, or 24 deg/s. Target direction was randomly sampled without replacement from a uniform distribution from 0° to 358° in 2° increments. Stimuli were displayed on an Eizo FlexScan T966 as in the reliability study above. A scripted set of instructions was provided to each subject, given below: 
 

You will be performing a tracking task that will last for approximately fifteen minutes, consisting of 180 trials. At the beginning of each trial, you will see a small red spot appear in the center of the screen. When you are rigidly fixating the central spot, click the mouse to indicate that you're ready. After a randomized duration, the spot will make a small step away from the central fixation location in a randomized direction and will begin moving toward the original fixation location and then off toward the edge of the monitor. Track the motion of the red spot as best you can as long as it is visible.

 
We also recorded subject age and measured visual acuity using the Freiburg Visual Acuity software package (Bach, 1996). 
Measurements of pursuit initiation (INIT)
We used an automated “hinge” model (Adler, Bala, & Krauzlis, 2002; Krauzlis & Miles, 1996) to mark the onset of the pursuit movement. Because our tracking target moved in a fixed random direction, to increase our signal-to-noise ratio we measured pursuit onset using eye velocity along the direction of target motion, computed as the dot product of the horizontal and vertical velocity traces with the unit vector along the direction of target motion. The hinge consists of two consecutive line segments (baseline and response, each 100 ms in duration); pursuit latency was defined as the intersection point of the two line segments that minimized the root mean square (RMS) error with the observed data. We added constraints to the fitting algorithm to increase its robustness: we forced the baseline velocity to be zero, constrained the response acceleration to be positive, constrained the latency to be between 100 and 400 ms, and weighted the error function by a “recinormal” prior-probability distribution of latencies on a similar task (Carpenter, 1981; Carpenter & Williams, 1995) using M = 5.4 s−1 and SD = 2 s−1, based on an expected value of 185 ms (Krukowski & Stone, 2005). Our algorithm minimized the weighted error between the two-parameter hinge model and the velocity trace to yield our “pursuit latency” metric and our initial “pursuit acceleration” metric, which characterizes the open-loop pursuit response (Lisberger & Westbrook, 1985). 
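A simplified sketch of such a hinge fit (our own implementation of the stated constraints; the recinormal prior is approximated here as a normal density on the rate 1/latency using the quoted M and SD):

```python
import numpy as np
from scipy.stats import norm

def fit_hinge_latency(t_ms, vel, prior_mean=5.4, prior_sd=2.0):
    """Fit a two-segment 'hinge' to projected eye velocity (deg/s).

    The baseline segment (100 ms) is forced to zero velocity; the response
    segment (100 ms) is a line of non-negative slope through the hinge point.
    Candidate latencies are weighted by a prior on the rate 1/latency
    (approximating the recinormal latency distribution).
    Returns (latency_ms, acceleration_deg_s2).
    """
    best = (np.inf, None, None)
    for lat in range(100, 401):                  # candidate latencies, 100-400 ms
        pre = (t_ms >= lat - 100) & (t_ms < lat)
        post = (t_ms >= lat) & (t_ms < lat + 100)
        if pre.sum() < 2 or post.sum() < 2:
            continue
        dt = (t_ms[post] - lat) / 1000.0         # seconds since the hinge point
        accel = max(0.0, np.sum(dt * vel[post]) / np.sum(dt * dt))  # constrained LS slope
        resid = np.concatenate([vel[pre], vel[post] - accel * dt])
        rms = np.sqrt(np.mean(resid ** 2))
        weight = norm.pdf(1000.0 / lat, prior_mean, prior_sd)       # prior on rate (1/s)
        err = rms / max(weight, 1e-12)           # weighted error (smaller is better)
        if err < best[0]:
            best = (err, lat, accel)
    return best[1], best[2]
```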
Measurements of steady-state tracking (SS)
For the three steady-state tracking metrics, we defined the steady-state interval from 400 to 700 ms following target motion onset to allow enough time for eye velocity to reach a steady-state value, while ensuring that the stimulus motion was still present on all trials. The “steady-state gain” metric (Rashbass, 1961; Robinson, 1965) was defined as the ratio of eye velocity along the stimulus direction to target velocity. The average catch-up saccade amplitude (de Brouwer et al., 2002) was calculated for each trial, and our “saccade amplitude” metric was defined as the median across trials. The “proportion of smooth pursuit” metric was defined as the ratio of eye displacement during smooth pursuit to total eye displacement. 
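A sketch of how the three steady-state metrics could be computed for a single trial (our own function and argument names; eye velocity is assumed to be already projected onto the target direction, and saccade amplitudes come from the detection step described above):

```python
import numpy as np

def steady_state_metrics(t_ms, eye_vel_on_dir, target_speed, smooth_disp, total_disp, saccade_amps):
    """Steady-state tracking metrics over the 400-700 ms window (a sketch).

    gain              : mean eye speed along the target direction / target speed
    saccade_amplitude : mean catch-up saccade amplitude on this trial (deg)
    proportion_smooth : smooth eye displacement / total eye displacement
    """
    ss = (t_ms >= 400) & (t_ms < 700)                 # steady-state analysis window
    gain = np.mean(eye_vel_on_dir[ss]) / target_speed
    saccade_amplitude = np.mean(saccade_amps) if len(saccade_amps) else 0.0
    proportion_smooth = smooth_disp / total_disp
    return gain, saccade_amplitude, proportion_smooth
```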
Direction-tuning measurements (DIR)
We measured the direction of the pursuit response during the steady-state interval to quantify the direction-tuning properties of the pursuit response. Replicating the methods of Krukowski and Stone (2005), direction gain was defined as the local slope of the function relating pursuit direction to stimulus direction, which shows deviations from unity slope (a wiggly line in Cartesian coordinates) that peak near the cardinal and oblique axes, consistent with an expansion of direction space around the cardinal axes and a contraction around the oblique axes. In polar coordinates, this anisotropy in direction gain takes on a cloverleaf shape, with leaves protruding past unity gain near the cardinal axes and local regions of less-than-unity gain near the oblique directions. To describe the shape of the cloverleaf anisotropy, we fit the direction-tuning curves with a three-parameter function, ignoring points with directional errors greater than 30°. The first parameter, α, describes the magnitude of the cardinal-oblique anisotropy; the second parameter, β, describes the asymmetry between the size of the vertical and horizontal lobes; and the third parameter, Δ, describes the orientation of the cloverleaf. The fitting function is given in Equation 1. 
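One three-parameter form with these properties, offered here as an illustrative sketch rather than the exact published equation (the fourfold term captures the cardinal-oblique anisotropy, the twofold term the horizontal-vertical asymmetry, and Δ rotates the cloverleaf), is:

\[
g(\theta) \;=\; 1 \;+\; \alpha\,\cos\bigl(4(\theta - \Delta)\bigr) \;+\; \beta\,\cos\bigl(2(\theta - \Delta)\bigr)
\tag{1}
\]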
In 1° increments, we measured the best-fitting, local linear-regression slope within a 30° window centered on a particular direction. We then fit the resulting plot of slope versus direction with Equation 1 to compute our direction anisotropy and direction asymmetry metrics. We then took the difference in the direction of the pursuit response across pairs of neighboring stimulus directions and pooled them across all directions to yield a distribution of difference measures, based on the previous observation that directional noise is isotropic (Krukowski & Stone, 2005). We defined “directional noise” as the standard deviation of the distribution of difference measures. 
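A sketch of the direction-noise computation just described (our own implementation, assuming per-trial stimulus and pursuit directions in degrees):

```python
import numpy as np

def direction_noise(stim_dirs_deg, pursuit_dirs_deg):
    """SD of pursuit-direction differences across neighboring stimulus directions.

    Trials are sorted by stimulus direction; the pursuit-direction difference
    between neighboring stimulus directions is pooled across all directions
    (directional noise is approximately isotropic), and the standard deviation
    of that pooled distribution is returned.
    """
    order = np.argsort(stim_dirs_deg)
    pursuit = np.asarray(pursuit_dirs_deg)[order]
    diffs = np.diff(pursuit)
    diffs = (diffs + 180.0) % 360.0 - 180.0  # wrap differences into (-180, 180]
    return np.std(diffs)
```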
Speed-tuning measurements (SPD)
To quantify the signal-to-noise properties of speed processing, we measured the mean speed of the pursuit response along the direction of target motion during the steady-state interval for each target speed. We computed our speed responsiveness metric as the slope of the linear regression of the mean eye speed measures across target speeds. Our speed-noise metric was then computed as the mean standard deviation in eye speed, averaged across target speeds. 
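A sketch of the two speed-tuning metrics (our own implementation, assuming per-trial steady-state eye speeds grouped by target speed):

```python
import numpy as np

def speed_tuning_metrics(eye_speeds_by_target):
    """Speed responsiveness and speed noise from steady-state eye speeds.

    eye_speeds_by_target : dict mapping target speed (deg/s) to an array of
                           per-trial steady-state eye speeds (deg/s).
    responsiveness       : slope of mean eye speed versus target speed.
    noise                : SD of eye speed, averaged across target speeds.
    """
    targets = np.array(sorted(eye_speeds_by_target))
    means = np.array([np.mean(eye_speeds_by_target[s]) for s in targets])
    sds = np.array([np.std(eye_speeds_by_target[s]) for s in targets])
    responsiveness = np.polyfit(targets, means, 1)[0]  # linear-regression slope
    noise = np.mean(sds)
    return responsiveness, noise
```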
Results
The summary metrics for one subject from one 15-min session are shown in Figure 1, grouped by the four measurement types (INIT: pursuit initiation; SS: steady-state tracking; DIR: direction-tuning; SPD: speed-tuning). For this subject, the median latency of pursuit initiation was 162 ms with a median acceleration of 182 deg/s2. The median gain during the steady-state interval was 0.87, the median amplitude of the average catch-up saccade was 1.66 deg, and the proportion of eye displacement that can be attributed to smooth movement was 0.78. The direction-tuning of the pursuit response is summarized by two parameters (see Equation 1); the anisotropy (oblique effect) of 0.40 (Krukowski & Stone, 2005) and the asymmetry (horizontal-vertical bias) of 0.17 together capture the overall cloverleaf shape of the pursuit direction-gain function. The direction noise of 5.9° captures the trial-to-trial variability in the direction of the pursuit response (Krukowski & Stone, 2005). For this subject, the signal-to-noise properties of the speed-tuning are summarized by a speed responsiveness of 0.73, the slope of the quasi-linear speed-response function, and by the speed noise of 2.2 deg/s. Whereas the initiation and steady-state tracking metrics represent median measurements made from individual trials (see the well-behaved distributions across trials in Figure 1), the direction and speed-tuning metrics are derived from pursuit behavior across the entire set of trials. 
Figure 1
 
Summary of oculometric measurements for one subject. Each 15-min session consisted of 180 trials, and yielded 10 metrics. Histograms in the left-hand column plot across-trial measurements of pursuit motor function; the oculometric direction-tuning and speed-tuning measurements are shown in the right-hand column. The measurements of pursuit initiation (INIT) yield a skewed recinormal distribution of latencies and a quasinormal distribution of accelerations. Measurements of steady-state (SS) tracking (400–700 ms after motion onset) include pursuit gain, the average amplitude of saccadic intrusions, and the proportion of eye displacement that consisted of smooth tracking. The direction-tuning (DIR) scatterplot shows pursuit direction as a function of target direction for each trial; the inset shows the cloverleaf anisotropy (blue dashed line) referenced to a circle of unity gain. The speed-tuning (SPD) scatterplot plots pursuit speed as a function of target speed (solid black circles), the across-trial median (solid black square), and the speed-tuning slope (solid red line).
Reliability
To assess the test–retest reliability of our metrics, we ran six subjects in an experiment that quantified all metrics, except the two speed-tuning metrics. The objective of this test was to compare the intrasubject variability across repeated sessions to the intersubject variability. Each filled circle in Figure 2 plots the average across five repeated measurements for one subject; gray error bars illustrate the entire range of the measurements for that observer. All eight metrics tested showed significant differences across subjects (Kruskal-Wallis, p < 0.0001), and the ratio of average intersubject (across subjects for a given metric) variance to the average intrasubject (across sessions for a given subject) variance ranged from 2.6 to 14.8. In many cases, the across-subject measurements are completely nonoverlapping. These results indicate that our metrics provide sufficient test–retest reliability to quantify consistent performance differences across individuals, despite the fact that we did not control for potential sources of within-subject variability like systematic circadian-rhythm variations or random effects like meal timing or fatigue. 
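A sketch of this reliability analysis (our own implementation; scipy's Kruskal-Wallis test plus the ratio of intersubject to mean intrasubject variance):

```python
import numpy as np
from scipy.stats import kruskal

def reliability_summary(sessions_by_subject):
    """Kruskal-Wallis test across subjects plus the inter/intra-subject variance ratio.

    sessions_by_subject : dict mapping subject ID to an array of repeated
                          measurements of one oculometric across sessions.
    """
    groups = list(sessions_by_subject.values())
    h_stat, p_value = kruskal(*groups)                        # differ across subjects?
    subject_means = np.array([np.mean(g) for g in groups])
    inter_var = np.var(subject_means, ddof=1)                 # variance across subjects
    intra_var = np.mean([np.var(g, ddof=1) for g in groups])  # mean within-subject variance
    return h_stat, p_value, inter_var / intra_var
```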
Figure 2
 
Test–retest reliability. To assess whether our oculometric measures were stable enough to distinguish across-subject differences, we made repeated measurements across five sessions for each of the six subjects. Each black filled circle plots the mean oculometric measurement across sessions for one subject; the gray error bars represent the entire range of measurements for that subject. We observed significant differences across our pool of six subjects for all oculometric measures (Kruskal-Wallis, p < 0.0001). The ratio of across-observer variance to within-observer variance is given for each metric.
Validity
To assess the ability of our metrics to detect degradations in the visual stimulus (i.e., that our metrics are valid measures of dynamic visual information processing), we ran six subjects in an experiment using sampled motion, which is known to degrade both motion perception (Kuroki et al., 2006; Watson, 2013) and smooth-pursuit tracking (Churchland & Lisberger, 2000). We used a classic Rashbass (1961) tracking task with only horizontal (randomly left or right) motion, and measured the five INIT and SS metrics. Our results from the reliability experiment demonstrated large across-subject variability, which tends to reduce the power to detect possible effects of sampling rate. To minimize the impact of that variability, for each subject, we first normalized the data by subtracting out the subject's mean value across all target-speed and sampling-rate conditions. 
Degraded visual motion in sampled stimuli strongly impairs pursuit initiation, whereas steady-state tracking shows more subdued effects, as expected under closed-loop control (Figure 3). Using a three-way ANOVA, we observed clear main effects of sampling rate on both initiation metrics [latency: F(6, 5) = 96.0; acceleration: F(6, 5) = 41.5, both ps < 0.0001], as well as a significant interaction between sampling rate and speed [latency: F(24, 314) = 2.7; acceleration: F(24, 314) = 6.9, both ps < 0.0001]. We also observed significant but more subdued main effects of sampling rate on all three steady-state tracking metrics [gain: F(6, 5) = 15.9, p < 0.0001; saccade amplitude: F(6, 5) = 3.2, p < 0.05; proportion smooth: F(6, 5) = 9.7, p < 0.0001], as well as significant interactions between sampling rate and speed [gain: F(24, 314) = 1.9, p < 0.01; saccade amplitude: F(24, 314) = 1.9, p < 0.01; proportion smooth: F(24, 314) = 4.32, p < 0.0001]. Whereas the significant main effects demonstrate that these five oculometric measures are sensitive to the quality of sampled motion, the significant interactions show that sampling rate has a larger impact at higher speeds, consistent with frequency-domain predictions (Churchland & Lisberger, 2000; Watson, 2013). 
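A sketch of one such analysis using statsmodels (hypothetical column names; the exact model specification and software used in the original analysis may differ):

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# df columns (hypothetical): subject, rate (sampling frequency), speed,
# condition (hold/blank), latency (normalized per subject).
def sampling_anova(df: pd.DataFrame):
    """Main effects and two-way interactions of rate, speed, and condition on latency."""
    model = smf.ols(
        "latency ~ C(rate) + C(speed) + C(condition) "
        "+ C(rate):C(speed) + C(rate):C(condition) + C(speed):C(condition)",
        data=df,
    ).fit()
    return anova_lm(model, typ=2)  # ANOVA table with F and p for each term

# Example: table = sampling_anova(df); print(table.loc["C(rate)"])
```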
Figure 3
 
Validation of oculometric measures with sampled motion stimuli. Each row contains axes plotting one oculometric measurement as a function of target speed: one set of axes in the sample-and-hold condition (left-hand column), and one set for the sample-and-blank condition (right-hand column). The color series in each set of axes represents the sampling frequencies from 30 Hz (bright yellow) to 960 Hz (black). The data in each panel are zeroed about the mean value across all target speeds and sampling frequencies for each observer; error bars plot the standard error of the mean across observers.
Baseline population metrics
Figure 4 shows the population distribution of the full set of 10 metrics for 41 normal subjects, nearly all of whom were naive to oculomotor or psychophysical testing. Our subjects ranged in age from 20 to 56 years (median: 27) and had static visual acuities ranging from −0.29 to 0.44 logMAR (median: −0.20). With high direction and speed uncertainty in the motion of the step-ramp stimulus, the median pursuit latency was 180 ms, with a median initial pursuit acceleration of 124 deg/s2. The median steady-state pursuit gain was 0.82, with a median proportion smooth of 0.67 and a median catch-up saccade amplitude of 2.3 deg. The median direction noise was 8.66°, with a median cardinal-oblique anisotropy of 0.37 and a median vertical-horizontal asymmetry of 0.10. These data can be converted into a direction-tuning threshold of 6.3° along the cardinal axes and 13.7° along the oblique axes (see Appendix). The median speed responsiveness was 0.55, with a median speed noise of 3.4 deg/s. These data can be converted into a median speed-discrimination threshold of 6.23 deg/s or a Weber fraction of 31% (see Appendix). 
Figure 4
 
Benchmark measures across population of 41 subjects. The first four oculometric measures are across-session median or average values for a distribution of trial-by-trial measurements. The last two oculometric measures are taken from data from one session. Oculometric measures for normal subject populations will allow individual subject data to be characterized as a multidimensional vector. By converting these distributions to standard normal distributions, characteristic deficit patterns resulting from disease can be expressed as vector deviation from the origin; detection metrics can be derived by taking the dot product of an individual subject's data with the direction of impairment.
To quantify the extent to which our various metrics provide independent information, we measured the degree of correlation between them across our population of 41 subjects. Figure 5 plots correlation matrices between the sets of measurements, illustrating the range of r2 values from 0.0 to 0.62. Clearly, our sets of initiation and speed metrics share a significant proportion of underlying variance, but still on average only about a quarter of the variance (mean 23%) is shared between any pair of metrics. The two metrics quantifying the pursuit oblique effect anisotropy (Krukowski & Stone, 2005) were somewhat correlated with one another (r2 = 0.31), but uncorrelated with the set of eight other metrics (mean r2 = 0.03, p > 0.05). Lastly, all 10 oculometrics were uncorrelated with both static visual acuity (Pearson's R, p > 0.05, r2 < 0.06) and age (Pearson's R, p > 0.05, r2 < 0.08). To highlight the clustering evident in the correlation matrix, we grouped families of metrics whose mutual correlation was r2 > 0.2 (solid red boundaries). 
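A sketch of the pairwise shared-variance computation (our own implementation, assuming a subjects x measures array with visual acuity and age appended as extra columns):

```python
import numpy as np

def r_squared_matrix(metrics):
    """Pairwise shared variance (r^2) between columns of a subjects x measures array."""
    r = np.corrcoef(metrics, rowvar=False)  # Pearson correlation between measures
    r2 = r ** 2
    np.fill_diagonal(r2, np.nan)            # the diagonal is uninformative
    return r2
```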
Figure 5
 
Correlation analysis. The grayscale color plots illustrate the correlation between our 10 metrics. In the left-hand panel, the grayscale value represents the r2 value of the pairwise correlation between our set of 10 metrics as well as visual acuity and age. The uninformative values along the diagonal have been colored black, and all r2 values are plotted. The metrics were ordered according to the average correlation strength with all other measurements. In the right-hand panel, a p < 0.01 threshold was applied, leaving only correlation values stronger than predicted by chance (p = 0.015). The red outlines delimit the boundaries of families of metrics with mutual correlation of r2 > 0.2; the first family contains eight of the 10 metrics; the second family contains the last two metrics that quantify the nonlinear properties of pursuit direction tuning. White text indicates the r2 value.
Discussion
We describe 10 oculometrics measured with a 15-min radial motion-tracking task: Five are standard measures of the vigor of the smooth-pursuit response and five quantify sensitivity to stimulus direction and speed. First, we assessed the test–retest reliability of eight of our metrics and observed the measurements to be remarkably stable across sessions, showing significantly less variability across sessions than across subjects. Second, when tracking degraded motion stimuli, we observed significant decrements in both initiation metrics, as well as in all three steady-state tracking metrics, validating the ability of our metrics to detect degradation of visual motion processing. Last, we measured our full set of metrics for a population of 41 normal (mostly naive) subjects, providing a much more extensive baseline dataset of standard human performance than earlier human perceptual and oculomotor studies that typically used a small number of highly practiced human or nonhuman primate subjects (Beutter & Stone, 2000; De Bruyn & Orban, 1988; Kowler & McKee, 1987; Lisberger & Westbrook, 1985; Robinson, 1965; Stone & Krauzlis, 2003; Tychsen & Lisberger, 1986). 
Many previous behavioral studies have found that, under a wide range of conditions, pursuit's sensitivity to both low-level factors like speed and direction (Kowler & McKee, 1987; Krukowski et al., 2003; Krukowski & Stone, 2005; Stone & Krauzlis, 2003; Watamaniuk & Heinen, 1999) and high-order factors like windowing, depth interpretation, and stimulus ambiguity (Beutter & Stone, 1998; Madelain & Krauzlis, 2003b; Stone, Beutter, & Lorenceau, 2000) mirrors that of perception (for a review, see Stone et al., 2009; see, however, Schutz, Braun, & Gegenfurtner, 2011) based on extensive shared neural processing within the extrastriate cortex (Newsome et al., 1985). This study capitalized on that fact and utilized a highly randomized stimulus environment (unpredictable onset time, speed, and direction) and large pool of naive participants (with no training effects) so as to minimize nonvisual influences, such as prediction and practice, to generate a robust archive of human dynamic visual motion processing for perception and oculomotor control. We conclude that our set of oculometric measures provides valid, reliable, and robust estimates of several independent aspects of visual motion processing, which may prove useful as a clinical assessment tool for detecting and characterizing disorders or impairment of sensorimotor processing. 
Comparison with previous human and primate oculomotor studies
Our population metrics of the smooth-pursuit response are generally well-matched to previously reported values. The median latency across our population for tracking 16–24 deg/s step-ramp target motion (180 ms, semi-interquartile range: 176–185 ms) matched a previous measurement of 185 ms when tracking 10 deg/s target motion with high directional uncertainty (Krukowski & Stone, 2005), but was longer than measurements of 90–160 ms step-ramp pursuit human latencies with low directional uncertainty (Carl & Gellman, 1987; Krukowski & Stone, 2005; Rashbass, 1961; Tychsen & Lisberger, 1986) and measurements of 90–110 ms for latencies of highly trained monkeys (Lisberger & Westbrook, 1985). We must mention, however, that our automatic pursuit latency computation used a weighting term based on the previously reported value of 185 ms; clearly, this weighting did not dictate our latency measurements, as seen in the idiosyncratic, yet repeatable (Figure 4) latency variation across subjects with some showing latencies as low as 169 ms, but the weighting did bias our values towards the earlier result. Using this latency prior provided a graceful constraint to allow the automated pursuit latency algorithm to behave well in the presence of highly variable raw data, at the cost of constraining the range and biasing the measured values slightly. Omitting this constraint would, however, have led to some absurd hinge placements (e.g., a flat hinge well before pursuit initiation). 
Our median acceleration of 124 deg/s2 (semi-interquartile range: 92–143 deg/s2) during the open-loop interval for 16–24 deg/s target motion falls within the reported 75–150 deg/s2 range of saccade-free pursuit acceleration while tracking 20 deg/s ramp-step-ramp motion (Lisberger, Evinger, Johanson, & Fuchs, 1981; Lisberger & Westbrook, 1985). As our acceleration measurements often included some postsaccadic pursuit (Lisberger, 1998), we observed accelerations significantly higher than the 40 deg/s2 accelerations measured during exclusively presaccadic pursuit of 20 deg/s step-ramp motion (Carl & Gellman, 1987), often matching or exceeding the reported high acceleration value of 150 deg/s2 (Carl & Gellman, 1987). 
Our median gain of 0.82 (semi-interquartile range: 0.75–0.86), averaged over the entire steady-state interval for 16–24 deg/s target motion, falls just below the smooth-pursuit gain of 0.86–1.0 previously observed by Schalen (1980) for tracking 20 deg/s target motion, but matches the gain of 0.83 reported by Levin et al. (1988). 
Our population metrics for direction and speed-tuning thresholds were somewhat higher than some previously reported values in the literature, likely for at least two reasons: the use of highly trained versus naive observers, and low versus high uncertainty in the prior probability of stimulus direction, speed, and motion onset. Our median 2AFC direction-discrimination threshold across observers was 8.9° along the cardinal axes and 19.4° along the oblique axes measured during steady-state tracking. These measurements match the previously reported oculometric thresholds for direction discrimination of 8°–9° (Krukowski & Stone, 2005; Watamaniuk & Heinen, 1999) along the cardinal axes and 16.1° along the oblique axes (Krukowski & Stone, 2005) using a small spot stimulus, but are substantially higher than the 2°–3° oculometric direction discrimination threshold reported during steady-state tracking of spot stimuli in monkeys (Osborne, Hohl, Bialek, & Lisberger, 2007), the 1.3° perceptual and 1.4°–1.9° oculometric direction discrimination thresholds for the motion of a small spot stimulus along cardinal directions (Stone & Krauzlis, 2003), the 1.7° perceptual direction discrimination thresholds for a 10-deg diameter pixel noise stimulus moving at 16 deg/s (De Bruyn & Orban, 1988), and perceptual discrimination thresholds of 3.5° and 6.7° for cardinal and oblique directions of motion, respectively (Krukowski & Stone, 2005). Our median speed discrimination threshold across observers was 7.3 deg/s (semi-interquartile range: 4.4–9.2 deg/s), a fractional threshold of 31% (semi-interquartile range: 22%–46%). This Weber fraction is somewhat higher than perceptual speed discrimination thresholds of 10% for a large pixel noise stimulus (De Bruyn & Orban, 1988), perceptual and oculometric thresholds of 7%–9% for a small spot stimulus (Gegenfurtner, Xing, Scott, & Hawken, 2003; Kowler & McKee, 1987), perceptual thresholds of 14%–27% for a spatial 2AFC speed discrimination between random dot patches in macaques (Liu & Newsome, 2005), and thresholds of 11%–15% (Osborne et al., 2007) derived from smooth pursuit of a spot stimulus in monkeys. Consistent with standard practice in psychophysics, several of these previous studies used very highly trained human (De Bruyn & Orban, 1988; Kowler & McKee, 1987; Stone & Krauzlis, 2003; Watamaniuk & Heinen, 1999) or monkey (Liu & Newsome, 2005; Osborne et al., 2007) subjects, as well as a restricted set of directions (Gegenfurtner et al., 2003; Kowler & McKee, 1987; Osborne et al., 2007; Stone & Krauzlis, 2003; Watamaniuk & Heinen, 1999) such that stimulus motion was to some extent predictable. Training effects significantly improve performance in motion-discrimination tasks (Huang, Lu, Zhou, & Liu, 2011), which may contribute to the differences between our measurements of direction and speed discrimination thresholds in a large population of mostly naive subjects versus a small sample of highly practiced subjects (De Bruyn & Orban, 1988; Kowler & McKee, 1987; Osborne et al., 2007). Lastly, eye-tracker noise could also theoretically influence both noise metrics and thus elevate direction and speed-tuning thresholds; however, the fact that our tracker noise accounts for only a small portion of the observed variance in our metrics (typically only 5%) indicates that it was not a dominant factor in this study. 
While our method uses a 15-min test, making the analogous set of measurements using standard psychophysical techniques (Beutter & Stone, 1998, 2000; De Bruyn & Orban, 1988; Krukowski et al., 2003; Stone & Krauzlis, 2003) is much more burdensome and time consuming. Using our simulations (see Appendix) for comparison, we estimate that our oculometric approach yields a more than 20-fold time savings over traditional two-interval forced-choice (2IFC) button-press psychophysics; direction and speed discrimination measurements that typically require several 1-hr sessions to collect would now only take 15 min. Furthermore, such extensive parametric psychophysical testing generates concerns about fatigue, inattention/complacency, and circadian variation when performance measures must be taken across several days. 
Use of smooth-pursuit tasks for clinical screening
Our quantitative metrics fulfill the basic criteria necessary for a useful psychometric test (Nunnally, 1967) and, because our 180-trial test can be performed in a 15-min session using only a chin rest, a noninvasive eye tracker, and a standard display system, it could provide a valuable new clinical assessment tool for detecting and characterizing visual pathology or impairment. First, our quantitative metrics are highly stable across repeated measurements. Our metrics typically revealed substantial differences across subjects, with much smaller test–retest variability within subjects across sessions. For a metric to be an effective clinical screening tool, its test–retest variability must be significantly smaller than the variability associated with the factor of interest (e.g., variability across clinical or display conditions). Second, our quantitative metrics do indeed detect impaired motion processing. Using a coarsely sampled motion stimulus that is known to produce degraded motion percepts, our metrics showed significant effects of sampling rate that were more pronounced at higher speeds. Third, a correlation analysis of our set of 10 metrics revealed two statistically unrelated groups of metrics: one small group composed of the two metrics quantifying the shape of pursuit direction tuning (anisotropy and asymmetry), and one larger group containing the remaining eight metrics with modest, albeit significant, correlations. Across our population, all metrics showed a significant degree of statistical independence from all other metrics, with the shared variance being on average only 23%. All of our oculometrics were uncorrelated with both subject age and visual acuity (the standard measure of static visual processing). 
Our multidimensional metrics may not only be useful in detecting deficits in visual processing but also in characterizing oculomotor signs of various disease states by showing a characteristic pattern in the changes across the 10 metrics. For a degenerative retinal disease, such as retinitis pigmentosa for example, one might expect prolonged pursuit latency and sluggish acceleration, as well as a high level of direction noise (Turano & Wang, 1992) due to poor detection of motion onset in the periphery, although the steady-state tracking metrics can be largely unimpaired when the target image falls on the intact fovea and tracking is driven by a higher-order target motion signal (Madelain & Krauzlis, 2003a; Newsome, Wurtz, & Komatsu, 1988; Stone et al., 2009). For neurological conditions associated with diffuse damage to extrastriate visual cortex (e.g., Alzheimer's disease, traumatic brain injury) or degenerative disorders involving sensorimotor pathways (Wessel, Moschner, Wandinger, Kompf, & Heide, 1998), one might expect deficits in steady-state metrics (Dursteler et al., 1987), consistent with impaired higher-order visual perception (Pelak & Hoyt, 2005; Suh et al., 2006). Particular psychiatric or developmental disorders may yet show another characteristic pattern of deficits. In schizophrenia, for example, pursuit latency for 20 deg/s step-ramp motion has been reported to be 188 ms (Levin et al., 1988), similar to our observed median of 180 ms, although the reported acceleration of 48 deg/s2 and average gain of 0.36 are substantially lower than our observed median acceleration of 143 deg/s2 and median gain of 0.82. If a patient has a severe motor deficit related to eye movements (e.g., square-wave jerk, oculomotor nerve palsy), their oculomotor data in our paradigm may be so compromised as to be useless, or may show a characteristic impairment pattern in the data that masks possible concurrent dynamic visual impairments. To evaluate the utility of our screening test for any particular pathological state quantitatively, one must estimate its sensitivity (i.e., the signal-to-noise ratio) for detecting disease using oculomotor symptoms. This relates to the psychometric concepts (Nunnally, 1967) of validity, which addresses the signal, and test–retest reliability, which addresses the noise. For example, consider using a single metric, say steady-state pursuit gain, as a tool to screen for mild traumatic brain injury (mTBI); a single gain measurement would have low sensitivity (i.e., signal-to-noise ratio) if either the noise is large or the signal available to be measured is small. The noise component includes session-to-session variability in gain measurements for individual subjects (Figure 2), as well as interobserver variability in the gain measurements across the normal and mTBI populations (Figures 2, 3, and 4). The signal component results from the magnitude of the change in gain associated with various levels of mTBI, which must be large relative to the noise for any valid screening test for mTBI. Thus, by extension, the clinical utility can be quantified by determining the sensitivity (i.e., the signal-to-noise ratio) of our multidimensional set of metrics with respect to a particular factor of interest (e.g., mTBI), that is, by statistically comparing the difference between the 10 metrics from a clinical population to their values in a normal population. 
By using a multidimensional test, one can not only measure the sensitivity for detecting any particular magnitude change in the standard signal-detection theory sense, but can also generate a 10-dimensional direction change as a means of characterizing the type of impairment. Thus, the multidimensionality of our metric set not only increases overall sensitivity, it also provides a qualitative advantage over single-metric tests by providing both a magnitude and direction for the impairment, allowing measurement of both impaired and better-than-normal dynamic visual processing. 
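As a sketch of the detection scheme outlined here and in the Figure 4 caption (our own formulation and names), a subject's 10 metrics can be z-scored against the normal population and projected onto a hypothesized impairment direction:

```python
import numpy as np

def impairment_score(subject_metrics, norm_mean, norm_sd, impairment_direction):
    """Project a subject's z-scored 10-metric vector onto an impairment direction.

    norm_mean, norm_sd   : per-metric mean and SD from the normal population.
    impairment_direction : hypothesized pattern of change for a given disorder
                           (a vector in the 10-dimensional metric space).
    Returns (magnitude along the impairment direction, full z-score vector).
    """
    z = (np.asarray(subject_metrics) - norm_mean) / norm_sd
    direction = impairment_direction / np.linalg.norm(impairment_direction)
    return float(np.dot(z, direction)), z
```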
For decades, gross oculomotor function has provided neurologists with a window to assess lesions and brain disease (Leigh & Zee, 2006). Indeed, one of the earliest clinical oculomotor experiments used Raymond Dodge's photographic technique (Dodge, 1903; Dodge & Cline, 1901) to measure horizontal smooth-pursuit eye movements in patients with several types of psychiatric disorders (Diefendorf & Dodge, 1908). Recent reports highlight the possible use of more quantitative metrics derived from standardized eye-movement tasks for more fine-tuned detection (Pearson, Armitage, Horner, & Carpenter, 2007), screening (Heitger, Jones, & Anderson, 2008), or diagnostic uses (Leigh & Kennard, 2004; Zee, 2012), or to evaluate therapeutic interventions (Anderson & MacAskill, 2013). As a practical matter, standardization of oculomotor tasks (Antoniades et al., 2013), saccade-detection algorithms (Liston et al., 2013), eye-tracker calibration methods (Beutter & Stone, 1998), and data analysis techniques (Beutter & Stone, 2000; Kowler & McKee, 1987; Liston & Krauzlis, 2003; Liston & Stone, 2008; Osborne et al., 2007; Stone et al., 2009; Stone & Krauzlis, 2003; Watamaniuk & Heinen, 1999) will enable more rigorous quantitative screening and assessment of clinical conditions, for example, the presence or absence of deficits, efficacy of a therapeutic intervention, or recovery from trauma. In particular, the multidimensional vector space of metrics described here may allow for the identification of oculometric phenotypes of neural pathologies as represented by their characteristic vector displacement between the normal and pathological populations. 
Acknowledgments
The development of this methodology has been supported over the past 10 years by NASA's Human Health and Performance Program (111-10-10), NASA's Human Factors Engineering Program (131-20-30), the National Science Foundation's Perception, Action, and Cognition Program (NSF 0924841), the National Space Biomedical Research Institute (SA 2002), the U.S. Air Force School of Aerospace Medicine, a NASA Ames Research Center Director's Innovation Fund Award, and the Office of Naval Research (SAA2-402925). 
Commercial relationships: Both authors (DL and LS) share a provisional patent on the subject matter of this publication, which has not been licensed or otherwise commercialized. 
Corresponding author: Dorion Liston. 
Email: dorion.b.liston@nasa.gov. 
Address: NASA Ames Research Center, Moffett Field, CA, USA. 
References
Adler S. A. Bala J. Krauzlis R. J. (2002). Primacy of spatial information in guiding target selection for pursuit and saccades. Journal of Vision, 2 (9): 5, 627–644, http://www.journalofvision.org/content/2/9/5, doi:10.1167/2.9.5. [PubMed] [Article] [PubMed]
Anderson T. J. MacAskill M. R. (2013). Eye movements in patients with neurodegenerative disorders. Nature Reviews Neurology, 9 (2), 74–85. [CrossRef] [PubMed]
Antoniades C. Ettinger U. Gaymard B. Gilchrist I. Kristjansson A. Kennard C. Carpenter R. H. S. (2013). An internationally standardised antisaccade protocol. Vision Research, 84, 1–5. [CrossRef] [PubMed]
Bach M. (1996). The Freiburg Visual Acuity test—Automatic measurement of visual acuity. Optometry and Vision Science, 73 (1), 49–53. [CrossRef] [PubMed]
Beutter B. R. Eckstein M. P. Stone L. S. (2003). Saccadic and perceptual performance in visual search tasks. I. Contrast detection and discrimination. Journal of the Optical Society of America, A: Optics, Image Science, & Vision, 20 (7), 1341–1355. [CrossRef]
Beutter B. R. Stone L. S. (1998). Human motion perception and smooth eye movements show similar directional biases for elongated apertures. Vision Research, 38 (9), 1273–1286. [CrossRef] [PubMed]
Beutter B. R. Stone L. S. (2000). Motion coherence affects human perception and pursuit similarly. Visual Neuroscience, 17 (1), 139–153. [PubMed]
Blanke O. Landis T. Mermoud C. Spinelli L. Safran A. B. (2003). Direction-selective motion blindness after unilateral posterior brain damage. European Journal of Neuroscience, 18 (3), 709–722. [CrossRef] [PubMed]
Carl J. R. Gellman R. S. (1987). Human smooth pursuit: Stimulus-dependent responses. Journal of Neurophysiology, 57 (5), 1446–1463. [PubMed]
Carpenter R. H. S. (1981). Oculomotor procrastination. In Fisher D. F. Monty R. A. Senders J. W. (Eds.), Eye movements: Cognition and visual perception (pp. 237–246). Hillsdale, NJ: Lawrence Erlbaum Associates.
Carpenter R. H. S. Williams M. L. L. (1995). Neural computation of log likelihood in control of saccadic eye movements. Nature, 377 (6544), 59–62. [CrossRef] [PubMed]
Churchland M. M. Lisberger S. G. (2000). Apparent motion produces multiple deficits in visually guided smooth pursuit eye movements of monkeys. Journal of Neurophysiology, 84 (1), 216–235. [PubMed]
de Brouwer S. Yuksel D. Blohm G. Missal M. Lefevre P. (2002). What triggers catch-up saccades during visual tracking? Journal of Neurophysiology, 87 (3), 1646–1650. [PubMed]
De Bruyn B. Orban G. A. (1988). Human velocity and direction discrimination measured with random dot patterns. Vision Research, 28 (12), 1323–1335. [CrossRef] [PubMed]
Diefendorf A. R. Dodge R. (1908). An experimental study of the ocular reactions of the insane from photographic records. Brain, 31 (3), 451–489. [CrossRef]
Dodge R. (1903). Five types of eye movement in the horizontal meridian plane of the field of regard. American Journal of Physiology, 8 (2), 307–327.
Dodge R. Cline T. S. (1901). The angle velocity of eye movements. Psychology Review, 82 (2), 145–157. [CrossRef]
Dursteler M. R. Wurtz R. H. Newsome W. T. (1987). Directional pursuit deficits following lesions of the foveal representation within the superior temporal sulcus of the macaque monkey. Journal of Neurophysiology, 57 (5), 1262–1287. [PubMed]
Gegenfurtner K. R. Xing D. Scott B. H. Hawken M. J. (2003). A comparison of pursuit eye movement and perceptual performance in speed discrimination. Journal of Vision, 3 (11): 19, 865–876, http://www.journalofvision.org/content/3/11/19, doi:10.1167/3.11.19. [PubMed] [Article] [PubMed]
Green D. M. (1964). Consistency of auditory detection judgments. Psychology Review, 71, 392–407. [CrossRef]
Handel B. Thier P. Haarmeier T. (2009). Visual motion perception deficits due to cerebellar lesions are paralleled by specific changes in cerebro-cortical activity. Journal of Neuroscience, 29 (48), 15126–15133. [CrossRef] [PubMed]
Heitger M. H. Jones R. D. Anderson T. J. (2008). A new approach to predicting postconcussion syndrome after mild traumatic brain injury based upon eye movement function. Paper presented at the International Institute of Electrical and Electronics Engineers Engineering in Medicine and Biology Society Conference, Vancouver, British Columbia, Canada (pp. 3570–3573 ). doi:10.1109/IEMBS.2008.4649977.
Horton J. C. Trobe J. D. (1999). Akinetopsia from nefazodone toxicity. American Journal of Ophthalmology, 128 (4), 530–531. [CrossRef] [PubMed]
Huang X. Lu H. Zhou Y. Liu Z. (2011). General and specific perceptual learning in radial speed discrimination. Journal of Vision, 11 (4): 7, 1–11, http://www.journalofvision.org/content/11/4/7, doi:10.1167/11.4.7. [PubMed] [Article]
Kowler E. McKee S. P. (1987). Sensitivity of smooth eye movement to small differences in target velocity. Vision Research, 27 (6), 993–1015. [CrossRef] [PubMed]
Krauzlis R. J. Adler S. A. (2001). Effects of directional expectations on motion perception and pursuit eye movements. Visual Neuroscience, 18 (3), 365–376. [CrossRef] [PubMed]
Krauzlis R. J. Miles F. A. (1996). Release of fixation for pursuit and saccades in humans: Evidence for shared inputs acting on different neural substrates. Journal of Neurophysiology, 76 (5), 2822–2833. [PubMed]
Krukowski A. E. Pirog K. A. Beutter B. R. Brooks K. R. Stone L. S. (2003). Human discrimination of visual direction of motion with and without smooth pursuit eye movements. Journal of Vision, 3 (11): 16, 831–840, http://www.journalofvision.org/content/3/11/16, doi:10.1167/3.11.16. [PubMed] [Article] [PubMed]
Krukowski A. E. Stone L. S. (2005). Expansion of direction space around the cardinal axes revealed by smooth pursuit eye movements. Neuron, 45 (2), 315–323. [CrossRef] [PubMed]
Kuroki Y. Nishi T. Kobayashi S. Oyaizu H. Yoshimura S. (2006). Improvement of motion image quality by high frame rate. Society for Information Display Symposium Digest of Technical Papers, 37 (1), 14–17. [CrossRef]
Leigh R. J. Kennard C. (2004). Using saccades as a research tool in the clinical neurosciences. Brain, 127 (3), 460–477. [PubMed]
Leigh R. J. Zee D. S. (2006). The neurology of eye movements. Philadelphia: F.A. Davis.
Levin S. Luebke A. Zee D. S. Hain T. C. Robinson D. A. Holzman P. S. (1988). Smooth pursuit eye movements in schizophrenics: Quantitative measurements with the search-coil technique. Journal of Psychiatric Research, 22 (3), 195–206. [CrossRef] [PubMed]
Levy D. L. Sereno A. B. Gooding D. C. O'Driscoll G. A. (2010). Eye tracking dysfunction in schizophrenia: Characterization and pathophysiology. Current Topics in Behavioral Neuroscience, 4, 311–347.
Lisberger S. G. (1998). Postsaccadic enhancement of initiation of smooth pursuit eye movements in monkeys. Journal of Neurophysiology, 79 (4), 1918–1930. [PubMed]
Lisberger S. G. Evinger C. Johanson G. W. Fuchs A. F. (1981). Relationship between eye acceleration and retinal image velocity during foveal smooth pursuit in man and monkey. Journal of Neurophysiology, 46 (2), 229–249. [PubMed]
Lisberger S. G. Westbrook L. E. (1985). Properties of visual inputs that initiate horizontal smooth pursuit eye movements in monkeys. Journal of Neuroscience, 5 (6), 1662–1673. [PubMed]
Liston D. B. Krauzlis R. J. (2003). Shared response preparation for pursuit and saccadic eye movements. Journal of Neuroscience, 23 (36), 11305–11314. [PubMed]
Liston D. B. Krauzlis R. J. (2005). Shared decision signal explains performance and timing of pursuit and saccadic eye movements. Journal of Vision, 5 (9): 3, 678–689, http://www.journalofvision.org/content/5/9/3, doi:10.1167/5.9.3. [PubMed] [Article]
Liston D. B. Krukowski A. E. Stone L. S. (2013). Saccade detection during smooth tracking. Displays, 34, 171–176. [CrossRef]
Liston D. B. Stone L. S. (2008). Effects of prior information and reward on oculomotor and perceptual choices. Journal of Neuroscience, 28 (51), 13866–13875. [CrossRef] [PubMed]
Liu J. Newsome W. T. (2005). Correlation between speed perception and neural activity in the middle temporal visual area. Journal of Neuroscience, 25 (3), 711–722. [CrossRef] [PubMed]
Luce R. D. (1986). Response times: Their role in inferring elementary mental organization. New York: Oxford University Press.
Madelain L. Krauzlis R. J. (2003a). Effects of learning on smooth pursuit during transient disappearance of a visual target. Journal of Neurophysiology, 90 (2), 972–982. [CrossRef]
Madelain L. Krauzlis R. J. (2003b). Pursuit of the ineffable: Perceptual and motor reversals during the tracking of apparent motion. Journal of Vision, 3 (11): 1, 642–653, http://www.journalofvision.org/content/3/11/1, doi:10.1167/3.11.1. [PubMed] [Article]
McKee S. P. (1981). A local mechanism for differential velocity detection. Vision Research, 21 (4), 491–500. [CrossRef] [PubMed]
Nawrot M. Rizzo M. (1995). Motion perception deficits from midline cerebellar lesions in human. Vision Research, 35 (5), 723–731. [CrossRef] [PubMed]
Nawrot M. Rizzo M. (1998). Chronic motion perception deficits from midline cerebellar lesions in human. Vision Research, 38 (14), 2219–2224. [CrossRef] [PubMed]
Newsome W. T. Wurtz R. H. Dursteler M. R. Mikami A. (1985). Deficits in visual motion processing following ibotenic acid lesions of the middle temporal visual area of the macaque monkey. Journal of Neuroscience, 5 (3), 825–840. [PubMed]
Newsome W. T. Wurtz R. H. Komatsu H. (1988). Relation of cortical areas MT and MST to pursuit eye movements. II. Differentiation of retinal from extraretinal inputs. Journal of Neurophysiology, 60 (2), 604–620. [PubMed]
Nickerson R. S. Burnham D. W. (1969). Response times with nonaging foreperiods. Journal of Experimental Psychology, 79, 452–457. [CrossRef]
Nunnally J. C. (1967). Psychometric theory. New York: McGraw-Hill.
Osborne L. C. Hohl S. S. Bialek W. Lisberger S. G. (2007). Time course of precision in smooth-pursuit eye movements of monkeys. Journal of Neuroscience, 27 (11), 2987–2998. [CrossRef] [PubMed]
Osborne L. C. Lisberger S. G. Bialek W. (2005). A sensory source for motor variation. Nature, 437 (7057), 412–416. [CrossRef] [PubMed]
Oswal A. Ogden M. Carpenter R. H. (2007). The time course of stimulus expectation in a saccadic decision task. Journal of Neurophysiology, 97 (4), 2722–2730. [CrossRef] [PubMed]
Palmer J. Huk A. C. Shadlen M. N. (2005). The effect of stimulus strength on the speed and accuracy of a perceptual decision. Journal of Vision, 5 (5): 1, 376–404, http://www.journalofvision.org/content/5/5/1, doi:10.1167/5.5.1. [PubMed] [Article]
Pearson B. C. Armitage K. R. Horner C. W. Carpenter R. H. (2007). Saccadometry: The possible application of latency distribution measurement for monitoring concussion. British Journal of Sports Medicine, 41 (9), 610–612. [CrossRef] [PubMed]
Pelak V. S. Hoyt W. F. (2005). Symptoms of akinetopsia associated with traumatic brain injury and Alzheimer's disease. Neuro-Ophthalmology, 29, 137–142. [CrossRef]
Rashbass C. (1961). The relationship between saccadic and smooth tracking eye movements. Journal of Physiology, 159, 326–338. [CrossRef] [PubMed]
Robinson D. A. (1965). The mechanics of human smooth pursuit eye movement. Journal of Physiology, 180 (3), 569–591. [CrossRef] [PubMed]
Schalen L. (1980). Quantification of tracking eye movements in normal subjects. Acta Oto-Laryngologica, 90 (5–6), 404–413. [CrossRef] [PubMed]
Schutz A. C. Braun D. I. Gegenfurtner K. R. (2011). Eye movements and perception: A selective review. Journal of Vision, 11 (5): 9, 1–30, http://www.journalofvision.org/content/11/5/9, doi:10.1167/11.5.9. [PubMed] [Article]
Stone L. S. Beutter B. B. Eckstein M. P. Liston D. B. (2009). Perception and eye movements. In Squire L. R. (Ed.), Encyclopedia of neuroscience (Vol. 7, pp. 503–511). Oxford, England: Academic Press.
Stone L. S. Beutter B. R. Lorenceau J. (2000). Visual motion integration for perception and pursuit. Perception, 29 (7), 771–787. [CrossRef] [PubMed]
Stone L. S. Krauzlis R. J. (2003). Shared motion signals for human perceptual decisions and oculomotor actions. Journal of Vision, 3 (11): 7, 725–736, http://www.journalofvision.org/content/3/11/7, doi:10.1167/3.11.7. [PubMed] [Article]
Suh M. Kolster R. Sarkar R. McCandliss B. Ghajar J. & the Cognitive and Neurobiological Research Consortium. (2006). Deficits in predictive smooth pursuit after mild traumatic brain injury. Neuroscience Letters, 401 (1–2), 108–113. [CrossRef] [PubMed]
Sweet B. T. Stone L. S. Liston D. B. Hebert T. M. (2008). Effects of spatio-temporal aliasing on out-the-window visual systems. Paper presented at the IMAGE Society conference, St. Louis, Missouri.
Takarae Y. Minshew N. J. Luna B. Krisky C. M. Sweeney J. A. (2004). Pursuit eye movement deficits in autism. Brain, 127 (12), 2584–2594. [CrossRef] [PubMed]
Thier P. Bachor A. Faiss J. Dichgans J. Koenig E. (1991). Selective impairment of smooth-pursuit eye movements due to an ischemic lesion of the basal pons. Annals of Neurology, 29 (4), 443–448. [CrossRef] [PubMed]
Thurston S. E. Leigh R. J. Crawford T. Thompson A. Kennard C. (1988). Two distinct deficits of visual tracking caused by unilateral lesions of cerebral cortex in humans. Annals of Neurology, 23 (3), 266–273. [CrossRef] [PubMed]
Turano K. Wang X. (1992). Motion thresholds in retinitis pigmentosa. Investigative Ophthalmology & Visual Science, 33 (8), 2411–2422, http://www.iovs.org/content/33/8/2411. [PubMed] [Article]
Tychsen L. Lisberger S. G. (1986). Visual motion processing for the initiation of smooth-pursuit eye movements in humans. Journal of Neurophysiology, 56 (4), 953–968. [PubMed]
van Zoest W. Hunt A. R. (2011). Saccadic eye movements and perceptual judgments reveal a shared visual representation that is increasingly accurate over time. Vision Research, 51 (1), 111–119. [CrossRef] [PubMed]
Watamaniuk S. N. Heinen S. J. (1999). Human smooth pursuit direction discrimination. Vision Research, 39 (1), 59–70. [CrossRef] [PubMed]
Watson A. B. (2013). High frame rates and human vision: A view through the window of visibility. Society of Motion Picture and Television Engineers Motion Imaging Journal, 122 (2), 18–32.
Wessel K. Moschner C. Wandinger K. P. Kompf D. Heide W. (1998). Oculomotor testing in the differential diagnosis of degenerative ataxic disorders. Archives of Neurology, 55 (7), 949–956. [CrossRef] [PubMed]
Winn B. J. Liao Y. J. Horton J. C. (2007). Intracranial pressure returns to normal about a month after stopping tetracycline antibiotics. Archives of Ophthalmology, 125 (8), 1137–1138. [CrossRef] [PubMed]
Zee D. S. (2012). What the future holds for the study of saccades. Biocybernetics & Biomedical Engineering, 32 (2), 65–76.
Zihl J. von Cramon D. Mai N. (1983). Selective disturbance of movement vision after bilateral brain damage. Brain, 106, 313–340. [CrossRef] [PubMed]
Appendix
Oculometrics and psychometrics: Conversions, comparisons, and caveats
Eye-tracking and psychophysical studies have demonstrated that the neural processing contributing to motion perception can be read out using smooth-pursuit eye movements. First, studies quantifying the precision of pursuit, for both speed and direction, have concluded that the signals driving pursuit and those supporting perceptual responses are limited by the same sources of internal noise (Beutter, Eckstein, & Stone, 2003; Beutter & Stone, 1998; Kowler & McKee, 1987; Osborne, Lisberger, & Bialek, 2005; Stone & Krauzlis, 2003). Second, linked effects on response bias for pursuit and perception indicate a common motion-integration stage (Beutter & Stone, 2000; Stone et al., 2000) or a common site where top-down factors influence sensory processing (Krauzlis & Adler, 2001). Last, a "sameness" analysis (Green, 1964) comparing discrete choices made by pursuit and perception on a trial-by-trial basis demonstrates shared sources of internal noise (Beutter et al., 2003; Beutter & Stone, 1998; Krauzlis & Adler, 2001; Stone & Krauzlis, 2003). Together, these findings establish a tight linkage between the continuous, real-time readout provided by the pursuit system and motion perception (as measured with traditional button-press psychophysics), validating the oculometric (eye-movement-based) approach to measuring perceptual performance on visual motion tasks (Stone et al., 2009).
Direction- and speed-tuning oculometrics can be converted into standard psychometric measurements. Figure A1 illustrates the step-by-step process of converting our direction-tuning metrics (anisotropy, amplitude, and direction noise) from an example subject into an oculometric function. Starting with the two parameters that describe the shape of the direction-gain function (Figure A1, Equation 1), we simulated the slope of the function relating pursuit direction to motion direction (Figure A1, center panel) across cardinal and oblique directions of motion, then added Gaussian-distributed directional noise. We then simulated a 2IFC direction-discrimination task in which a standard motion signal along either a cardinal or an oblique axis is compared to a test signal along a similar trajectory, yielding oculometric functions (Figure A1, right panel) for the cardinal and oblique directions of motion, which are given by our three direction-tuning metrics. Similarly, the linear gain and noise components of the speed-tuning response (Figures 1 and A2) can be converted into an oculometric function for a simulated 2IFC speed-discrimination task, yielding a threshold or a Weber fraction (threshold/mean).
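As a concrete illustration of this conversion, the following is a minimal Monte Carlo sketch in Python/NumPy (our language choice; the paper provides no code). The slope and noise values are hypothetical placeholders, and the sketch stands in for, but does not reproduce, the paper's Equation 1 or measured data.

```python
# A sketch, not the authors' code: build a simulated 2IFC direction-discrimination
# oculometric function from a local direction slope and Gaussian direction noise.
import numpy as np

rng = np.random.default_rng(0)

def oculometric_function(slope, dir_noise_deg, test_offsets_deg, n_trials=100_000):
    """Proportion of 'test more clockwise' responses at each test offset,
    estimated from pairwise comparisons of standard and test pursuit directions."""
    standard = rng.normal(0.0, dir_noise_deg, n_trials)          # pursuit dirs, standard
    p_cw = []
    for off in test_offsets_deg:
        test = rng.normal(slope * off, dir_noise_deg, n_trials)  # pursuit dirs, test
        p_cw.append(np.mean(test > standard))                    # ~ pairwise ROC area
    return np.array(p_cw)

slope, noise = 0.9, 2.0              # hypothetical slope (deg/deg) and direction noise (deg)
offsets = np.linspace(-10, 10, 9)    # test directions relative to the standard (deg)
p_clockwise = oculometric_function(slope, noise, offsets)

print(p_clockwise)
print("predicted 2IFC sigma (deg):", np.sqrt(2) * noise / slope)
```

Because the standard and test samples are independent and Gaussian, the resulting function is a cumulative Gaussian whose standard deviation equals the noise-to-signal ratio scaled by √2, matching the relation stated in the Figure A1 caption.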
Figure A1
 
Conversion of oculometric measurements to oculometric functions for direction discrimination. The solid gray cloverleaf shape in the left-hand panel plots a direction-gain function described by Equation 1 on a solid black reference circle of unity gain. The two colored arrows represent two standard directions of motion along cardinal (blue) and oblique (red) axes, to which nearby directions of motion might be compared in a two-interval forced-choice (2IFC) discrimination task. Filled circles in the center panel represent a simulation (100,000 trials) of pursuit direction tuning with respect to the two standard directions of motion: Each filled circle represents the average direction of pursuit tracking with respect to a fixed standard direction, and the error bars represent the standard deviation across simulated trials. The solid lines are linear predictions given by our direction-tuning metrics, not linear regressions of the simulated data. The psychometric functions in the right-hand panel simulate a 2IFC direction discrimination using the pursuit direction-tuning responses; the simulated observer reports whether the motion in the standard or the test interval appeared more clockwise. Filled circles plot the proportion of clockwise responses, constructed by computing pairwise receiver operating characteristic (ROC) areas between the standard direction and each of the other directions. The direction-discrimination threshold (i.e., the noise-to-signal ratio) is given by the ratio of the DIR noise to the direction slope; the standard deviation of the oculometric function in this simulated 2IFC task is the threshold scaled by √2 (De Bruyn & Orban, 1988; Krukowski & Stone, 2005). The two solid cumulative Gaussian psychometric functions are given by our direction-tuning metrics, not fits to the simulated data.
Figure A2
 
Conversion of oculometric measurements to oculometric functions for speed discrimination. Solid black squares in the left-hand panel plot a simulation of the speed-tuning function. The psychometric function in the right-hand panel simulates a 2IFC speed discrimination using the pursuit speed-tuning data (Gegenfurtner et al., 2003). The simulated 2IFC task presents the standard (20 deg/s) stimulus during one interval and a test speed during the other, requiring the observer to report which interval seemed faster. The percentage of "faster" responses for the test stimulus is plotted as a function of test target speed (McKee, 1981). The speed-tuning threshold (i.e., the noise-to-signal ratio) is the ratio of SPD_noise to SPD_slope; the standard deviation of the 2IFC psychometric function is the threshold scaled by √2 to account for the 2IFC discrimination.
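To make the arithmetic in this caption explicit, here is a short, hedged example in Python. The 20 deg/s standard comes from the caption; the SPD_slope and SPD_noise values are placeholders, not the paper's data.

```python
# Sketch: turn speed-tuning slope and noise into a discrimination threshold,
# a 2IFC sigma, and a Weber fraction. All values are hypothetical except the
# 20 deg/s standard, which comes from the caption.
import math

SPD_slope = 0.92        # pursuit speed gained per deg/s of target speed (placeholder)
SPD_noise = 1.5         # deg/s, trial-to-trial SD of pursuit speed (placeholder)
standard_speed = 20.0   # deg/s, standard stimulus in the simulated 2IFC task

threshold = SPD_noise / SPD_slope            # noise-to-signal ratio (deg/s)
sigma_2ifc = math.sqrt(2) * threshold        # SD of the 2IFC psychometric function
weber_fraction = threshold / standard_speed  # threshold relative to the mean speed

print(threshold, sigma_2ifc, weber_fraction)
```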
Comparisons between oculometric and psychophysical measurements must always consider the inherent differences in the visual-input and motor-output delays of the oculomotor and perceptual systems. For example, smooth pursuit provides a continuous readout of time-evolving neural signals, whereas psychometric functions are constructed from button presses, which occur at a discrete point in time and are limited by the output delays and noise of manual responses. In simple choice tasks, smooth-pursuit latencies range from approximately 120–200 ms, saccadic latencies in similar tasks range from 150–350 ms, and button-press latencies are longer still, ranging from 200–500 ms. To test functional hypotheses about neural processing using two systems with different motor output delays, the timing of the motor output must be taken into account in the design of both the visual discrimination and the response (Beutter et al., 2003; van Zoest & Hunt, 2011). Left uncontrolled, the amount of visual processing time available will introduce a clear inequality, putting short-latency responses at a competitive disadvantage because they tend to fall along the rising portion of the speed-accuracy curve (Beutter et al., 2003; Liston & Krauzlis, 2005). When visual processing time is controlled, performance differences between systems (perceptual or oculomotor) have been interpreted as differences in the read-out mechanism applied to the same time-evolving neural variable (Liston & Krauzlis, 2003, 2005) or as the result of unshared sources of output noise (Liston & Stone, 2008; Osborne et al., 2007; Stone & Krauzlis, 2003). Although we did not measure motion perception directly in the present set of experiments, any discussion of oculometric assessment of perceptual motion-processing deficits must include this methodological and interpretational caveat.
Figure 1
 
Summary of oculometric measurements for one subject. Each 15-min session consisted of 180 trials, and yielded 10 metrics. Histograms in the left-hand column plot across-trial measurements of pursuit motor function; the oculometric direction-tuning and speed-tuning measurements are shown in the right-hand column. The measurements of pursuit initiation (INIT) yield a skewed recinormal distribution of latencies and a quasinormal distribution of accelerations. Measurements of steady-state (SS) tracking (400–700 ms after motion onset) include pursuit gain, the average amplitude of saccadic intrusions, and the proportion of eye displacement that consisted of smooth tracking. The direction-tuning (DIR) scatterplot shows pursuit direction as a function of target direction for each trial; the inset shows the cloverleaf anisotropy (blue dashed line) referenced to a circle of unity gain. The speed-tuning (SPD) scatterplot plots pursuit speed as a function of target speed (solid black circles), the across-trial median (solid black square), and the speed-tuning slope (solid red line).
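For readers who want to reproduce this kind of summary from their own eye-movement data, the sketch below shows two of the simpler computations implied by the caption: a recinormal description of pursuit latency (i.e., a Gaussian description of reciprocal latency) and steady-state pursuit gain over the 400–700 ms window. It is our own minimal Python rendering, not the authors' analysis pipeline, and the function names are invented.

```python
# Our own minimal sketch (not the authors' pipeline): recinormal latency summary
# and steady-state pursuit gain for the 400-700 ms analysis window.
import numpy as np

def recinormal_params(latencies_s):
    """Mean and SD of reciprocal latency; a recinormal latency distribution is
    one whose reciprocal is normally distributed."""
    recip = 1.0 / np.asarray(latencies_s, dtype=float)
    return recip.mean(), recip.std(ddof=1)

def steady_state_gain(eye_speed, target_speed, t, window=(0.4, 0.7)):
    """Mean eye speed divided by mean target speed within the analysis window."""
    eye_speed, target_speed, t = map(np.asarray, (eye_speed, target_speed, t))
    mask = (t >= window[0]) & (t <= window[1])
    return eye_speed[mask].mean() / target_speed[mask].mean()

# Example usage with made-up latencies (in seconds):
mu_recip, sd_recip = recinormal_params([0.18, 0.21, 0.19, 0.24, 0.20])
```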
Figure 2
 
Test–retest reliability. To assess whether our oculometric measures were stable enough to distinguish across-subject differences, we made repeated measurements across five sessions for each of the six subjects. Each black filled circle plots the mean oculometric measurement across sessions for one subject; the gray error bars represent the entire range of measurements for that subject. We observed significant differences across our pool of six subjects for all oculometric measures (Kruskal-Wallis, p < 0.0001). The ratio of across-observer variance to within-observer variance is given for each metric.
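A hedged sketch of the reliability computations named in the caption, using hypothetical session data and one reasonable definition of the across-/within-observer variance ratio (the caption does not spell out the exact formula):

```python
# Hedged sketch of the reliability analysis: Kruskal-Wallis across subjects and an
# across-/within-observer variance ratio. Session values here are hypothetical.
import numpy as np
from scipy.stats import kruskal

sessions = {                       # one oculometric metric, five sessions per subject
    "s1": [0.91, 0.93, 0.90, 0.92, 0.94],
    "s2": [0.80, 0.82, 0.79, 0.81, 0.83],
    "s3": [0.97, 0.96, 0.98, 0.95, 0.97],
}

groups = list(sessions.values())
H, p = kruskal(*groups)                                    # do subjects differ?
within_var = np.mean([np.var(g, ddof=1) for g in groups])  # average within-subject variance
across_var = np.var([np.mean(g) for g in groups], ddof=1)  # variance of subject means
print(f"Kruskal-Wallis p = {p:.3g}, variance ratio = {across_var / within_var:.1f}")
```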
Figure 3
 
Validation of oculometric measures with sampled motion stimuli. Each row contains axes plotting one oculometric measurement as a function of target speed: one set of axes in the sample-and-hold condition (left-hand column), and one set for the sample-and-blank condition (right-hand column). The color series in each set of axes represents the sampling frequencies from 30 Hz (bright yellow) to 960 Hz (black). The data in each panel are zeroed about the mean value across all target speeds and sampling frequencies for each observer; error bars plot the standard error of the mean across observers.
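The normalization described in this caption can be sketched as follows (hypothetical array shape; not the authors' code): each observer's measurements are zeroed about that observer's grand mean across target speeds and sampling frequencies before averaging across observers.

```python
# Sketch of the per-observer normalization described in the caption
# (hypothetical data; shape = observers x target speeds x sampling frequencies).
import numpy as np

data = np.random.default_rng(1).normal(size=(8, 4, 6))

centered = data - data.mean(axis=(1, 2), keepdims=True)    # zero each observer
mean_per_condition = centered.mean(axis=0)                 # average across observers
sem_per_condition = centered.std(axis=0, ddof=1) / np.sqrt(data.shape[0])
```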
Figure 4
 
Benchmark measures across a population of 41 subjects. The first four oculometric measures are across-session median or average values for a distribution of trial-by-trial measurements. The last two oculometric measures are taken from data from one session. Oculometric measures for normal subject populations will allow an individual subject's data to be characterized as a multidimensional vector. By converting these distributions to standard normal distributions, characteristic deficit patterns resulting from disease can be expressed as a vector deviation from the origin; detection metrics can be derived by taking the dot product of an individual subject's data with the direction of impairment.
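A minimal sketch of the screening scheme suggested in this caption, with placeholder normative values and a hypothetical "direction of impairment" vector (none of these numbers come from the paper):

```python
# Minimal sketch of the screening idea in the caption: z-score a subject's metric
# vector against normative statistics, then project onto a hypothetical
# "direction of impairment". All numbers are placeholders.
import numpy as np

rng = np.random.default_rng(2)
norm_mean = rng.normal(size=10)              # normative means of the 10 metrics
norm_sd = np.abs(rng.normal(size=10)) + 0.5  # normative SDs of the 10 metrics
subject = rng.normal(size=10)                # one subject's raw metric values

z = (subject - norm_mean) / norm_sd          # subject as a vector of z-scores

impairment_dir = np.ones(10) / np.sqrt(10)   # hypothetical unit vector of impairment
detection_metric = z @ impairment_dir        # dot product, as suggested in the caption
print(detection_metric)
```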
Figure 5
 
Correlation analysis. The grayscale plots illustrate the correlations among our 10 metrics. In the left-hand panel, the grayscale value represents the r² value of the pairwise correlation between our set of 10 metrics as well as visual acuity and age. The uninformative values along the diagonal have been colored black, and all r² values are plotted. The metrics were ordered according to their average correlation strength with all other measurements. In the right-hand panel, a p < 0.01 threshold was applied, leaving only correlation values stronger than predicted by chance (p = 0.015). The red outlines delimit the boundaries of families of metrics with mutual correlation of r² > 0.2; the first family contains eight of the 10 metrics, and the second family contains the last two metrics, which quantify the nonlinear properties of pursuit direction tuning. White text indicates the r² value.
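A hedged Python sketch of this correlation analysis (placeholder data; pairwise Pearson r² with a p < 0.01 mask is our reading of the caption, not necessarily the authors' exact procedure):

```python
# Hedged sketch of the correlation analysis: pairwise r^2 among measures with a
# p < 0.01 mask. Data are placeholders (41 subjects x 12 measures: 10 metrics
# plus visual acuity and age).
import numpy as np
from scipy.stats import pearsonr

X = np.random.default_rng(3).normal(size=(41, 12))

n = X.shape[1]
r2 = np.zeros((n, n))
keep = np.zeros((n, n), dtype=bool)
for i in range(n):
    for j in range(n):
        if i == j:
            continue                        # diagonal is uninformative; leave at 0
        r, p = pearsonr(X[:, i], X[:, j])
        r2[i, j] = r ** 2
        keep[i, j] = p < 0.01               # retain only sub-threshold correlations

thresholded = np.where(keep, r2, np.nan)    # right-hand-panel style matrix
```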