Dynamic gaze-position prediction of saccadic eye movements using a Taylor series
Author Affiliations
  • Shuhang Wang
    Schepens Eye Research Institute, Mass Eye and Ear, and Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
    Shuhang_Wang@MEEI.HARVARD.EDU
  • Russell L. Woods
    Schepens Eye Research Institute, Mass Eye and Ear, and Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
    Russell_Woods@MEEI.HARVARD.EDU
  • Francisco M. Costela
    Schepens Eye Research Institute, Mass Eye and Ear, and Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
    Francisco_Costela@MEEI.HARVARD.EDU
  • Gang Luo
    Schepens Eye Research Institute, Mass Eye and Ear, and Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
    Gang_Luo@MEEI.HARVARD.EDU
Journal of Vision December 2017, Vol.17, 3. doi:https://doi.org/10.1167/17.14.3
Abstract

Gaze-contingent displays have been widely used in vision research and virtual reality applications. Due to data transmission, image processing, and display preparation, the time delay between the eye tracker and the monitor update may lead to a misalignment between the eye position and the image manipulation during eye movements. We propose a method to reduce the misalignment using a Taylor series to predict the saccadic eye movement. The proposed method was evaluated using two large datasets including 219,335 human saccades (collected with an EyeLink 1000 system, 95% range from 1° to 32°) and 21,844 monkey saccades (collected with a scleral search coil, 95% range from 1° to 9°). Assuming a 10-ms time delay, the prediction of saccade movements using the proposed method reduced the misalignment more than state-of-the-art methods. The median error was about 0.93° for human saccades and 0.26° for monkey saccades. Our results suggest that this saccade prediction method will enable more accurate gaze-contingent displays.

Introduction
A saccade is a rapid, ballistic eye movement between gaze fixations (Liversedge & Findlay, 2000). During typical viewing of a scene, saccadic eye movements align the fovea with different regions of the visual scene to gather the highest-resolution information about objects of interest (Bahill & Stark, 1979). Saccadic eye movements play an important role in exploring the visual scene, helping to construct perceptual representations of the environment, and reflect underlying visual attention (Baloh, Sills, Kumley, & Honrubia, 1975; Rayner, 1978; Remington, 1980). 
Visual perception can be manipulated using a methodology called a gaze-contingent display (GCD) that updates the system's display depending on the gaze, i.e., head and eye movements (Duchowski, Cournia, & Murphy, 2004; Reder, 1973). The general process of a GCD is first to detect the gaze direction using an eye tracker, and then to manipulate the image on the display synchronously according to the gaze direction (Aguilar & Castet, 2011; Han, Saunders, Woods, & Luo, 2013; Santini, Redner, Iovin, & Rucci, 2007). GCD paradigms have been used in a variety of applications, including vision science research (Loschky & McConkie, 2002; Pidcoe & Wetzel, 2006; Rayner, 2014; Zang, Jia, Müller, & Shi, 2015), virtual reality (Sheldon, Abegg, Sekunova, & Barton, 2012; Wade et al., 2016), video transmission (Duchowski et al., 2004), and driving simulators (Reingold, Loschky, McConkie, & Stampe, 2003). 
In vision science research, GCDs have been used to control the visual information presented to observers (Duchowski et al., 2004; Perry & Geisler, 2002; Rayner, 2014). For instance, GCDs have been used extensively in the study of reading (Rayner, 1998, 2014; Sheldon et al., 2012). With the help of GCDs, vision loss such as central vision loss, hemianopia, and tunnel vision (Mathôt, Melmi, & Castet, 2015; Pidcoe & Wetzel, 2006; Sheldon et al., 2012) can be simulated with normally sighted observers. This type of stimulus manipulation in studies is also used to understand cognitive load in driving (Gaspar et al., 2016), visual search strategies (Zang et al., 2015), and scene perception (Loschky & McConkie, 2002; Reingold et al., 2003). 
In engineering applications, GCD methodology can speed up image processing and transmission by increasing compression of information that would be in peripheral vision and thus not resolvable by the user (Duchowski et al., 2004; Rayner, 2014). Fletcher and Zelinsky (2009) used the GCD technique to develop a prototype driver assistance system that alerted the driver when the driver failed to fixate on objects of interest (e.g., road signs). Wade et al. (2016) monitored gaze patterns in a virtual reality driving simulator, and participants with Autism Spectrum Disorder were alerted to gaze behavior that was not appropriate for the driving situation. Moreover, GCD systems can assist in the provision of haptic feedback during minimally invasive surgery (Mylonas et al., 2012). In all GCD systems, the spatial and temporal alignment of gaze location and displayed stimulus is crucial. A misalignment is almost inevitable during eye movements due to the time delay between the eye tracker and the display update, which arises from image processing, data transmission, and display refreshing. The updating delay varies greatly, depending on the equipment and software, ranging from 12 to 40 ms in one study that compared seven displays using software updating (Saunders & Woods, 2014) and as short as 10 ms with dedicated hardware updating (Santini et al., 2007). Since saccades are fast and frequent, even a small time delay may result in a large misalignment. Even though a saccadic eye movement is brief, intrasaccadic perception has been reported in many studies (Campbell & Wurtz, 1978; Castet, 2009; Castet & Masson, 2000; García-Pérez & Peli, 2001; Mathôt et al., 2015). Perhaps more important may be the failure of the stimulus to be correctly placed at the time that the saccade ends (lands). This misalignment may allow the viewer a brief glimpse of display areas that are meant to be masked or altered. Even if the glimpse is less than 10 ms, it could affect perception significantly (Bodelón, Fallah, & Reynolds, 2007; McConkie & Loschky, 2002). 
The current best practice is to update the display image based on the eye position at a time as close as possible to the next refresh (Aguilar & Castet, 2011), but the efficiency of this method depends on the updating delay of the display system. The impact of the updating delay may be reduced by predicting the eye position. Based on the assumption that the velocity function is symmetrical, Anliker (1976) predicted saccade trajectories after peak velocity by mirroring the data points before the peak velocity; however, later studies showed that the velocity profile is skewed (Van Opstal & Van Gisbergen, 1987). Even assuming that the velocity profile is symmetrical, Anliker's (1976) method only works for less than half the duration of a saccade. Komogortsev (2007) developed a model to predict eye movements using a Kalman filter, assuming the error terms have a Gaussian distribution. This model is not applicable to saccades, since the prediction error is strongly associated with the dynamics of saccades, which include large acceleration and deceleration phases. Han et al. (2013) modeled the trajectory of saccadic eye movements using a compressed exponential model. However, they noted that their 1D model did not do well with all saccades, as in 2D space many saccades did not follow the classic ballistic model. Indeed, two saccades with similar trajectories may have very different eye movement speeds, which are crucial for predicting the eye position. This is evident in the noisy main sequence, the relationship between saccade size and peak speed. 
To solve this problem, we propose a method that predicts the eye position dynamically using a Taylor series, without assuming any predefined eye movement model. We validate our saccade-prediction method by comparing it to other GCD updating methods. Our method produced smaller errors between stimulus placement and eye position than the state-of-the-art. Note that the algorithm only attempts to predict the eye movements given by the eye tracker and that the position of gaze in the scene can only be predicted within the accuracy limits imposed by the eye tracker itself (Poletti & Rucci, 2016). 
Method
The prediction method
The intent of gaze-position prediction is to estimate the future eye position based on the past eye movement. It is a common practice to decompose a saccade into a horizontal component and a vertical component, and then predict each component independently (Sparks, 2002). But the horizontal or vertical component of a curved saccade may not be monotonic (Figure 1a), and such non-monotonic movement (Figure 1b) is difficult to model. To avoid this problem, we predict the moving distance and direction instead of the horizontal and vertical components. 
Figure 1
A saccade example with nonmonotonic movement in the horizontal direction. (a) A curved saccade. (b) The variation of the saccade's horizontal component over time. The red point indicates the starting point.
Prediction of the moving distance
Moving-distance prediction methods can be broadly categorized into two groups: specific model regression and universal fitting. For specific model regression, an eye movement model must be assumed first. For instance, Zhou, Chen, and Enderle (2009) used a high-order differential equation derived from a plant model that involved Voigt elements (a pair of viscosity and elasticity elements in parallel) representing the ocular muscles, while Han et al. (2013) used a compressed exponential model to approximate the eye movement trajectory over time. Not all saccades follow those assumed models (Han et al., 2013), so model regression approaches sometimes fail. On the other hand, universal fitting methods are supposed to fit arbitrary shapes of trajectories (Komogortsev, 2007). These methods predict eye movements in straightforward ways, such as polynomial fitting and Taylor series representation. 
Figure 2a and b show the moving distance prediction with 10 and 15 known positions, respectively. We can see that polynomial regression is prone to large deviations near curvature change points, because the high-degree polynomial terms have a great influence on the function. On the other hand, the prediction based on the Taylor series appears relatively more robust. This is largely because, in general, the higher the order of a Taylor term, the smaller its influence. Therefore, we chose to predict the moving distance using a Taylor series. Based on Taylor's theorem, the moving distance at a sampling point can be represented by the sum of the derivative terms at its previous sampling point:  
\begin{equation}\tag{1}D(t + 1) = \sum_{n = 0}^{\infty} \frac{D^{(n)}(t)}{n!}\end{equation}
where \(D(t)\) indicates the moving distance at time \(t\), \(D^{(n)}(t)\) is the nth-order derivative, and \(1/n!\) is the weight of the nth-order derivative.  
Figure 2
An example of the moving distance prediction for one saccade using polynomial functions of the fifth degree (red circles) and Taylor series (red crosses) with (a) 10 known positions and (b) 15 known positions. Filled blue circles are known positions and open blue circles are future eye positions (not known at the time of prediction) of this example saccade. As can be seen in this example, polynomial functions may overfit the known positions, so that they may not predict the moving distance well.
To use this model, the order of the Taylor series needs to be determined. Figure 3 shows the prediction errors for up to 10 orders of derivatives based on the EyeLink dataset (described below). The prediction error decreased as the number of derivatives increased, but remained almost the same once higher-order \((\ge 4)\) derivatives were included. This is because the weights of the higher-order derivatives are very small, and the residuals are almost noise. Therefore, the proposed method only uses the first- to fourth-order derivatives, and the moving distance \(D(t + 1) - D(t)\) can be estimated as:  
\begin{equation}\tag{2}D(t + 1) - D(t) = \sum_{n = 1}^{4} \frac{D^{(n)}(t)}{n!}\end{equation}
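To make Equation 2 concrete, the following is a minimal sketch in Python of a one-step distance prediction, assuming a 1000-Hz stream (so one sample spans 1 ms) and approximating the derivatives with backward finite differences; the paper does not specify its discretization, and the function name and interface are illustrative.

import math
import numpy as np

def predict_next_distance(d):
    # One-step moving-distance prediction per Equation 2.
    # d: 1-D sequence of cumulative moving distances sampled every
    # 1 ms, most recent value last.
    diffs = np.asarray(d, dtype=float)
    step = 0.0
    for n in range(1, 5):               # first- to fourth-order terms
        diffs = np.diff(diffs)          # next-order backward difference
        if diffs.size == 0:             # too few samples: treat term as 0
            break
        step += diffs[-1] / math.factorial(n)
    return d[-1] + step

With fewer than five samples, the missing higher-order terms are simply treated as 0, consistent with the early-start behavior described in the Results section.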
 
Figure 3
Prediction errors with respect to the number of derivatives. Prediction errors were calculated by using different numbers of derivatives to estimate the moving distance.
Prediction of the moving direction
The moving direction of a saccade changes smoothly, so its first-order derivative is almost constant over a short time period and higher-order derivatives are approximately 0. The moving direction can be well modeled using the first- and second-order derivatives:  
\begin{equation}\tag{3}A(t + 1) = \sum_{n = 0}^{2} \frac{A^{(n)}(t)}{n!}\end{equation}
where \(A(t)\) indicates the moving direction.  
Since the values of the derivatives are rather small, measurement noise may have relatively large effects. To overcome this problem, the proposed method estimates the moving direction at three different time scales and takes the average across the three scales:  
\begin{equation}\tag{4}A(t + 1) = \frac{1}{3}\sum_{i = 1}^{3} \sum_{n = 0}^{2} \frac{A_i^{(n)}(t)}{n!}\end{equation}
where \(i\) indicates the time scale, and \(A_i^{(n + 1)}(t) = (A_i^{(n)}(t) - A_i^{(n)}(t - i))/i\).  
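A corresponding sketch for Equation 4 averages the second-order Taylor extrapolation over the three time scales, using the scaled backward differences defined above (again assuming 1-ms sampling and an illustrative interface; at least seven past samples are needed for the i = 3 scale).

import math

def predict_next_direction(a):
    # One-step moving-direction prediction per Equation 4.
    # a: sequence of unwrapped movement directions (radians),
    # sampled every 1 ms, most recent value last.
    total = 0.0
    for i in (1, 2, 3):                          # the three time scales
        a0 = a[-1]                               # zeroth-order term
        a1 = (a[-1] - a[-1 - i]) / i             # first-order, scale i
        a1_prev = (a[-1 - i] - a[-1 - 2 * i]) / i
        a2 = (a1 - a1_prev) / i                  # second-order, scale i
        total += a0 + a1 + a2 / math.factorial(2)
    return total / 3.0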
Combination of the moving distance and direction
After obtaining the moving distance and the moving direction for each of the following \(DT\) milliseconds, we predicted the eye position at \(t + DT\) using the dead-reckoning procedure in each direction:  
\begin{equation}\tag{5}x(t + DT) = x(t) + \sum_{i = 1}^{DT} \left( D(t + i) - D(t + i - 1) \right)\cos(A(t + i))\end{equation}
 
\begin{equation}\tag{6}y(t + DT) = y(t) + \sum_{i = 1}^{DT} \left( D(t + i) - D(t + i - 1) \right)\sin(A(t + i))\end{equation}
where \((x(t),\,y(t))\) indicates the eye position at time \(t\).  
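A sketch of the resulting dead-reckoning loop (Equations 5 and 6) might look as follows; it reuses the two helper functions sketched above and appends each predicted 1-ms step to the history so that later steps build on earlier predictions. This is an illustrative reconstruction, not the authors' implementation.

import math

def predict_position(x, y, dist, ang, dt):
    # Predict the gaze position dt ms ahead (Equations 5 and 6).
    # dist, ang: per-millisecond cumulative distance and direction
    # histories; (x, y): latest eye position in degrees.
    dist, ang = list(dist), list(ang)
    for _ in range(dt):                       # one 1-ms step at a time
        d_next = predict_next_distance(dist)  # Equation 2
        a_next = predict_next_direction(ang)  # Equation 4
        step = d_next - dist[-1]              # incremental distance
        x += step * math.cos(a_next)          # Equation 5
        y += step * math.sin(a_next)          # Equation 6
        dist.append(d_next)                   # feed predictions back in
        ang.append(a_next)
    return x, y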
Data smoothing
Unlike regression methods such as Han et al.'s (2013), which automatically smooth random noise, the Taylor series-based method is sensitive to noise due to its derivative nature, so data smoothing is essential in our method. Two conflicting issues need to be considered: (a) the time window of the smoothing filter should be large enough to suppress noise; and (b) the smoothing filter adds extra latency on top of the tracker-to-display delay. To accommodate the extra latency, our method has to start the prediction earlier (i.e., with less updated data) than the other methods. 
Our method filters the data using a sixth-order Butterworth low-pass filter with a normalized cutoff frequency of 0.005. To compensate for the delay caused by the IIR filter, the filtering is applied in both the forward and backward directions. In real-time systems, however, the last three filtered data points are not reliable, because they will be updated when more eye movement data become available. Therefore, in this study, our method predicts 3 ms further into the future than the other methods. For the same reason, our prediction is reliable only after more than six data points are available. 
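In Python, such zero-phase smoothing could be sketched with SciPy as below. Note that scipy.signal.butter normalizes the cutoff to the Nyquist frequency; whether the paper's value of 0.005 uses the same convention is an assumption here.

import numpy as np
from scipy.signal import butter, filtfilt

# Sixth-order low-pass Butterworth, normalized cutoff 0.005.
b, a = butter(6, 0.005)

def smooth(trace):
    # Forward-backward (zero-phase) filtering of one gaze
    # coordinate trace; filtfilt's default padding requires the
    # trace to be longer than about 21 samples for this filter.
    return filtfilt(b, a, np.asarray(trace, dtype=float))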
Note that the smoothing is not meant to recover the gaze ground truth, which cannot be known exactly; the purpose of data smoothing is to suppress noise so that our method does not fail. 
The flowchart of the proposed method
Figure 4 shows the flowchart for estimating the (\(i + DT\))th point after obtaining the ith sampling point. The proposed method first smooths the raw data using a Butterworth filter, then predicts the moving distance and direction for each of the following milliseconds, and finally estimates the eye position using a dead-reckoning process. 
Figure 4
Flowchart for estimating the (\(i + DT\))th point after obtaining the ith sampling point. DT indicates the system delay time. This method does not use all the filtered samples, because the last n/2 filtered data points of the nth-order Butterworth filter are not reliable.
Participants
Human subjects
Seventy-five normally sighted human subjects (median age: 49.3 years, range 22–85; 38 male, 37 female) participated in two related studies that were approved by the Institutional Review Board of the Schepens Eye Research Institute in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki). Preliminary screening of the participants included self-report of ocular health, measures of visual acuity and contrast sensitivity for a 2.5°-high letter target, and evaluation of fixation and central retinal health using retinal photography (Nidek MP-1, Nidek Technologies, Vigonza, Italy, or Optos OCT/SLO, Marlborough, MA). All participants had visual acuity of 20/25 or better, letter contrast sensitivity of 1.675 log units or better, and steady central fixation with no evidence of retinal defects. 
Macaques
Recordings included data from two awake rhesus macaques (Macaca mulatta) studied at the Barrow Neurological Institute (eye-tracking equipment by Riverbend Instruments, Inc., Birmingham, AL). Standard sterile surgical techniques, recording procedures, and animal care methods were approved by the Institutional Animal Care and Use Committee at Barrow Neurological Institute. Information about the breeding, care, and maintenance of the macaques can be found in previously reported studies that addressed different experimental questions (Costela et al., 2015; Costela et al., 2014). Prior to the eye movement recordings, cranial head-post and scleral search-coil implantation surgeries were conducted, under general anesthesia using aseptic techniques, and with full postoperative analgesia and antibiotic therapy. No animals were sacrificed at the end of the experiments. We followed the ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines. 
Apparatus
Video eye tracking
We collected eye data using an EyeLink 1000 eye tracker (SR Research Ltd., Mississauga, Ontario, Canada) at a 1000-Hz sampling rate while subjects viewed a 27” display (60 × 34 cm) from 1 m for a 33° × 19° potential viewing area. A total of 219,335 saccades were identified from two studies. In the first study, participants watched 40 to 46 of 206 thirty-second “Hollywood” video clips, which were chosen to represent a range of genres and types of depicted activities. The genres included nature documentaries (e.g., BBC's “Deep Blue,” “The March of the Penguins”), cartoons (e.g., Shrek, Mulan), and dramas (e.g., Shakespeare in Love, Pay it Forward). This group of participants (N = 62) contributed a total of 108,640 saccades to the dataset. Participants viewing the 30-s clips were instructed to watch the stimulus “normally, as you would watch television or a movie program at home.” At the end of each clip, the participant was asked to describe the contents of the clip (Saunders, Bex, & Woods, 2013). In the second study, fourteen participants (one was also in the first study) watched at least two of five different 30-min movie clips (Bambi, Inside Job, Juno, Kpax, and Flash of Genius), contributing 110,695 saccades. 
Scleral search coil
Eye position was recorded monocularly at 1000 Hz with a scleral search coil (Martinez-Conde, Macknik, & Hubel, 2000, 2002; Robinson, 1963). Macaques fixated their gaze on a small fixation target (0.5° of visual angle, with a luminance of 24.3 cd/m2) on a video monitor (Reference Calibrator V, 60–120-Hz refresh rate; Barco) placed at a distance of 57 cm. Fruit juice rewards were provided for every 1.5–2 s of fixation. Eye movements exceeding a 2° × 2° fixation window were recorded but not rewarded. A total of 21,844 saccades were identified with the scleral search coil. 
Saccade detection
Saccades were detected off-line using the method described below, but the data used to test the algorithms were the raw data of the saccades so identified. For the EyeLink data, blinks were identified and removed using EyeLink's online data parser. Periods preceding and following the missing data were removed if they exceeded a speed threshold of 30°/s. Then, we interpolated over the removed blink data by applying cubic splines. For saccade detection in both datasets, the raw data were smoothed by applying a third-order Savitzky-Golay filter with a frame size of 14. Without this smoothing, saccade detection was much less reliable. For the scleral search coil data, periods where the gaze followed a characteristic nasalward and downward phase were considered blink periods and ignored during saccade detection. Speed was calculated as the first derivative of the eye position with respect to time. The beginning of a saccade was signaled when speed exceeded 30°/s for at least 10 ms. The end of a saccade was signaled when speed went below 30°/s. Saccades were restricted to those (a) smaller than 40°, as this was approximately the maximum diagonal dimension of the display; and (b) larger than 1° and longer than 15 ms in duration, to exclude microsaccades. We imposed additional restrictions on the initial (<0.075°/ms) and terminal (<0.3°/ms) velocities, and removed saccades whose velocity at the first quartile of their duration was lower than 0.15 of the peak velocity. This threshold removed eye movements with uniformly and unrealistically low velocity profiles during their initial phase, which may have been pursuit eye movements. The smoothed data of the saccades identified using the above procedure were then replaced with the raw data. The rationale is that raw data are all that a real-time algorithm can expect to have available, and are therefore realistic input. 
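As a rough illustration of the onset/offset logic only (speed above 30°/s sustained for at least 10 ms, ending when speed falls below 30°/s), a sketch in Python follows; the amplitude, duration, and velocity-profile screening described above are omitted, and all names are illustrative.

import numpy as np

def detect_saccades(gaze_deg, fs=1000, v_thresh=30.0, min_dur_ms=10):
    # Return (onset, offset) sample indices of candidate saccades.
    # gaze_deg: (N, 2) array of x, y gaze positions in degrees,
    # sampled at fs Hz (1000 Hz in this study).
    vel = np.diff(gaze_deg, axis=0) * fs          # deg/s per axis
    speed = np.hypot(vel[:, 0], vel[:, 1])        # scalar eye speed
    fast = speed > v_thresh
    min_samples = int(min_dur_ms * fs / 1000)
    saccades, i, n = [], 0, len(fast)
    while i < n:
        if fast[i]:
            j = i
            while j < n and fast[j]:              # run above threshold
                j += 1
            if j - i >= min_samples:              # sustained >= 10 ms
                saccades.append((i, j))
            i = j
        else:
            i += 1
    return saccades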
The distribution of the saccade amplitudes is shown in Figure 5. For the EyeLink dataset, 95% of the saccade amplitudes were between 1° and 32°. For the scleral search coil dataset, 95% of the saccade amplitudes were between 1° and 9°. 
Figure 5
The distribution of the saccade amplitudes. (a) The histogram of saccade counts for the EyeLink data. The total number of EyeLink saccades is 219,335; the 95% range of saccade amplitudes is [1°, 32°]. (b) The histogram of saccade counts for the scleral search coil data. The total number of scleral search coil saccades is 21,844; the 95% range of saccade amplitudes is [1°, 9°].
Results
The prediction performance was evaluated by examining the difference (residual error) between the predicted position and the measured position (raw data). Although the raw data includes errors (noise and systematic deviation) and the absolute ground truth is unknown, we can use the raw data as the reference. Correction of error in the raw data is outside of the scope of this paper. In specific applications, error correction can be implemented separately, and our saccade prediction can still be applied. 
To demonstrate the value of the proposed method, the prediction performance was compared with Aguilar and Castet's (2011) method and Han et al.'s (2013) method. We assumed that the system delay was 10 ms, as was done by Han et al. Thus, the prediction was for 10 ms later than the latest position. Aguilar and Castet's method places the stimulus at the latest position (i.e., where the gaze was 10 ms before). For the Taylor series-based method to fully work, our method needs at least n + 1 points to use the nth-order derivatives. When fewer data points are available, the corresponding derivatives can be set to 0, so prediction can start very early. However, Han et al.'s method needs at least six points to calculate the parameters of their model (and they used 10 points in their paper). For comparison here, we adopted the requirement that at least six data points be available before prediction commenced. 
Histograms of the residual errors of the three methods for the EyeLink and scleral search coil datasets are shown in Figure 6. Overall, the residual-error distribution of the proposed method is located more toward the left (lower residual errors) than those of the other methods, meaning it had fewer large errors. Specifically, for the EyeLink data, the median residual errors of Aguilar and Castet's method, Han et al.'s method, and the proposed method were 1.41°, 1.11°, and 0.93°, respectively. For the scleral search coil data, the median residual errors were 0.44°, 0.38°, and 0.26°, respectively. 
Figure 6
The distribution of residual errors of eye position updating with Aguilar and Castet's (2011) method (blue), Han et al.'s (2013) method (green), and the proposed method (yellow). (a) The residual errors for the EyeLink data. (b) The residual errors for the scleral search coil data.
Figure 7 shows the average residual errors for different saccade sizes. The residual errors of Aguilar and Castet's method and Han et al.'s method increased rapidly with saccade size. For the EyeLink data, although Han et al.'s method performed better than Aguilar and Castet's method, its residual errors were, on average, close to 3° for saccades larger than 30°. The residual errors of the proposed method were, on average, less than 2° for all sizes. Similarly, the proposed method also performed better than the other two methods with the scleral search coil data. 
Figure 7
The average residual errors with respect to the saccade amplitude for Aguilar and Castet's (2011) method (blue), Han et al.'s (2013) method (green), and the proposed method (yellow). (a) The residual errors for the EyeLink data. (b) The residual errors for the scleral search coil data. The error bars indicate the standard error of the mean.
Since the moving velocity and the rotation vary considerably during a saccadic eye movement, the accuracy of the prediction may change at different stages. We therefore evaluated the distribution of the residual errors as a function of the saccade amplitude (x axis) and the proportion of saccade amplitude (y axis), which is the saccadic displacement divided by the saccade amplitude, and represents the progression of saccades. 
Figure 8 shows the prediction performance of the three methods at different stages. For the EyeLink data, the residual errors of Han et al.'s method were smaller than those of Aguilar and Castet's method between 20 and 40 ms after saccade onset. For the scleral search coil data, Han et al.'s method provided a smaller improvement over Aguilar and Castet's method than it did in the EyeLink data, because the saccades were shorter and thus there was less time to predict the saccade trajectory. Comparatively, the residual errors of the proposed method were much smaller than those of the other two methods across all stages. 
Figure 8
A summary of the residual errors for the EyeLink data (the first row) and scleral search coil data (the second row). The average residual error in degrees is shown for each saccade-amplitude bin (x axis) by proportion-of-saccade amplitude bin (y axis). Note the different saccade amplitude and average residual error scales in the two rows. Contour lines indicate bins that are 10, 20, 30, and 40 ms after saccade onset. (a1) and (a2) The residual errors by Aguilar and Castet's (2011) method. (b1) and (b2) The residual errors by Han et al.'s (2013) method. (c1) and (c2) The residual errors by the proposed method.
Finally, we tested the calculation time of the three methods. The testing platform used Matlab R2013b, an Intel Core i7-4790 CPU at 3.6 GHz, and Windows 7. The average time to predict an eye position was 0.007 ms for Aguilar and Castet's method, 0.46 ms for Han et al.'s method, and 0.45 ms for the proposed method. Since Aguilar and Castet's method does not need to compute a prediction, it is much faster than the other two methods. The most common refresh rates of monitors used in GCDs are 60, 100, and 120 Hz, which means the display refreshes every 16.7, 10, or 8.3 ms, respectively (note, however, that system updating latencies are considerably longer; Saunders & Woods, 2014). The calculation time of the proposed method is far below that required for such displays. Therefore, the proposed method can be used for real-time applications in most cases. 
Conclusions
In this paper, we have shown that the proposed method outperformed the state-of-the-art methods in predicting saccadic eye movements. The experiment was conducted using two large datasets, which were collected using different eye tracking systems (video-image based EyeLink and scleral search coil). Since we were able to achieve the lowest residual errors for a wide range of saccades made by different species and with two different eye-tracking systems, it seems that the proposed method may be reliable and robust in GCD applications. 
We believe that an important reason the Taylor series-based method works well is that it does not predefine any saccadic movement model, but instead follows the real-time movement trend. As Han et al. (2013) noted, although many eye movement plant models have been proposed, most are not in an elementary-function form, or have too many parameters to be useful in real-time prediction. Simple models with only a few parameters, such as Han et al.'s method compared in this paper, may not be able to precisely model many saccades, especially those made in complex conditions (e.g., watching a movie, as in the EyeLink data). In the EyeLink data, we observed many saccades with drastic changes in moving direction. The most commonly studied causes of eye movement change are distractors, attractors, and repellers (van Zoest, Donk, & Van der Stigchel, 2012). We also noted curved saccades even for eye movements between two simple stimuli with no other stimuli visible. In the case of a distractor, the distraction is independent of the obtained gaze sampling data, so it is difficult to predict this kind of movement change in a timely and accurate manner. Therefore, methods based on predefined models can be prone to systematic errors, unless accurate models are used. 
One drawback of the proposed method is that a low-pass filter is usually needed to suppress random noise, and the filter causes extra latency. If a given eye tracker has too much noise and a high-order low-pass filter must be used, the proposed method would have to accommodate a larger latency than Han et al.'s method. Its performance in such situations needs to be evaluated in future studies. Similarly, low eye-tracking sampling rates can negatively impact the prediction performance of the proposed method, because for a given filter order, the lower the tracking frequency, the longer the filter-induced latency. Based on our experiment, the proposed method had advantages at a 1000-Hz sampling rate. Its prediction performance at lower sampling rates needs to be investigated. 
Acknowledgments
This research was supported in part by NIH grant EY023724. 
Commercial relationships: none. 
Corresponding author: Gang Luo. 
Address: Schepens Eye Research Institute, Boston, MA, USA. 
References
Aguilar, C., & Castet, E. (2011). Gaze-contingent simulation of retinopathy: Some potential pitfalls and remedies. Vision Research, 51 (9), 997–1012.
Anliker, J. (1976). Eye movements: online measurement, analysis, and control. In Monty R. A. & Senders J. W. (Eds.), Eye movements and psychological processes (pp. 185–202). Hillsdale, NJ: Lawrence Erlbaum.
Bahill, A. T., & Stark, L. (1979). The trajectories of saccadic eye movements. Scientific American, 240 (1), 108–117.
Baloh, R. W., Sills, A. W., Kumley, W. E., & Honrubia, V. (1975). Quantitative measurement of saccade amplitude, duration, and velocity. Neurology, 25 (11), 1065.
Bodelón, C., Fallah, M., & Reynolds, J. H. (2007). Temporal resolution for the perception of features and conjunctions. Journal of Neuroscience, 27 (4), 725–730.
Campbell, F. W., & Wurtz, R. H. (1978). Saccadic omission: Why we do not see a grey-out during a saccadic eye movement. Vision Research, 18 (10), 1297–1303.
Castet, E. (2009). Perception of intra-saccadic motion. In Masson G. S. & Ilg U. J. (Eds.), Dynamics of visual motion processing (pp. 213–235). New York, NY: Springer.
Castet, E., & Masson, G. S. (2000). Motion perception during saccadic eye movements. Nature Neuroscience, 3 (2), 177–183.
Costela, F. M., Otero-Millan, J., McCamy, M. B., Macknik, S. L., Di Stasi, L. L., Rieiro, H.,… Martinez-Conde, S. (2015). Characteristics of spontaneous square-wave jerks in the healthy macaque monkey during visual fixation. PLoS One, 10 (6), e0126485.
Costela, F. M., Otero-Millan, J., McCamy, M. B., Macknik, S. L., Troncoso, X. G., Jazi, A. N.,… Martinez-Conde, S. (2014). Fixational eye movement correction of blink-induced gaze position errors. PLoS One, 9 (10), e110889.
Duchowski, A. T., Cournia, N., & Murphy, H. (2004). Gaze-contingent displays: A review. Cyber Psychology & Behavior, 7 (6), 621–634.
Fletcher, L., & Zelinsky, A. (2009). Driver inattention detection based on eye gaze—Road event correlation. The International Journal of Robotics Research, 28 (6), 774–801.
García-Pérez, M. A., & Peli, E. (2001). Intrasaccadic perception. Journal of Neuroscience, 21 (18), 7313–7322.
Gaspar, J. G., Ward, N., Neider, M. B., Crowell, J., Carbonari, R., Kaczmarski, H.,… Loschky, L. C. (2016). Measuring the useful field of view during simulated driving with gaze-contingent displays. Human Factors, 58 (4), 630–641.
Han, P., Saunders, D. R., Woods, R. L., & Luo, G. (2013). Trajectory prediction of saccadic eye movements using a compressed exponential model. Journal of Vision, 13 (8): 27, 1–13, doi:10.1167/13.8.27.
Komogortsev, O. V. (2007). Eye movement prediction by oculomotor plant modeling with Kalman filter (Doctoral dissertation). Kent State University, Kent, Ohio.
Liversedge, S. P., & Findlay, J. M. (2000). Saccadic eye movements and cognition. Trends in Cognitive Sciences, 4 (1), 6–14.
Loschky, L. C., & McConkie, G. W. (2002). Investigating spatial vision and dynamic attentional selection using a gaze-contingent multiresolutional display. Journal of Experimental Psychology: Applied, 8 (2), 99–117.
Martinez-Conde, S., Macknik, S. L., & Hubel, D. H. (2000). Microsaccadic eye movements and firing of single cells in the striate cortex of macaque monkeys. Nature Neuroscience, 3 (3), 251–258.
Martinez-Conde, S., Macknik, S. L., & Hubel, D. H. (2002). The function of bursts of spikes during visual fixation in the awake primate lateral geniculate nucleus and primary visual cortex. Proceedings of the National Academy of Sciences, USA, 99 (21), 13920–13925.
Mathôt, S., Melmi, J.-B., & Castet, E. (2015). Intrasaccadic perception triggers pupillary constriction. PeerJ, 3, e1150.
McConkie, G. W., & Loschky, L. C. (2002). Perception onset time during fixations in free viewing. Behavior Research Methods, Instruments, & Computers, 34 (4), 481–490.
Mylonas, G. P., Kwok, K.-W., James, D. R., Leff, D., Orihuela-Espina, F., Darzi, A., & Yang, G.-Z. (2012). Gaze-contingent motor channeling, haptic constraints and associated cognitive demand for robotic MIS. Medical Image Analysis, 16 (3), 612–631.
Perry, J. S., & Geisler, W. S. (2002). Gaze-contingent real-time simulation of arbitrary visual fields. In Rogowitz B. E. & Pappas T. N. (Eds.), Proceedings of the SPIE 4662, Human Vision and Electronic Imaging VII (pp. 57–69). San Jose, CA: SPIE.
Pidcoe, P. E., & Wetzel, P. A. (2006). Oculomotor tracking strategy in normal subjects with and without simulated scotoma. Investigative Ophthalmology & Visual Science, 47 (1), 169–178.
Poletti, M., & Rucci, M. (2016). A compact field guide to the study of microsaccades: Challenges and functions. Vision Research, 118, 83–97.
Rayner, K. (1978). Eye movements in reading and information processing. Psychological Bulletin, 85 (3), 618.
Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124 (3), 372.
Rayner, K. (2014). The gaze-contingent moving window in reading: Development and review. Visual Cognition, 22 (3–4), 242–258.
Reder, S. M. (1973). On-line monitoring of eye-position signals in contingent and noncontingent paradigms. Behavior Research Methods & Instrumentation, 5 (2), 218–228.
Reingold, E. M., Loschky, L. C., McConkie, G. W., & Stampe, D. M. (2003). Gaze-contingent multiresolutional displays: An integrative review. Human Factors, 45 (2), 307–328.
Remington, R. W. (1980). Attention and saccadic eye movements. Journal of Experimental Psychology: Human Perception and Performance, 6 (4), 726–744.
Robinson, D. (1963). A method of measuring eye movement using a scleral coil in a magnetic field. IEEE Transactions on Biomedical Engineering, 10, 137–145.
Santini, F., Redner, G., Iovin, R., & Rucci, M. (2007). EyeRIS: a general-purpose system for eye-movement-contingent display control. Behavior Research Methods, 39 (3), 350–364.
Saunders, D. R., Bex, P. J., & Woods, R. L. (2013). Crowdsourcing a normative natural language dataset: A comparison of Amazon Mechanical Turk and in-lab data collection. Journal of Medical Internet Research, 15 (5), e100.
Saunders, D. R., & Woods, R. L. (2014). Direct measurement of the system latency of gaze-contingent displays. Behavior Research Methods, 46 (2), 439–447.
Sheldon, C. A., Abegg, M., Sekunova, A., & Barton, J. J. (2012). The word-length effect in acquired alexia, and real and virtual hemianopia. Neuropsychologia, 50 (5), 841–851.
Sparks, D. L. (2002). The brainstem control of saccadic eye movements. Nature Reviews Neuroscience, 3 (12), 952–964.
Van Opstal, A., & Van Gisbergen, J. (1987). Skewness of saccadic velocity profiles: a unifying parameter for normal and slow saccades. Vision Research, 27 (5), 731–745.
van Zoest, W., Donk, M., & Van der Stigchel, S. (2012). Stimulus-salience and the time-course of saccade trajectory deviations. Journal of Vision, 12 (8): 16, 1–13, doi:10.1167/12.8.16.
Wade, J., Zhang, L., Bian, D., Fan, J., Swanson, A., Weitlauf, A., … Sarkar, N. (2016). A gaze-contingent adaptive virtual reality driving environment for intervention in individuals with autism spectrum disorders. ACM Transactions on Interactive Intelligent Systems (TiiS), 6 (1), 3.
Zang, X., Jia, L., Müller, H. J., & Shi, Z. (2015). Invariant spatial context is learned but not retrieved in gaze-contingent tunnel-view search. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41 (3), 807.
Zhou, W., Chen, X., & Enderle, J. (2009). An updated time-optimal 3rd-order linear saccadic eye plant model. International Journal of Neural Systems, 19 (5), 309–330.