Journal of Vision, June 2018, Volume 18, Issue 6
Methods | Open Access

Unsupervised detection of microsaccades in a high-noise regime

Denis Sheynikhovich, Marcia Bécu, Changmin Wu, Angelo Arleo; Unsupervised detection of microsaccades in a high-noise regime. Journal of Vision 2018;18(6):19. https://doi.org/10.1167/18.6.19.
Abstract

Micromovements of the eye during visual fixations provide clues about how our visual system acquires information. The analysis of fixational eye movements can thus serve as a noninvasive means to detect age-related or pathological changes in visual processing, which can in turn reflect associated cognitive or neurological disorders. However, the utility of such diagnostic approaches relies on the quality and usability of detection methods applied for the eye movement analysis. Here, we propose a novel method for (micro)saccade detection that is resistant to high-frequency recording noise, a frequent problem in video-based eye tracking in either aged subjects or subjects suffering from a vision-related pathology. The method is fast, it does not require manual noise removal, and it can work with position, velocity, or acceleration features, or a combination thereof. The detection accuracy of the proposed method is assessed on a new dataset of manually labeled recordings acquired from 14 subjects of advanced age (69–81 years old), performing an ocular fixation task. It is demonstrated that the detection accuracy of the new method compares favorably to that of two frequently used reference methods and that it is comparable to the best of the two algorithms when tested on an existing low-noise eye-tracking dataset.

Introduction
During visual fixation, the eyes produce small involuntary movements, called fixational eye movements (FEMs), which are conventionally separated into saccadic-like jumps (more generally referred to as saccadic intrusions, fixational saccades, or microsaccades), drift, and tremor (Steinman, Haddad, Skavenski, & Wyman, 1973; Abadi, Clement, & Gowen, 2003; Collewijn & Kowler, 2008). During the last decades FEM research has gained a renewed interest due to the possible involvement in perception and attention (Engbert, 2006; Hafed, Chen, & Tian, 2015) and to neurophysiological evidence showing that neural activity is sensitive to different types of FEMs (Martinez-Conde, Macknik, & Hubel, 2000; Kagan, Gur, & Snodderly, 2008). While little functional significance has been attributed to tremor, multiple roles for drift and fixational saccades were suggested. These include error correction (Cornsweet, 1956; Poletti & Rucci, 2010), gaze relocation in high-acuity tasks (Ko, Poletti, & Rucci, 2010; Tian, Yoshida, & Hafed, 2016), enhancement of high-frequency content in input images (Kuang, Poletti, Victor, & Rucci, 2012), and prevention of image fading (Ditchburn, Fender, & Mayne, 1959; Martinez-Conde, Macknik, Troncoso, & Dyar, 2006, but see Collewijn & Kowler, 2008; Poletti & Rucci, 2010). In addition, eye movements, including microsaccades, have been linked with postural sway (Hunter & Hoffman, 2001; Jahn, 2002; Rolfs, Engbert, & Kliegl, 2004). In a more cognitive domain, timing of microsaccades has been associated with shifts in covert attention (Hafed & Clark, 2002; Engbert & Kliegl, 2003; Yuval-Greenberg, Merriam, & Heeger, 2014) and was shown to correlate with the task-induced cognitive load (Bonneh, Adini, Fried, & Arieli, 2011). Given these diverse functional roles of FEMs, eye movement analysis is often used as a diagnostic tool in clinical and aging research (Leigh & Zee, 2015). 
In particular, changes in eye movement properties with respect to age-matched controls have been characterized in patients with age-related macular degeneration (Moller & Bek, 1998; Macedo, Crossland, & Rubin, 2011), glaucoma (Crabb, Smith, & Zhu, 2014), Parkinsonian disorders (Otero-Millan et al., 2011), and Alzheimer disease (Parkinson & Maxner, 2005; Kapoula et al., 2014). 
The utility of FEM analysis in clinical and aging research critically depends on the availability of reliable detection methods for different types of FEMs. The quality and complexity of an appropriate detection method is in turn determined by the type of hardware and the associated quality of FEM recordings. While high precision eye trackers, such as those based on electromagnetic coils and dual-Purkinje images, are sometimes used in eye movement research (rarely in humans, Kagan et al., 2008; Poletti, Rucci, & Carrasco, 2017; more often in primates, Hafed et al., 2011), video-based eye trackers are by far the most used in human studies, including aging research (Irving, Steinbach, Lillakas, Babu, & Hutchings, 2006; Holmqvist et al., 2011; Port, Trimberger, Hitzeman, Redick, & Beckerman, 2016). These eye trackers are readily available and easy to maintain, they do not require subjects to wear lenses, and they do not limit the visual field of recording (in contrast to some high-precision recording systems; Holmqvist et al., 2011), which are important practical constraints in aging research. 
The dominating method used to estimate gaze direction in video-based systems consists in tracking the position of the pupil in a video recording of the eye. Pupil detection problems, caused by optical artifacts or eye-related disorders, often result in a high-frequency recording noise in these systems (Holmqvist et al., 2011). This is especially true for aged subjects, since “dropping eyelids” caused by fatigue or age-related abnormalities constitute a frequent phenomenon (Nyström, Andersson, Holmqvist, & van de Weijer, 2013; Damasceno et al., 2015). Such a high-frequency noise leads to inaccuracies of microsaccade detection methods that are based on velocity thresholds, since these thresholds are usually chosen depending on the velocity distribution in the sample (Engbert & Mergenthaler, 2006; Otero-Millan, Castro, Macknik, & Martinez-Conde, 2014). A substantial amount of high-frequency noise leads to an excessively elevated threshold value, leading to a massive loss of microsaccades and hence potentially biased statistical analyses. More recently, Bayesian methods were developed for (micro)saccade detection that are not based on velocity thresholds but assume a particular generative model for saccades and Gaussian noise sources (Daye & Optican, 2014; Mihali, van Opheusden, & Ma, 2017). 
In this paper we propose a new (micro)saccade detection method, based on an unsupervised clustering approach, that can effectively separate fixational microsaccades from high-frequency recording noise. The accuracy of the method is assessed on a new labeled dataset consisting of experimental recordings from aged subjects. The method is compared with previous methods in terms of performance and practical use, and finally, a procedure for tuning its main free parameter from the experimental data is proposed. 
Methods
Data collection
Eye-position data were recorded monocularly from 14 subjects aged 69–81 years (75.27 ± 4.25). The participants were recruited as part of the Silversight cohort study (Vision Institute, Paris, France), and they were ascertained to have no visual, audio-vestibular, neuropsychological, or cognitive losses. Subjects with refractive errors larger than +4.00 D or lower than −4.00 D, astigmatism larger than 1 D, visual acuity lower than 8/10 (0.1 logMAR), or dissociated phoria larger than 6Δ were excluded. The clinical screening was ethically approved by the “CPP Ile de France,” and it was carried out under medical supervision at the Clinical Investigation Center of the Quinze-Vingts Ophtalmologic Hospital, Paris. 
The EyeLink 1000 (SR Research Ltd., Ontario, Canada) static tower-mounted video-based eye tracker with forehead and chin rest was used for recordings at 1-kHz frequency and manufacturer-stated spatial resolution <0.01° (root-mean-square). Subjects were asked to fixate a bull's-eye–shaped target (0.6° × 0.6° of visual angle) at the center of a computer screen (1,280 × 1,024 pixels, 60 Hz, 32 bits, RGB) located 57 cm in front of the subject and covering 30° × 38° of visual angle. A 9-point calibration procedure was performed at the beginning of each session for all subjects. The experiments were performed in the dark, and throughout the recording subjects wore trial frames to ensure their best corrected visual acuity for this particular eye–screen distance. To reduce accommodative demands, between +1.00 and +1.75 D was added to the refractive error. Five sessions were performed per subject, 10 trials per session. Each trial lasted 32 s, resulting in 26.66 min total recording time per subject. 
In order to quantify the performance of our microsaccade detection method and compare it to other methods, second trials of all experimental sessions were manually labeled, giving rise to five labeled traces per subject. All traces were labeled independently by three experts (including M.B. and C.W.), who marked all microsaccade-like events as either “microsaccade,” “ambiguous event,” or “artifact.” These labels were then combined into a final labeled dataset by the majority vote, resulting in 5,049 labeled microsaccades, 480 ambiguous events, and 38 artifacts in total. 
In addition, the performance of the algorithm was comparatively assessed on an existing labeled dataset provided by M. Rucci (Active Perception Laboratory, University of Rochester). This dataset included two-dimensional traces from four young subjects (22–31 years old) performing a fixation task similar to the one used in our experiment (each subject performed 16–90 fixation trials, 12–15 s each). Monocular recordings at 1 kHz were conducted using a high-resolution Dual-Purkinje eye tracker system. 
Preprocessing and event detection
Raw recording data for each subject consisted of horizontal and vertical coordinates of the eye in degrees of visual angle, along with the corresponding timestamps (in milliseconds). The data from all sessions/trials were stacked into one matrix (per subject) with three columns \([\vec t,\vec x,\vec y\,{]}\) (this is not a necessary step, as the algorithm can work with traces from a single trial, provided that they include a sufficient number of microsaccades; see Discussion). Parts of the trace corresponding to a 200-ms period around blink events, automatically detected by the EyeLink software, were removed as an obvious source of noise. Due to the presence of blinks and the concatenation of multiple trials, the combined data traces exhibited artifactual jumps in the x and y traces. These jumps were removed by aligning data traces independently for the horizontal and vertical components. No other data preprocessing was performed. 
To detect velocity peaks as candidate microsaccade events, the horizontal and vertical velocity components were calculated from the eye position recordings using an 11-point average  
\begin{equation}\tag{1}{v_x} = {{\sum\nolimits_{i = 1}^5 ( {x_{n + i}} - {x_{n - i}})} \over {30\Delta t}}\end{equation}
and similarly for the vertical component vy, with Δt = 1 ms (Engbert & Kliegl, 2003). This is equivalent to smoothing the position by a triangular normalized Bartlett window (of size 12) and then differentiating (Otero-Millan et al., 2014). Velocity smoothing using this method was very close to smoothing using Savitzky-Golay differentiation filters of second order (filter size 11; Nyström & Holmqvist, 2010).  
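For concreteness, the moving average of Equation 1 can be written out directly. The following is a minimal NumPy sketch, not the authors' MATLAB implementation; the function name `velocity_11pt` is ours:

```python
import numpy as np

def velocity_11pt(x, dt=0.001):
    """Velocity estimate from Equation 1: the 11-point average
    v_n = sum_{i=1..5} (x[n+i] - x[n-i]) / (30 * dt).
    Returns an array the same length as x; the first and last five
    samples, where the window is incomplete, are set to NaN."""
    x = np.asarray(x, dtype=float)
    v = np.full_like(x, np.nan)
    for n in range(5, len(x) - 5):
        v[n] = sum(x[n + i] - x[n - i] for i in range(1, 6)) / (30.0 * dt)
    return v
```

For a position trace drifting at a constant rate, this filter recovers the drift velocity exactly, consistent with its interpretation as Bartlett-window smoothing followed by differentiation.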
Following the well-known detection procedure proposed by Engbert and Kliegl (2003), separate velocity thresholds ηx and ηy were computed for the horizontal and vertical components using a multiple (λ) of median SD estimator:  
\begin{equation}\tag{2}{\eta _{x,y}} = \lambda \sqrt {\langle v_{x,y}^2\rangle - {{\langle {v_{x,y}}\rangle }^2}} \end{equation}
where \(\langle \cdot \rangle \) denotes the median value of the argument. These thresholds define an ellipsoid in the velocity space, such that any velocity point lying outside the ellipsoid corresponds to a high-speed event in the trace. The time point corresponding to such an event was considered as the time of a candidate microsaccade. The value of λ controls the initial separation of potential microsaccades from noise, and was shown to produce the best results with empirically defined values of 5 or 6 (Engbert & Kliegl, 2003; Engbert & Mergenthaler, 2006). In our algorithm we used a permissive value λ = 5 to detect most of the velocity peaks, under the constraint that consecutive peaks be separated by at least 30 ms. In addition, peaks with unnaturally high velocity (>1,000°/s) were discarded.  
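The threshold and candidate-selection step can be sketched as follows (a NumPy sketch under the stated settings, assuming 1-kHz sampling so that 30 samples correspond to 30 ms; the function names are ours, not from the published MATLAB code):

```python
import numpy as np

def velocity_thresholds(vx, vy, lam=5.0):
    """Median-based velocity thresholds (Equation 2), one per component."""
    def eta(v):
        v = np.asarray(v)
        v = v[~np.isnan(v)]
        return lam * np.sqrt(np.median(v**2) - np.median(v)**2)
    return eta(vx), eta(vy)

def candidate_peaks(vx, vy, eta_x, eta_y, min_sep=30, vmax=1000.0):
    """Indices of candidate microsaccade peaks: samples outside the
    threshold ellipse, kept as speed maxima at least min_sep samples
    apart, with unnaturally fast samples (>= vmax deg/s) discarded."""
    speed = np.hypot(vx, vy)
    outside = (vx / eta_x) ** 2 + (vy / eta_y) ** 2 > 1.0
    idx = np.where(outside & (speed < vmax))[0]
    # greedy selection of peaks in descending order of speed
    peaks = []
    for i in idx[np.argsort(-speed[idx])]:
        if all(abs(i - p) >= min_sep for p in peaks):
            peaks.append(i)
    return sorted(peaks)
```

Note that, unlike in the EK method, the peaks returned here are single time points; the shape of the trace around each peak is examined only at the later clustering stage.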
If the noise level is low and microsaccades are the only events with a relatively high speed, then microsaccades can be efficiently detected as sequences of high-speed events (Engbert & Kliegl, 2003; Engbert & Mergenthaler, 2006). However, in a high-noise regime such a procedure can result in a large number of false positives, as any sequence of sufficiently high-speed events is taken to be a microsaccade, without consideration of the actual velocity profile. Note that in contrast to Engbert and Kliegl's method, candidate microsaccades in our algorithm are single peaks in the velocity profile, rather than event sequences, eliminating one free parameter (i.e., the minimal duration of the candidate microsaccade). Moreover, while λ is considered a free parameter in Engbert and Kliegl's method, it is fixed to a relatively low value in our algorithm, since it is only used to implement the initial selection of candidate events. A higher value of this parameter would result in a lower sensitivity of the algorithm (i.e., its ability to detect a true microsaccade), since it limits the number of candidate events even before noise removal. A lower value would result in a large number of candidate events without further increasing the sensitivity, since the vast majority of microsaccades are already detected at this value when noise is low. Separation of putative microsaccades from noise is performed as the next step of the algorithm. 
Correlation-based unsupervised clustering
Once velocity peaks were determined, feature vectors were constructed by concatenating the absolute velocity along the horizontal and vertical dimensions in the interval \([ - \Delta p,\Delta p{]}\) around each candidate peak (Δp = 50). The feature vectors can easily be extended to also include position and acceleration data around the peak (see Results) without any changes to the algorithm. The unsupervised clustering is performed in the resulting high-dimensional space (of dimension 2(2Δp + 1) for velocity-based features) using a recently proposed fast density-based clustering method (Rodriguez & Laio, 2014). This method estimates, for each data point i, its local density ρi and the nearest neighbor distance δi between this point and its closest point having a higher density. Cluster centers are then determined as widely separated points of highest density, that is, points with large values of ρi and δi. 
Calculation of both of these measures is based on the distance matrix, whose components are pairwise distances dij between feature vectors. Our method uses a correlation-based distance metric:  
\begin{equation}\tag{3}{d_{ij}} = 1 - {r_{ij}}\end{equation}
where rij is the Pearson correlation between the two vectors. The correlation equals the cosine of the angle between the two (centered) vectors, so the distance does not depend on the vector lengths. Since microsaccades are events with a stereotyped shape but generally different amplitudes, the correlation distance is well suited to detect them. A high correlation between two data points (e.g., between two microsaccades) corresponds to a correlation distance close to zero, whereas a low correlation (e.g., between a microsaccade and a noise event) gives dij ≈ 1. All pairwise distances are thus bounded by 2. Within this space, all microsaccades are expected to form a tight cluster, while noise events are expected to lie far from the cluster center (Figure 1A). The number of clusters may be higher than 1 depending on the chosen features (see Results).  
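Feature extraction and the distance of Equation 3 fit in a few lines. This is a NumPy sketch under the settings stated above (the function names are ours):

```python
import numpy as np

def feature_vectors(vx, vy, peaks, dp=50):
    """One feature vector per candidate peak: absolute horizontal and
    vertical velocity in the window [-dp, +dp] around the peak,
    concatenated into a vector of length 2*(2*dp + 1)."""
    return np.array([
        np.concatenate((np.abs(vx[p - dp:p + dp + 1]),
                        np.abs(vy[p - dp:p + dp + 1])))
        for p in peaks
    ])

def correlation_distance_matrix(F):
    """Pairwise correlation distances d_ij = 1 - r_ij (Equation 3)
    between the rows of the feature matrix F."""
    return 1.0 - np.corrcoef(F)
```

Two identical events have distance 0, perfectly anticorrelated ones have distance 2, and uncorrelated noise sits near 1, matching the geometry described above.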
Figure 1
 
Unsupervised clustering in the correlation feature space. (A) A multidimensional scaling (MDS) representation of candidate microsaccades. Each dot corresponds to a feature vector. A cluster on the left (shown by the red dots) represents potential microsaccades that are highly correlated and thus lie close to each other in the correlation space. Noise events are less correlated with microsaccades and among themselves and thus form a low-density cloud of points (shown by the black dots). Note that the visible distance between the points on the plot does not reflect the true distance, as the feature vectors are projected onto the MDS plane. (B) The decision graph represents candidate microsaccades (small circles) as points in δρ space. The cluster center is detected as the point with the largest product δρ (the outlier in the upper-right part of the plot). Inset: The position trace of the cluster center (i.e., the prototypical microsaccade). (C) The characteristic distance dc is chosen as the point of the fastest increase of the k-dist curve. This value is used to separate microsaccades from noise, giving rise to the color labeling in (A).
Given the distance matrix, a local density of each point (i.e., the feature vector corresponding to a candidate microsaccade) is estimated using a smooth Gaussian kernel:  
\begin{equation}\tag{4}{\rho _i} = \sum\limits_j {\exp \left[ { - {{\left( {{{{d_{ij}}} \over {{d_c}}}} \right)}^2}} \right]} \end{equation}
where dij is the distance between data points i and j, and dc is a parameter determining a characteristic distance between feature vectors in the dataset (see the following section for the proposed procedure for estimating this parameter from the data). The nearest neighbor distance is computed as (Rodriguez & Laio, 2014):  
\begin{equation}\tag{5}{\delta _i} = \mathop {\min }\limits_{j:{\rho _j} \gt {\rho _i}} {d_{ij}}.\end{equation}
 
A plot of δi as a function of ρi for all data points for one sample subject shows an example of the decision graph, with a clearly separated cluster center (Figure 1B). The cluster center (i.e., a prototypical microsaccade) is determined automatically as the data point with the largest product ρiδi. In the case of multiple clusters, a corresponding number of points with the largest product ρδ is determined, and each point in the dataset is assigned to the same cluster as its closest neighbor with a higher local density (Rodriguez & Laio, 2014). This clustering approach is not iterative; it is thus fast compared to classical algorithms such as k-means, and it does not suffer from related overfitting problems. 
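Equations 4 and 5 and the cluster-center selection can be sketched directly from the distance matrix (a NumPy sketch of the Rodriguez & Laio step as described above; the function name `density_peaks` is ours):

```python
import numpy as np

def density_peaks(D, dc):
    """Local density rho (Equation 4, Gaussian kernel) and the distance
    delta to the nearest point of higher density (Equation 5).
    Returns rho, delta, and the index of the cluster center, chosen
    as the point with the largest product rho * delta."""
    n = D.shape[0]
    rho = np.exp(-(D / dc) ** 2).sum(axis=1) - 1.0   # subtract the self-term
    delta = np.zeros(n)
    for i in range(n):
        higher = np.where(rho > rho[i])[0]
        # the densest point gets the maximal distance by convention
        delta[i] = D[i, higher].min() if higher.size else D[i].max()
    center = int(np.argmax(rho * delta))
    return rho, delta, center
```

Because no iteration is involved, the cost is a single pass over the distance matrix, which is the source of the speed advantage over k-means mentioned above.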
Separating microsaccades from noise
With the proposed correlation-based distance metric, noise events are those that are far from (i.e., uncorrelated with) the prototypical microsaccade of a given subject. We thus label a data point as noise if it is farther than a characteristic distance dc from its nearest neighbor with a higher density. The value of dc is determined from the density of the microsaccade cluster as follows. First, the density profile of the data is estimated using a sorted k-dist graph (Ester, Kriegel, Sander, & Xu, 1996). Namely, for each data point, we calculate the distance (Equation 3) to its kth nearest neighbor with k = 4 (we checked that values k = 1, …, 5 produce similar results). These distances are then sorted in descending order, giving rise to a curve that reflects the density distribution of the data points (Figure 1C). The data points with the largest indices (corresponding to the lowest k-dist values) are those located near the cluster centers. As the data point index decreases, there is a sharp increase in the associated k-dist value, indicating a fast decrease in density. We set the characteristic distance dc to the k-dist value corresponding to the point of the steepest decrease in density (i.e., the fastest change in the k-dist curve), which is in turn calculated as the maximum of the derivative of the k-dist function. The derivative is estimated using Savitzky-Golay differentiation filters of fourth order (the filter size was set to 10% of the number of data points to smooth out noise). Depending on the size of the dataset, other methods can be used to detect the maximum of the derivative, including semi-automatic approaches. 
Thus, distances between neighboring data points are (a) smaller than dc for data points close to the cluster center; (b) approximately equal to dc near the border of the cluster (i.e., when its density drops fast); and (c) larger than dc far from the clusters, since noise events have lower local density (i.e., noise events are less correlated between each other than microsaccade events). 
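The k-dist estimate of dc can be sketched as follows. For simplicity this sketch locates the steepest change with a plain finite difference rather than the Savitzky-Golay derivative used in the paper; the function name `estimate_dc` is ours:

```python
import numpy as np

def estimate_dc(D, k=4):
    """Characteristic distance d_c from the sorted k-dist graph
    (Ester et al., 1996): for each point take the distance to its k-th
    nearest neighbor, sort these distances in descending order, and
    return the k-dist value at the point of the fastest change."""
    # column k of each sorted row skips the zero self-distance
    kdist = np.sort(np.sort(D, axis=1)[:, k])[::-1]   # descending k-dist curve
    step = np.abs(np.diff(kdist))                      # finite-difference derivative
    return kdist[int(np.argmax(step))]
```

On data with a tight cluster and a diffuse noise cloud, the returned value falls between the within-cluster and noise-to-noise k-dist levels, as required for the labeling rule above.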
Note that in our algorithm, in contrast to the microsaccade clustering algorithm based on k-means (Otero-Millan et al., 2014), no separate cluster is assigned to noise. Rather, noise events are those that are far from the microsaccade cluster center (although not necessarily close to each other; Dave, 1991). This essentially means that no specific assumptions are made about noise, apart from it being different from microsaccades. 
Receiver operating characteristic analysis
Receiver operating characteristic (ROC) analysis was used to compare the performance of our method against the methods proposed by Engbert and Kliegl (2003) and Otero-Millan et al. (2014), referred to in the following as the EK and OM methods, respectively. For the EK algorithm (Version 2.1, written in MATLAB), the λ parameter was varied from 3 to 15, with the minimal duration of a microsaccade fixed to 6. The OM algorithm (Version 1.1, downloaded from http://smc.neuralcorrelate.com/sw/microsaccade-detection/) has no free parameters, since it automatically detects the best separation between the microsaccade cluster and the noise cluster in a chosen low-dimensional feature space. Therefore, when showing ROC curves, we indicate the performance of this method by a single point corresponding to its optimal performance, instead of a curve. 
While manual labeling provides a way to quantify the number of true positives, false positives, and false negatives, the number of true negatives is arbitrary, since any point in time can be considered a nonmicrosaccade event (Otero-Millan et al., 2014). Therefore this analysis cannot be used to give an absolute measure of the method's performance in terms of its specificity. However, it is valid to compare different methods at an arbitrarily fixed true negative rate. We thus set the number of true negatives for each subject equal to the total number of microsaccades for that subject (as detected by the experts). This changes the interpretation of the horizontal axis of the ROC curve, labeled adjusted false positive rate in the figures. While in classical ROC analysis the false positive rate equals 0.5 when the number of false positives is equal to the number of true negatives, the same value of the adjusted rate corresponds to the case when the number of false positives equals the number of true microsaccades executed by the subject. 
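With this convention, the two ROC coordinates reduce to simple ratios (a sketch of the convention described above; the function name `adjusted_rates` is ours):

```python
def adjusted_rates(tp, fp, fn):
    """True positive rate and the adjusted false positive rate: the
    number of true negatives is set equal to the number of
    expert-labeled microsaccades (tp + fn), so the adjusted FPR is
    fp / (tp + fn)."""
    n_true = tp + fn
    return tp / n_true, fp / n_true
```

For example, a subject with 90 detected true microsaccades, 10 missed ones, and 10 false detections has a true positive rate of 0.9 and an adjusted false positive rate of 0.1.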
An implementation of the algorithm using MATLAB (version R2016b; MathWorks, Natick, MA) is available at https://github.com/sheynikh/msdetect
Results
Detection of (micro)saccades in a high-noise regime
To illustrate the ability of the algorithm to detect fixational microsaccades, we show in Figure 2A and B samples of position and velocity traces from two subjects with different noise levels. While microsaccades can be discerned by eye relatively easily in the position traces of both subjects, saccade detection from the velocity trace becomes more difficult when the noise is higher. Specifically, high-frequency bouts in the velocity trace make a simple threshold-based approach unreliable. Although it makes a few errors, the new algorithm is not vulnerable to this type of noise, since it considers the shape of the signal around velocity peaks as feature vectors. Nevertheless, the higher noise level resulted in a lower performance in this particular example (i.e., a decrease in the true positive rate from 0.99 to 0.85, and an increase in the adjusted false positive rate from 0.04 to 0.16). Plots of microsaccade shapes show that almost all microsaccades in these two subjects were horizontal with a strong overshoot (Figure 2C and D, top row), while false detections mostly corresponded to low-amplitude peaks in velocity (Figure 2C and D, bottom row). 
Figure 2
 
Example traces and extracted microsaccades. (A–B) Horizontal position (top) and velocity \(v = \sqrt {v_x^2 + v_y^2} \) (bottom) during 10-s recording period for two different subjects. Microsaccades detected by the algorithm and manually labeled microsaccades are shown by the black and red triangles, respectively. (C–D) Shape of the microsaccades detected by the algorithm in the two subjects across all labeled trials. Top: Horizontal (left) and vertical (right) eye coordinate for correctly detected microsaccades (true positives). Mean microsaccade shape is shown by the black line. Bottom: Horizontal (left) and vertical (right) eye coordinate for false positives. Note the change in vertical scale. Only saccades with amplitude <2° are shown for clarity.
We compared the accuracy of the new microsaccade detection method with that of the EK and OM methods using ROC analysis. The analysis was performed on the manually labeled portion of the data recorded in aged subjects and shows that the new method offers a better balance between sensitivity (i.e., true positive rate) and specificity (i.e., adjusted true negative rate) than the two previous methods (Figure 3 and Table 1). Per-subject performance at a chosen value of the free parameters also shows that the new method offers an advantage over the EK method in terms of specificity (i.e., it has a smaller false positive rate, Wilcoxon p < 10–6), and over the OM method in terms of sensitivity (i.e., it has a higher true positive rate, Wilcoxon p < 10–5). Although the EK method had a higher mean sensitivity than the new algorithm, it also had a much larger number of false positives. The number of false positives did not differ between the new method and the OM method. 
Figure 3
 
Performance comparison of the EK (in black), OM (in blue), and the new microsaccade detection method (in red) using eye recordings from aged subjects. (A) Average ROC curves (±SEM) across subjects for the three methods. Since the OM method has no free parameters, its average performance is shown by a point. (B) Each symbol corresponds to the performance of the method (in the ROC space) applied to the eye-tracking data of one subject. For the EK method (small circles) λ = 6, while for the new method (triangles) the parameter is chosen automatically, separately for each subject, using the estimation procedure described in Methods. Performance of the OM method is shown by the crosses. (C–D) ROC curves for all subjects resulting from the new method (C) and the EK method (D).
Table 1
 
Performance comparison of the EK, OM, and the new microsaccade detection method. Notes: Precision = the fraction of true microsaccades as labeled by the experts among all events detected as microsaccades by the algorithm; Sensitivity = true positive rate; Adjusted FPR = adjusted false positive rate (see Methods); Youden's index (or informedness) = performance index corresponding to the height above the chance line in the ROC space, see text for details.
In a video-based, pupil–corneal reflection system, gaze direction is estimated from the relative positions of the pupil and corneal reflection centers. Either the pupil or the corneal reflection may be (partially) covered by a drooping eyelid or downward-pointing eyelashes (often referred to as partial blinks), leading to repetitive misestimation of the reflection center positions (Damasceno et al., 2015). In addition, the pupil or the corneal reflection can be confused with other optical objects in the image of the eye caused by uncontrolled light sources, glasses, contact lenses, or a clinical condition of the subject (Nyström et al., 2013). Partial blinks and optical artifacts cause repetitive errors in eye tracking, resulting in high-frequency oscillations in the recordings. The usual way of dealing with high-frequency noise consists of removing parts of the trace before and after a detected blink event (since partial blinks often, but not always, occur just before or after normal blinks), as well as parts of the trace accompanied by fast changes in pupil size (e.g., when the pupil is repetitively confused with another optical object of a different size). To see whether trace filtering based on pupil size improves microsaccade detection performance in our data, we removed parts of the trace 200 ms before and after time points at which pupil size changed faster than 20 units per millisecond (Otero-Millan et al., 2014; filtering based on blink detection was applied during preprocessing, see Methods). This standard blink- and pupil-based filtering approach led to the loss of about 20% of recorded data (including about 20% of true recorded microsaccades, as estimated from labeled data, Figure 4A). Moreover, the performance of the new method was still better than that of the other two methods, even though the filtering step actually decreased the accuracy of the new method (probably due to the loss of data, cf.
Figures 3A and B and 4B and C). These results suggest that, first, the high-frequency recording noise in our data is not always associated with partial blinks and changes in pupil size and, second, that the proposed method appears to be resistant to noise that cannot be accounted for by standard noise-removal procedures. Moreover, the proposed method does not require the application of these procedures for efficient microsaccade detection. 
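The pupil-based filtering step described above can be sketched as follows. The 20 units/ms rate and 200-ms padding are the values quoted in the text, while the function name, the regular sampling grid, and the use of a numerical gradient for the pupil-size rate are illustrative assumptions:

```python
import numpy as np

def pupil_prefilter(t_ms, pupil, max_rate=20.0, pad_ms=200.0):
    """Return a boolean mask of samples to KEEP after pupil-based filtering.

    Samples within pad_ms before/after any time point where pupil size
    changes faster than max_rate units per ms are discarded (a sketch of
    the standard prefiltering procedure, not the exact implementation).
    """
    rate = np.abs(np.gradient(pupil, t_ms))   # pupil-size change, units per ms
    bad_times = t_ms[rate > max_rate]
    keep = np.ones(t_ms.shape, dtype=bool)
    for tb in bad_times:
        keep &= np.abs(t_ms - tb) > pad_ms    # drop a window around each event
    return keep
```

Applying the mask to the position trace (and to the labeled events) then yields the prefiltered data whose cost in lost microsaccades is quantified in Figure 4A.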
Figure 4
 
Performance comparison of the EK, OM, and the new microsaccade detection method on prefiltered data. (A) Per-subject data loss due to prefiltering. The loss is measured by the proportion of true microsaccades (MS num.) and the proportion of recorded time (Time) removed during prefiltering of the original data. (B–C) Mean ROC curves ± SEM (B) and per-subject performance (C) for the three methods.
Detection of microsaccadic FEMs in a low-noise regime
We next verified whether the difference in performance between the three methods is caused by the specific noise properties of our recordings rather than by other factors. We thus compared the performance of the three methods on a dataset from a different study, in which eye tracking was performed using a high-precision double-Purkinje-image (DPI) system with a much lower level of recording noise (Poletti et al., 2017; see Methods). Under these conditions, the performance of the new method was similar to that of the OM method, and both of these methods outperformed the EK method (Figure 5A and B). This result suggests that the method proposed here is more efficient than the two previous methods in the high-noise regime, while its performance is comparable to the OM method for low-noise data. In agreement with the above considerations about possible high-frequency optical artifacts in video-based eye-tracking systems, we found a significantly higher probability of high-speed events in our data as compared to DPI recordings, suggesting the presence of high-frequency noise (Figure 5C). These high-frequency oscillations are sometimes accompanied by oscillations in pupil size (e.g., when the pupil is partially occluded by the eyelid or eyelashes), but this is not always the case (e.g., when the oscillations are caused by errors in the detection of the corneal reflection).
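Comparing high-speed event probabilities between the two recording systems amounts to estimating the fraction of samples whose instantaneous speed exceeds some threshold. A minimal sketch (the 100°/s threshold and the per-sample gradient velocity estimate are illustrative choices, not values from the paper):

```python
import numpy as np

def high_speed_fraction(x_deg, fs_hz, thresh_deg_s=100.0):
    """Fraction of samples whose instantaneous speed exceeds thresh_deg_s.

    A simple proxy for the probability of high-speed events used to
    compare noise levels between recording systems: differentiate the
    position trace (deg) at sampling rate fs_hz to get velocity (deg/s).
    """
    v = np.gradient(x_deg) * fs_hz          # deg/s from successive samples
    return np.mean(np.abs(v) > thresh_deg_s)
```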
Figure 5
 
Performance comparison of the EK, OM, and the new microsaccade detection method on low-noise DPI data. (A–B) Mean ROC curves ±SEM (A) and per-subject performance (B) for the three methods. (C) Velocity distributions in the horizontal (left) and vertical (right) components of eye traces, recorded in the current study (orange) and using a DPI eye tracker (blue).
Preliminary assessment of Bayesian detection methods
Daye and Optican (2014) proposed a Bayesian detection method for (micro)saccades, based on a generative model of the velocity signal subject to Gaussian motor and measurement noise. In their algorithm, jumps in velocity that correspond to saccades are detected as abrupt changes in the velocity distribution, quantified by the variance γ of the ratio of velocity distributions predicted on successive time steps. If the assumptions of the model are not violated and the two noise parameters are well chosen, the application of the method results in a strong reduction of noise, after which saccades can be detected by thresholding. Because of the stochastic nature of the Bayesian prediction, several passes of the algorithm are required, after which candidate microsaccades are grouped together, or discarded, based on three heuristic grouping parameters. In total, at least six parameters have to be suitably chosen to ensure correct detection. Figure 6A shows a γ-plot corresponding to the velocity trace from Figure 2B for noise parameters adjusted by hand. Even though the noise is strongly reduced, the issues of threshold assignment and the choice of grouping parameters remain. While it is possible that some parameter values could result in high detection performance, choosing these parameters on a per-subject basis is a problem that makes practical application of this method, and its comparison with our algorithm, difficult.
Figure 6
 
Application of Bayesian methods for microsaccade detection. (A) Noise reduction in the velocity signal shown in Figure 2B as a result of the algorithm proposed by Daye and Optican (2014). Parameter values: N = 200, m = 10, Ω = 0.1, ξ = 5, λ = 0.005 (see Daye & Optican, 2014, for details). Four additional parameters need to be specified to extract microsaccades from the denoised trace. Manually labeled microsaccades are shown by the red triangles. (B–C) Microsaccades detected by the algorithm proposed by Mihali et al. (2017). Several events are detected (shown by the black triangles and vertical black lines) for each true microsaccade, both for the data from the video-based eye tracker (B) and the DPI eye tracker (C).
More recently, Mihali et al. (2017) proposed a Bayesian approach for microsaccade detection that uses a more elaborate generative model, assuming a piecewise-linear velocity profile for microsaccades and Gaussian motor and measurement noise. In their algorithm, motor and measurement noise variances are estimated from the data, and the authors show that performance is not very sensitive to the other adjustable parameters, which control the prior distributions of microsaccade frequency and duration. Unfortunately, applying this method to our data produced a variable number of detected events for each true microsaccade, resulting in a high number of false positives (several examples are shown in Figure 6B and C). This problem is potentially related to the fact that our recordings were performed with the default setting of the EyeLink eye tracker parameter “Heuristic filter” (“on”), which enables online trajectory smoothing (see “Caveat” in Mihali et al., 2017). While it is possible that grouping detected events based on their number or time lags might solve the problem, the choice of these additional parameters based on the data remains an open problem.
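A post-hoc grouping of the kind suggested here, merging detections whose onsets fall within a short time window, could look like the following sketch; the function name and the 20-ms gap are hypothetical choices, not part of either published algorithm:

```python
def group_events(onsets_ms, max_gap_ms=20.0):
    """Merge detected events whose onsets are closer than max_gap_ms.

    Would collapse the multiple detections per true microsaccade seen in
    Figure 6B-C into single events; onsets_ms must be sorted ascending.
    """
    groups = [[onsets_ms[0]]]
    for t in onsets_ms[1:]:
        if t - groups[-1][-1] <= max_gap_ms:
            groups[-1].append(t)      # same underlying event
        else:
            groups.append([t])        # start a new event
    return [g[0] for g in groups]     # keep the first onset of each group
```

The open problem noted in the text is precisely how to choose `max_gap_ms` (and a minimum detection count per group) from the data.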
Case study: Properties of fixational saccades and microsaccades in aged subjects
Analysis of fixational saccades extracted from our recordings (including labeled and unlabeled data) in aged subjects shows that they follow the so-called main sequence—that is, they exhibit a high linear correlation in logarithmic coordinates (all saccades: r = 0.94, p < 10−6; saccades smaller than 1°: r = 0.9, p < 10−6; Figure 7A and D). Mean (±SEM) amplitude (Figure 7B and E) and velocity (Figure 7C and F) of fixational saccades were 1.13° ± 0.01° and 85.11°/s ± 0.39°/s, respectively, with a mean frequency of 1.69 ± 0.15 Hz. Microsaccade frequency was below 1 Hz on average (Figure 7G). The vast majority of fixational saccades had a larger horizontal component (Figure 7H), independently of their amplitude. This analysis can be compared to a previous study that examined saccades and microsaccades in subjects aged between 60 and 70 years old (as part of a larger cohort of younger subjects; Port et al., 2016). In their visual search task (“Where's Waldo” puzzles), Port et al. (2016) observed similar saccade frequencies, but higher amplitudes and mean velocities, as well as a large subset of saccades with a vertical bias. These differences can be attributed to differences in task requirements, since stimulus characteristics during visual search influence the spatial distribution of visual fixations (Najemnik & Geisler, 2008).
Figure 7
 
Case study: Analysis of (micro)saccade properties in aged subjects. (A–C) Main sequence (A), normalized amplitude (B), and velocity (C) distributions. (D–F) Main sequence, amplitude, and velocity distributions for saccades smaller than 1° in amplitude. (G) Saccade frequency as a function of amplitude. (H) Normalized distribution of saccade directions in polar coordinates. (I) Normalized distribution of IMSI. Inset: The same data with a logarithmic vertical axis and linear regression fit (red line).
The distribution of inter-microsaccade intervals (IMSI) in our recordings suggests that microsaccade generation in aged subjects can be described by a Poisson process, since the IMSI distribution is approximately exponential (Engbert & Mergenthaler, 2006). The peak of the distribution (at ≈300 ms) occurs earlier than what has been observed in younger subjects (Engbert & Mergenthaler, 2006), indicating a higher frequency of the underlying periodic component of the microsaccade generation process.
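The Poisson-process interpretation can be checked by verifying that the tail of the IMSI histogram falls off linearly in log coordinates (as in the inset of Figure 7I). A hedged sketch, where the bin count, fitting span, and the unweighted log-linear fit are illustrative choices:

```python
import numpy as np

def imsi_exponential_rate(imsi_ms, cutoff_ms=300.0, span_ms=2500.0):
    """Fit a log-linear decay to the IMSI histogram tail beyond the peak.

    Under a Poisson generation process the tail of the inter-microsaccade
    interval distribution is exponential, so log counts fall linearly;
    the negative slope gives the rate (events per ms). cutoff_ms
    approximates the distribution peak (~300 ms in the aged subjects).
    """
    tail = imsi_ms[imsi_ms >= cutoff_ms]
    counts, edges = np.histogram(tail, bins=20,
                                 range=(cutoff_ms, cutoff_ms + span_ms))
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = counts > 0                       # avoid log(0) in empty bins
    slope, _ = np.polyfit(centers[mask], np.log(counts[mask]), 1)
    return -slope
```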
Feature and parameter sensitivity analysis
The choice of the feature space in which candidate microsaccades are represented can influence the detection accuracy of an algorithm. Appropriately reducing the size of the feature vector can often result in a faster and more accurate detection procedure, while an overly restricted set of features results in low sensitivity. A useful property of the proposed algorithm is that it is flexible with respect to the choice of features. In particular, the feature space can easily be reduced—for instance, to treat only the horizontal or vertical velocity component—or it can be extended to include position and acceleration data. Such changes in the feature space do not require any modification of the algorithm (apart from changes in the input data). To illustrate this flexibility, we assessed the detection accuracy when the feature space was reduced to include only the horizontal (x) component of the velocity traces. Since the vast majority of microsaccades in the fixation task have a nonzero component along the horizontal direction (Figure 7H), such a reduction of the feature space is expected to increase the detection rate (since an approximately constant nonzero y amplitude during a horizontal microsaccade leads to a decreased signal-to-noise ratio of the microsaccade velocity signal). Reducing the input data to only the horizontal velocity component indeed resulted in better detection performance (Figure 8A and B). To test whether adding position and acceleration data can further improve the detection performance, and to illustrate the ability of the algorithm to work with multiple clusters, the feature space was then extended to include full position, velocity, and acceleration data along the horizontal component. Since the shape of a microsaccade in the positive x direction is inverted with respect to the negative one, the clustering step results in two clusters in the feature space (Figure 8C and D).
Such an extension of the feature space did not result in any further improvement in accuracy (in fact, it slightly decreased the sensitivity of the algorithm, see Figure 8A), indicating that the velocity profile contains most of the information useful for microsaccade detection. Note that for all tested feature combinations the performance of the algorithm was better than that of the two other methods (Figure 8B).
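Because candidates are compared by correlation distance, changing the feature set only changes the columns of the feature matrix, not the algorithm. A sketch of the pairwise correlation distance computation, assuming each candidate is represented by a fixed-length feature vector (the function name is ours):

```python
import numpy as np

def correlation_distance_matrix(features):
    """Pairwise correlation distance d = 1 - r between candidate events.

    features: (n_events, n_features) array, e.g. velocity samples in a
    fixed window around each candidate; extending the feature vector to
    position/velocity/acceleration only adds columns. Distances lie in
    [0, 2], matching the 0 < dc < 2 range quoted for the free parameter.
    """
    f = features - features.mean(axis=1, keepdims=True)   # center each event
    f /= np.linalg.norm(f, axis=1, keepdims=True)         # unit length
    return 1.0 - f @ f.T                                  # 1 - Pearson r
```

Identical profiles are at distance 0, and a microsaccade in the opposite direction (inverted profile) is at distance 2, which is why the extended feature space yields two clusters (Figure 8C and D).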
Figure 8
 
Flexibility of the algorithm with respect to the choice of the feature space. (A) Average ROC curves for the new method when the feature vectors were composed of two velocity components (red circles); of only the horizontal velocity component (blue triangles); and of horizontal position, velocity and acceleration (yellow triangles). (B) Average (across subjects) performance of the new method with different features (see [A]) in comparison with the EK method (black cross) and the OM method (black square). The value of dc for the new method was chosen using the procedure described in Methods. For the EK method λ = 6. (C) Decision graph for the same subject as in Figure 3 when the feature vectors included position, velocity, and acceleration data. The two cluster centers correspond to prototypical microsaccades in opposite directions. (D) MDS representation of the candidate microsaccades after clustering and separating noise events for the same subject (red and green dots correspond to two clusters in [C]).
Even if the feature space is appropriately chosen, the detection accuracy further depends on the decision threshold. We thus studied the sensitivity of the performance to the choice of the free parameter—that is, the characteristic distance dc between microsaccade candidates in the correlation feature space. For each subject, the ROC-optimal value of the parameter corresponds to the point with the maximal Youden's index, given by the difference between the true positive rate and the (adjusted) false positive rate (Youden, 1950). We then estimated the drop in Youden's index due to an incorrectly estimated dc value (Figure 9A and B). An underestimation of dc by about 0.05, or an overestimation by about 0.1 (in units of correlation distance, 0 < dc < 2), resulted in a drop in performance of about 10% on average, although this varied from subject to subject (Figure 9B). To assess the efficiency of the proposed estimation procedure for the free parameter, we quantified how close the estimated value was to the ROC-optimal value (Figure 9C). On average, our procedure for choosing dc based on cluster density overestimates the optimal value by about 0.04, leading to a performance drop of only about 2.1%.
Figure 9
 
Sensitivity of the new method to the choice of the free parameter. (A) Youden's index (±SEM across subjects) as a function of dc estimation error. The optimal parameter value \(d_c^{opt}\) is the parameter value corresponding to the maximal Youden's index (calculated based on the labeled data and different for each subject). Dashed line: 10% drop in the average Youden's index from its maximum. (B) Each curve shows Youden's index as a function of dc estimation error for one subject, normalized to a maximum value of 1. (C) Drop in performance (measured by the relative decrease in Youden's index) as a function of the error \(d_c^{opt} - d_c^{est}\), where \(d_c^{est}\) is produced by the estimation procedure proposed in Methods (separately for each subject). Each circle corresponds to one subject.
Finally, to assess the detection performance with respect to microsaccade size, we estimated the precision (i.e., the probability that a detected microsaccade is a true microsaccade, according to manual labels) and sensitivity (i.e., the true positive rate) of the algorithm as a function of the (absolute value of the) microsaccade amplitude. The new algorithm was as precise as the OM method, but with considerably higher sensitivity (Figure 10). Compared to the EK method, our detection algorithm was considerably more precise, and was also more sensitive for fixational saccades with amplitudes under 1°.
Figure 10
 
Detection performance of the new method as a function of microsaccade size. (A) Precision (i.e., the fraction of true labeled microsaccades among all events detected as microsaccades by the algorithm) as a function of the absolute value of the microsaccade amplitude for the OM (yellow), EK (blue), and the new method (red). (B) Sensitivity (i.e., the true positive rate), as a function of the amplitude.
Discussion
Most existing microsaccade detection methods are based on the identification of high-speed events in the velocity and acceleration profiles of recorded eye position as a function of time (Holmqvist et al., 2011). To separate candidate microsaccades from noise, either a single velocity-based threshold can be chosen (Engbert & Kliegl, 2003; Holmqvist et al., 2011) or a set of manually chosen features of the velocity and acceleration profiles can be used to construct a feature space. In this feature space, noise and microsaccades are assumed to form separate clusters that can be detected using standard machine-learning techniques such as K-means (Otero-Millan et al., 2014). These detection methods rely on the assumption that all high-speed events are microsaccades or that the statistical properties of noise can be captured by a small number of velocity- and acceleration-based features. However, these assumptions may not hold depending on the age of the subjects or their clinical condition with respect to visual function. High-frequency events caused by optical artifacts can have very different statistics depending on the origin of the noise, which may explain the relatively poor performance of previously developed methods when applied to our data. The method of microsaccade detection proposed in this paper can take into account all available information (e.g., position, velocity, and acceleration) around each candidate microsaccade, and it makes use of a recently developed fast clustering algorithm (Rodriguez & Laio, 2014) to determine microsaccade clusters. Importantly, the proposed method does not make any assumptions about noise properties, the only requirement being that noise events be sufficiently different from a typical microsaccade. The method is shown to work better than the EK and OM methods on noisy data, and it eliminates the need for data preprocessing based on pupil size, thus avoiding a considerable amount of data loss.
It is therefore suitable for (semi)automatic microsaccade detection in studies with a large number of subjects and/or experimental conditions and a high amount of variability between subjects due, for instance, to differences in age or clinical condition.
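The decision graph underlying the clustering step can be sketched from the definitions in Rodriguez and Laio (2014): each candidate gets a local density ρ (number of neighbors within dc) and a distance δ to the nearest denser candidate, and cluster centers stand out as joint (ρ, δ) outliers. This is a generic sketch of the published algorithm, not the paper's exact implementation:

```python
import numpy as np

def decision_graph(dist, dc):
    """Compute (rho, delta) for the Rodriguez & Laio density-peaks method.

    dist: precomputed pairwise distance matrix (e.g. correlation distances
    between candidate events); dc: the characteristic distance.
    rho_i counts neighbours within dc; delta_i is the distance to the
    nearest point of higher density (for the global density maximum, the
    largest distance to any point, by convention).
    """
    n = dist.shape[0]
    rho = (dist < dc).sum(axis=1) - 1          # exclude self from the count
    delta = np.empty(n)
    for i in range(n):
        higher = rho > rho[i]
        delta[i] = dist[i, higher].min() if higher.any() else dist[i].max()
    return rho, delta
```

Plotting δ against ρ gives the decision graph (as in Figures 1B and 8C), in which microsaccade cluster centers appear as points with both large ρ and large δ.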
More recently, Bayesian methods for microsaccade detection have been proposed that assume a particular generative model of saccade generation, and the performance of these methods depends on how closely the model matches reality. Moreover, the computational tractability of these models often requires Gaussian noise assumptions. When either the generative model or the Gaussian noise assumption is not adequate, these methods may fail. In addition, the generative model, noise, and priors are controlled by parameters that, ideally, have to be determined from the data. Assessing the performance of these methods can be difficult in practice, since a method's failure might be caused by any of the above issues, and an advanced mathematical background is required to understand how to address a particular issue. An advantage of these methods is that they are not data-hungry: they work equally well on large and small datasets (given that the generative model and parameters are suitably chosen). Feature-based clustering methods, like the one proposed in the present work, are in contrast conceptually simple and usually faster, but in order to work reliably they need a dataset with a sufficient number of microsaccades to form a cluster. In our analysis this was the case even when working only with labeled trials (160 s of total recording time per subject). An insufficient number of microsaccades can easily be detected from the decision graph (see Figures 1B and 8C), where it would correspond to the absence of outliers for microsaccade clusters.
While our method was developed in the context of aging research, it is not limited to age-related ocular noise. Indeed, any type of noise can in principle be handled, as the only requirement for successful detection of (micro)saccades is that they have a stereotyped shape (independent of amplitude). In particular, the algorithm may be suitable (although it has not been tested) for eye recordings performed with head-mounted eye trackers (Holmqvist et al., 2011). We note, however, that the initial choice of microsaccade candidates based on the EK method assumes approximately zero velocity between saccades, as is the case for head-fixed ocular fixation tasks. This assumption may not hold during other oculomotor tasks, such as smooth pursuit, or in freely moving subjects, since in these conditions the eyes may move with a high velocity between saccades (Daye & Optican, 2014). In this case, a single velocity threshold may be inadequate to detect initial saccade candidates. This issue, however, can be resolved by using saccade-detection methods adapted to smooth pursuit (e.g., Daye & Optican, 2014) in the initial event-detection phase (with a permissive setting of parameters to ensure high sensitivity), with the resulting candidates then subjected to unsupervised clustering to ensure high precision.
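The initial candidate selection mentioned above follows the median-based velocity threshold of Engbert and Kliegl (2003). A sketch of the elliptic test, assuming velocity traces are already computed (λ = 6 as in the comparisons above; the function name is ours):

```python
import numpy as np

def ek_velocity_threshold(vx, vy, lam=6.0):
    """Engbert & Kliegl (2003) median-based velocity threshold.

    Flags samples exceeding the elliptic criterion
    (vx / eta_x)^2 + (vy / eta_y)^2 > 1, with eta = lam * sigma and
    sigma a median-based (outlier-robust) estimate of the velocity SD.
    Used here only to select initial saccade candidates for clustering.
    """
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    return (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
```

With a permissive λ this stage stays highly sensitive, and precision is restored later by the clustering step.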
We have shown that our algorithm can detect saccades based on position, velocity, acceleration, or a combination of (subsets of) these features. The quality of a particular feature set with respect to microsaccade detection can be judged from decision graphs, in which a clear separation of the microsaccade cluster(s) from the rest of the points normally corresponds to a high detection performance. While in our experimental study we used monocular recordings, binocularity is an important property of genuine saccades (Engbert & Kliegl, 2003; Collewijn & Kowler, 2008). Although we have not tested it, extending the feature set with binocular recording traces may further improve the detection performance of our method.
We propose an automatic procedure for the estimation of the single free parameter, based on the approximated density profile of the data. This procedure assumes that microsaccades form the densest cluster in the correlation space, so that the border of the cluster can be detected as the fastest drop in density. If the recording period is too short, or too few microsaccades occurred, this assumption may not hold, causing the automatic estimation procedure to fail. In that case, experimenter intervention may be required to choose a suitable smoothing of the k-dist curve so that the drop in density can be detected (once per subject). Alternatively, the experimenter can manually choose the dc value that corresponds to the fastest density drop by visual inspection of the k-dist graph. As our analysis shows, an overestimation of this value by as much as 0.1 will not result in a substantial drop in performance.
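The estimation procedure can be illustrated as follows. This is a schematic reconstruction from the description above (find the fastest drop in a smoothed, sorted k-dist curve); the neighbor index k and the smoothing window are illustrative defaults, not the paper's values:

```python
import numpy as np

def estimate_dc_from_kdist(dist, k=10, smooth=5):
    """Estimate dc from the sharpest drop in the sorted k-dist curve.

    For each point, take the distance to its k-th nearest neighbour,
    sort these values in descending order, smooth them with a moving
    average, and return the k-dist value at the steepest decrease,
    which approximates the border of the densest (microsaccade) cluster.
    """
    kd = np.sort(dist, axis=1)[:, k]            # k-th neighbour distance per point
    curve = np.sort(kd)[::-1]                   # descending k-dist curve
    kernel = np.ones(smooth) / smooth
    sm = np.convolve(curve, kernel, mode="valid")
    drop = np.argmin(np.diff(sm))               # steepest (most negative) step
    return float(sm[drop])
```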
As a case study, we applied the detection method to investigate the impact of natural aging on fixational microsaccade characteristics. While a substantial number of studies have addressed the influence of age on saccades, only one study, to our knowledge, specifically examined saccades with amplitudes under 0.5° (Port et al., 2016). In that study, seven subjects aged between 60 and 66 were assessed as part of a larger cohort of 343 subjects, most of whom were children and adults under 30 years old. The results suggested a slight but significant increase in the frequency and peak velocity of microsaccades with age, although the efficiency of their detection method (an ad hoc modification of the EK algorithm) for the older subjects in the cohort is not clear. The present study thus complements these previous findings in subjects older than 69 years and demonstrates that the proposed algorithm can reliably detect microsaccades as small as 0.2° from video-based recordings in the presence of high-frequency noise.
Overall, the proposed method for the detection of (micro)saccades is noise-resistant, fast, and flexible with respect to the choice of features, and it does not require manual data preprocessing. The method can be applied to detect other events with a stereotyped shape in eye-tracking data, and it includes an estimation procedure for the per-subject choice of its single free parameter. It compares favorably to other existing methods in a high-noise regime and can be useful in aging and clinical research involving video-based eye trackers.
Acknowledgments
We thank Silvia Marchesotti for her assistance in testing the algorithm. This research was supported by ANR – Essilor SilverSight Chair ANR-14-CHIN-0001. 
Commercial relationships: none. 
Corresponding author: Denis Sheynikhovich. 
Address: Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France. 
References
Abadi, R., Clement, R., & Gowen, E. (2003). Levels of fixation. In Harris L. & Jenkin M. (Eds.), Levels of perception (pp. 213–229). New York: Springer.
Bonneh, Y., Adini, Y., Fried, M., & Arieli, A. (2011). An oculomotor trace of cognitive engagement. Journal of Vision, 11 (11): 473, doi:10.1167/11.11.473. [Abstract]
Collewijn, H., & Kowler, E. (2008). The significance of microsaccades for vision and oculomotor control. Journal of Vision, 8 (14): 20, 1–21, https://doi.org/10.1167/8.14.20. [PubMed] [Article]
Cornsweet, T. N. (1956). Determination of the stimuli for involuntary drifts and saccadic eye movements. Journal of the Optical Society of America, 46 (11), 987–993.
Crabb, D. P., Smith, N. D., & Zhu, H. (2014). What's on TV? Detecting age-related neurodegenerative eye disease using eye movement scanpaths. Frontiers in Aging Neuroscience, 6, 312, https://doi.org/10.3389/fnagi.2014.00312.
Damasceno, R. W., Avgitidou, G., Belfort, R.Jr., Dantas, P. E. C., Holbach, L. M., & Heindl, L. M. (2015). Eyelid aging: Pathophysiology and clinical management. Arquivos Brasileiros de Oftalmologia, 78 (5), 328–331, https://doi.org/10.5935/0004-2749.20150087.
Dave, R. N. (1991). Characterization and detection of noise in clustering. Pattern Recognition Letters, 12 (11), 657–664, https://doi.org/10.1016/0167-8655(91)90002-4.
Daye, P. M., & Optican, L. M. (2014). Saccade detection using a particle filter. Journal of Neuroscience Methods, 235, 157–168, https://doi.org/10.1016/j.jneumeth.2014.06.020.
Ditchburn, R. W., Fender, D. H., & Mayne, S. (1959). Vision with controlled movements of the retinal image. The Journal of Physiology, 145 (1), 98–107.
Engbert, R. (2006). Microsaccades: A microcosm for research on oculomotor control, attention, and visual perception. Progress in Brain Research, 154, 177–192, https://doi.org/10.1016/S0079-6123(06)54009-9.
Engbert, R., & Kliegl, R. (2003). Microsaccades uncover the orientation of covert attention. Vision Research, 43 (9), 1035–1045, https://doi.org/10.1016/S0042-6989(03)00084-1.
Engbert, R., & Mergenthaler, K. (2006). Microsaccades are triggered by low retinal image slip. Proceedings of the National Academy of Sciences, USA, 103 (18), 7192–7197, https://doi.org/10.1073/pnas.0509557103.
Ester, M., Kriegel, H. P., Sander, J., & Xu, X. (1996). A density-based algorithm for discovering clusters in large spatial databases with noise. In E. Simoudis, J. Han, & U. Fayyad (Eds.), Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining, 226–231.
Hafed, Z. M., Chen, C.-Y., & Tian, X. (2015). Vision, perception, and attention through the lens of microsaccades: Mechanisms and implications. Frontiers in Systems Neuroscience, 9, 167, https://doi.org/10.3389/fnsys.2015.00167.
Hafed, Z. M., & Clark, J. J. (2002). Microsaccades as an overt measure of covert attention shifts. Vision Research, 42 (22), 2533–2545, https://doi.org/10.1016/S0042-6989(02)00263-8.
Holmqvist, K., Nystrom, M., Andersson, R., Dewhurst, R., Halszka, J., & van de Weijer, J. (2011). Eye tracking: A comprehensive guide to methods and measures. New York: Oxford University Press.
Hunter, M. C., & Hoffman, M. A. (2001). Postural control: Visual and cognitive manipulations. Gait & Posture, 13 (1), 41–48, https://doi.org/10.1016/S0966-6362(00)00089-8.
Irving, E. L., Steinbach, M. J., Lillakas, L., Babu, R. J., & Hutchings, N. (2006). Horizontal saccade dynamics across the human life span. Investigative Ophthalmology and Visual Science, 47 (6), 2478–2484, https://doi.org/10.1167/iovs.05-1311.
Jahn, K. (2002). Suppression of eye movements improves balance. Brain, 125 (9), 2005–2011, https://doi.org/10.1093/brain/awf204.
Kagan, I., Gur, M., & Snodderly, D. M. (2008). Saccades and drifts differentially modulate neuronal activity in V1: Effects of retinal image motion, position, and extraretinal influences. Journal of Vision, 8 (14): 19, 1–25, https://doi.org/10.1167/8.14.19. [PubMed] [Article]
Kapoula, Z., Yang, Q., Otero-Millan, J., Xiao, S., Macknik, S. L., Lang, A.,… Martinez-Conde, S. (2014). Distinctive features of microsaccades in Alzheimer's disease and in mild cognitive impairment. Age (Dordrecht, Netherlands), 36 (2), 535–543, https://doi.org/10.1007/s11357-013-9582-3.
Ko, H.-K., Poletti, M., & Rucci, M. (2010). Microsaccades precisely relocate gaze in a high visual acuity task. Nature Neuroscience, 13 (12), 1549–1553, https://doi.org/10.1038/nn.2663.
Kuang, X., Poletti, M., Victor, J. D. D., & Rucci, M. (2012). Temporal encoding of spatial information during active visual fixation. Current Biology, 22 (6), 510–514, https://doi.org/10.1016/j.cub.2012.01.050.
Leigh, R. J., & Zee, D. S. (2015). The neurology of eye movements (5th ed.). New York: Oxford University Press.
Macedo, A. F., Crossland, M. D., & Rubin, G. S. (2011). Investigating unstable fixation in patients with macular disease. Investigative Ophthalmology & Visual Science, 52 (3), 1275, https://doi.org/10.1167/iovs.09-4334.
Martinez-Conde, S., Macknik, S. L., & Hubel, D. H. (2000). Microsaccadic eye movements and firing of single cells in the striate cortex of macaque monkeys. Nature Neuroscience, 3 (3), 251–258, https://doi.org/10.1038/72961.
Martinez-Conde, S., Macknik, S. L., Troncoso, X. G., & Dyar, T. A. (2006). Microsaccades counteract visual fading during fixation. Neuron, 49 (2), 297–305, https://doi.org/10.1016/j.neuron.2005.11.033.
Mihali, A., van Opheusden, B., & Ma, W. J. (2017). Bayesian microsaccade detection. Journal of Vision, 17 (1): 13, 1–23, https://doi.org/10.1167/17.1.13. [PubMed] [Article]
Moller, F., & Bek, T. (1998). The relation between visual acuity and the size of fixational eye movements in patients with diabetic and non-diabetic macular disease. Acta Ophthalmologica Scandinavica, 76 (1), 38–42, https://doi.org/10.1034/j.1600-0420.1998.760107.x.
Najemnik, J., & Geisler, W. S. (2008). Eye movement statistics in humans are consistent with an optimal search strategy. Journal of Vision, 8 (3): 4, 1–14, https://doi.org/10.1167/8.3.4. [PubMed] [Article]
Nyström, M., Andersson, R., Holmqvist, K., & van de Weijer, J. (2013). The influence of calibration method and eye physiology on eyetracking data quality. Behavior Research Methods, 45 (1), 272–288, https://doi.org/10.3758/s13428-012-0247-4.
Nyström, M., & Holmqvist, K. (2010). An adaptive algorithm for fixation, saccade, and glissade detection in eyetracking data. Behavior Research Methods, 42 (1), 188–204, https://doi.org/10.3758/BRM.42.1.188.
Otero-Millan, J., Castro, J., Macknik, S., & Martinez-Conde, S. (2014). Unsupervised clustering method to detect microsaccades. Journal of Vision, 14 (2): 18, 1–17, https://doi.org/10.1167/14.2.18. [PubMed] [Article]
Otero-Millan, J., Serra, A., Leigh, R. J., Troncoso, X. G., Macknik, S. L., & Martinez-Conde, S. (2011). Distinctive features of saccadic intrusions and microsaccades in progressive supranuclear palsy. The Journal of Neuroscience, 31 (12), 4379–4387, https://doi.org/10.1523/JNEUROSCI.2600-10.2011.
Parkinson, J., & Maxner, C. (2005). Eye movement abnormalities in Alzheimer disease: Case presentation and literature review. American Orthoptic Journal, 55 (1), 90–96, https://doi.org/10.3368/aoj.55.1.90.
Poletti, M., & Rucci, M. (2010). Eye movements under various conditions of image fading. Journal of Vision, 10 (3): 6, 1–18, https://doi.org/10.1167/10.3.6. [PubMed] [Article]
Poletti, M., Rucci, M., & Carrasco, M. (2017). Selective attention within the foveola. Nature Neuroscience, 20 (10), 1413–1417, https://doi.org/10.1038/nn.4622.
Port, N. L., Trimberger, J., Hitzeman, S., Redick, B., & Beckerman, S. (2016). Micro and regular saccades across the lifespan during a visual search of “Where's Waldo” puzzles. Vision Research, 118, 144–157, https://doi.org/10.1016/j.visres.2015.05.013.
Rodriguez, A., & Laio, A. (2014, June 27). Clustering by fast search and find of density peaks. Science, 344 (6191), 1492–1496, https://doi.org/10.1126/science.1242072.
Rolfs, M., Engbert, R., & Kliegl, R. (2004). Perception and motor control: The link between fixational eye movements and postural sway. Journal of Vision, 4 (8): 655, https://doi.org/10.1167/4.8.655. [Abstract]
Steinman, R. M., Haddad, G. M., Skavenski, A. A., & Wyman, D. (1973, August 31). Miniature eye movement. Science, 181 (4102), 810–819, https://doi.org/10.1126/science.181.4102.810.
Tian, X., Yoshida, M., & Hafed, Z. M. (2016). A microsaccadic account of attentional capture and inhibition of return in Posner cueing. Frontiers in Systems Neuroscience, 10, 23, https://doi.org/10.3389/fnsys.2016.00023.
Youden, W. J. (1950). Index for rating diagnostic tests. Cancer, 3 (1), 32–35, https://doi.org/10.1002/1097-0142(1950)3:1.
Yuval-Greenberg, S., Merriam, E. P., & Heeger, D. J. (2014). Spontaneous microsaccades reflect shifts in covert attention. Journal of Neuroscience, 34 (41), 13693–13700, https://doi.org/10.1523/JNEUROSCI.0582-14.2014.
Figure 1
 
Unsupervised clustering in the correlation feature space. (A) A multidimensional scaling (MDS) representation of candidate microsaccades. Each dot corresponds to a feature vector. A cluster on the left (shown by the red dots) represents potential microsaccades that are highly correlated and thus lie close to each other in the correlation space. Noise events are less correlated with microsaccades and with each other and thus form a low-density cloud of points (shown by the black dots). Note that the visible distance between the points on the plot does not reflect the true distance, as the feature vectors are projected onto the MDS plane. (B) The decision graph represents candidate microsaccades (small circles) as points in δρ space. The cluster center is detected as the point with the largest product δρ (the outlier in the upper-right part of the plot). Inset: The position trace of the cluster center (i.e., the prototypical microsaccade). (C) The characteristic distance dc is chosen as the point of the fastest increase of the k-dist curve. This value is used to separate microsaccades from noise, giving rise to the color labeling in (A).
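The decision-graph quantities in (B) follow the density-peak scheme of Rodriguez and Laio (2014): each event gets a local density ρ (neighbors within dc) and a distance δ to the nearest denser event, and the cluster center maximizes δρ. A minimal sketch, assuming a precomputed pairwise distance matrix (function and variable names here are illustrative, not the authors' code):

```python
import numpy as np

def decision_graph(dist, dc):
    """Return (rho, delta) for each candidate event.

    dist : (n, n) pairwise distance matrix
    dc   : characteristic distance from the k-dist curve
    """
    n = dist.shape[0]
    # Local density: number of neighbors closer than dc (excluding self)
    rho = (dist < dc).sum(axis=1) - 1
    delta = np.zeros(n)
    for i in range(n):
        denser = np.where(rho > rho[i])[0]
        if denser.size:
            # Distance to the nearest point of higher density
            delta[i] = dist[i, denser].min()
        else:
            # Densest point: conventionally assigned the maximum distance,
            # which places it in the upper right of the decision graph
            delta[i] = dist[i].max()
    return rho, delta

# The prototypical microsaccade is the event maximizing delta * rho:
# center = int(np.argmax(rho * delta))
```

In this scheme the densest cluster member stands out as the single outlier with both large ρ and large δ, matching the upper-right outlier in (B).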
Figure 2
 
Example traces and extracted microsaccades. (A–B) Horizontal position (top) and velocity \(v = \sqrt {v_x^2 + v_y^2} \) (bottom) during 10-s recording period for two different subjects. Microsaccades detected by the algorithm and manually labeled microsaccades are shown by the black and red triangles, respectively. (C–D) Shape of the microsaccades detected by the algorithm in the two subjects across all labeled trials. Top: Horizontal (left) and vertical (right) eye coordinate for correctly detected microsaccades (true positives). Mean microsaccade shape is shown by the black line. Bottom: Horizontal (left) and vertical (right) eye coordinate for false positives. Note the change in vertical scale. Only saccades with amplitude <2° are shown for clarity.
Figure 3
 
Performance comparison of the EK (in black), OM (in blue), and the new microsaccade detection method (in red) using eye recordings from aged subjects. (A) Average ROC curves (±SEM) across subjects for the three methods. Since the OM method has no free parameters, its average performance is shown by a point. (B) Each symbol corresponds to the performance of the method (in the ROC space) applied to the eye-tracking data of one subject. For the EK method (small circles) λ = 6, while for the new method (triangles) the parameter is chosen automatically, separately for each subject, using the estimation procedure described in Methods. Performance of the OM method is shown by the crosses. (C–D) ROC curves for all subjects resulting from the new method (C) and the EK method (D).
Figure 4
 
Performance comparison of the EK, OM, and the new microsaccade detection method on prefiltered data. (A) Per-subject data loss due to prefiltering. The loss is measured by the proportion of true microsaccades (MS num.) and the proportion of recorded time (Time) removed during prefiltering of the original data. (B–C) Mean ROC curves ± SEM (B) and per-subject performance (C) for the three methods.
Figure 5
 
Performance comparison of the EK, OM, and the new microsaccade detection method on low-noise DPI data. (A–B) Mean ROC curves ±SEM (A) and per-subject performance (B) for the three methods. (C) Velocity distributions in the horizontal (left) and vertical (right) components of eye traces, recorded in the current study (orange) and using a DPI eyetracker (blue).
Figure 6
 
Application of Bayesian methods for microsaccade detection. (A) Noise reduction in the velocity signal shown in Figure 2B as a result of the algorithm proposed by Daye and Optican (2014). Parameter values: N = 200, m = 10, Ω = 0.1, ξ = 5, λ = 0.005 (see Daye & Optican, 2014, for details). Four additional parameters need to be specified to extract microsaccades from the denoised trace. Manually labeled microsaccades are shown by the red triangles. (B–C) Microsaccades detected by the algorithm proposed by Mihali et al. (2017). Several events are detected (shown by the black triangles and vertical black lines) for each true microsaccade, both for the data from video-based eye-tracker (B) and DPI eye tracker (C).
Figure 7
 
Case study: Analysis of (micro)saccade properties in aged subjects. (A–C) Main sequence (A), normalized amplitude (B), and velocity (C) distributions. (D–F) Main sequence, amplitude, and velocity distributions for saccades smaller than 1° in amplitude. (G) Saccade frequency as a function of amplitude. (H) Normalized distribution of saccade directions in polar coordinates. (I) Normalized distribution of IMSI. Inset: The same data with a logarithmic vertical axis and linear regression fit (red line).
Figure 8
 
Flexibility of the algorithm with respect to the choice of the feature space. (A) Average ROC curves for the new method when the feature vectors were composed of two velocity components (red circles); of only the horizontal velocity component (blue triangles); and of horizontal position, velocity and acceleration (yellow triangles). (B) Average (across subjects) performance of the new method with different features (see [A]) in comparison with the EK method (black cross) and the OM method (black square). The value of dc for the new method was chosen using the procedure described in Methods. For the EK method λ = 6. (C) Decision graph for the same subject as in Figure 3 when the feature vectors included position, velocity, and acceleration data. The two cluster centers correspond to prototypical microsaccades in opposite directions. (D) MDS representation of the candidate microsaccades after clustering and separating noise events for the same subject (red and green dots correspond to two clusters in [C]).
Figure 9
 
Sensitivity of the new method to the choice of the free parameter. (A) Youden's index (±SEM across subjects) as a function of dc estimation error. The optimal parameter value \(d_c^{opt}\) is the parameter value corresponding to the maximal Youden's index (calculated based on the labeled data and different for each subject). Dashed line: 10% drop in the average Youden's index from its maximum. (B) Each curve shows Youden's index as a function of dc estimation error for one subject, normalized to a maximum value of 1. (C) Drop in performance (measured by the relative decrease in Youden's index) as a function of the error \(d_c^{opt} - d_c^{est}\), where \(d_c^{est}\) is produced by the estimation procedure proposed in Methods (separately for each subject). Each circle corresponds to one subject.
Figure 10
 
Detection performance of the new method as a function of microsaccade size. (A) Precision (i.e., the fraction of true labeled microsaccades among all events detected as microsaccades by the algorithm) as a function of the absolute value of the microsaccade amplitude for the OM (yellow), EK (blue), and the new method (red). (B) Sensitivity (i.e., the true positive rate), as a function of the amplitude.
Table 1
 
Performance comparison of the EK, OM, and the new microsaccade detection method. Notes: Precision = the fraction of true microsaccades as labeled by the experts among all events detected as microsaccades by the algorithm; Sensitivity = true positive rate; Adjusted FPR = adjusted false positive rate (see Methods); Youden's index (or informedness) = performance index corresponding to the height above the chance line in the ROC space, see text for details.
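The indices defined in the table notes can be illustrated from raw event counts. This is a generic sketch with hypothetical counts; the adjusted FPR from Methods is not reproduced here, and the plain false positive rate is used instead:

```python
def performance_indices(tp, fp, fn, tn):
    """Compute the indices from Table 1's notes given event counts.

    tp, fp, fn, tn : true/false positive and negative counts
    """
    precision = tp / (tp + fp)      # fraction of true microsaccades among detections
    sensitivity = tp / (tp + fn)    # true positive rate
    fpr = fp / (fp + tn)            # false positive rate (unadjusted)
    # Youden's index (informedness): height above the chance line in ROC space
    youden = sensitivity - fpr
    return precision, sensitivity, fpr, youden
```

For example, 80 hits, 20 false alarms, 20 misses, and 80 correct rejections give precision 0.8, sensitivity 0.8, FPR 0.2, and a Youden's index of 0.6.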