All this leads us to the observation that even though SP is as important a part of viewing behavior as, e.g., saccades, it is substantially underrepresented and often entirely overlooked in current eye movement detection approaches (Olsen,
2012; Mould, Foster, Amano, & Oakley,
2012; Kasneci, Kasneci, Kübler, & Rosenstiel,
2014; Anantrasirichai et al.,
2016; Steil et al.,
2018; Zemblys et al.,
2019), highlighting the need to develop accurate pursuit classification algorithms (Andersson et al.,
2017). Notably, all eye movement classification methods to date share a common property: they process only one gaze recording of a single observer at a time, thus never accounting for the synchrony of the eye movements performed by different observers viewing the same stimulus (Startsev, Göb, & Dorr,
2019). This limitation has benefits in terms of online applicability and the absence of additional data set restrictions, and processing recordings individually also seems sufficient for detecting saccades and fixations, which have relatively well-defined speed and acceleration ranges. For SPs, however, a simple analysis of gaze speed might not be sufficient to differentiate them from drifts (Yarbus,
1967, Chapter VI, Section 2), noisy fixations, or slow saccades (we present speed distributions later in the paper). Some algorithms therefore include acceleration thresholds in order to avoid misclassifying slow saccades as pursuits (e.g., Mital, Smith, Hill, & Henderson, 2011, or the SR Research saccade detector; SR Research, 2009). Mital et al. (
2011) then simply combine all “nonsaccadic eye movements” into one category. While this is sufficient for some applications, various areas of eye movement research require distinguishing between the different ways of looking at gaze targets, in terms of execution or perception (Schütz, Braun, & Gegenfurtner,
2011; Spering et al.,
2011; Silberg et al.,
2019).
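To make the limitation discussed above concrete, the following is a minimal sketch of a single-observer, speed- and acceleration-threshold classifier. It is not a reimplementation of any cited detector: the threshold values, the function name classify_samples, and the three-way labeling are illustrative assumptions only. Every sample that is neither fast enough to be a saccade nor slow enough to be a fixation lands in a single “other” bin, which is exactly where SP, drift, and noisy fixations remain conflated.

```python
import numpy as np

# Illustrative thresholds (assumptions for this sketch, not values from any
# cited algorithm).
SACCADE_SPEED = 30.0     # deg/s: above this, a sample is treated as saccadic
SACCADE_ACCEL = 8000.0   # deg/s^2: acceleration criterion for slow saccades
FIXATION_SPEED = 5.0     # deg/s: below this, a sample is treated as fixation


def classify_samples(x_deg, y_deg, t_s):
    """Label each gaze sample as 'saccade', 'fixation', or 'other'.

    x_deg, y_deg: gaze position in degrees of visual angle; t_s: timestamps
    in seconds. 'other' collects every non-saccadic, non-fixation sample,
    so smooth pursuit, drift, and noisy fixations end up undifferentiated.
    """
    # Per-sample speed and (absolute) acceleration from finite differences.
    speed = np.hypot(np.gradient(x_deg, t_s), np.gradient(y_deg, t_s))
    accel = np.abs(np.gradient(speed, t_s))

    labels = np.full(len(t_s), "other", dtype=object)
    labels[speed < FIXATION_SPEED] = "fixation"
    # A sample counts as saccadic if it is either fast enough or, in the case
    # of slow saccades, if its acceleration exceeds the acceleration threshold.
    labels[(speed > SACCADE_SPEED) | (accel > SACCADE_ACCEL)] = "saccade"
    return labels
```

Because SP speeds overlap with those of drifts and noisy fixations, no choice of the speed threshold alone separates them in this scheme; the acceleration criterion only guards against slow saccades leaking into the non-saccadic bin.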