Abstract
Studies of eye movements require accurate detection of gaze position, fixations, and saccades, and most recent studies use video-based eye trackers for this purpose. We present two methods that significantly improve current eye-tracking technology with only minor additions to standard experimental protocols. First, for video-based eye trackers, we characterize a significant pupil-size-dependent artifact that systematically biases reported gaze position. By varying display luminance while subjects maintain fixation, we observe corresponding changes in pupil size that induce a gaze-position error, and we obtain an empirical correction for it. Applying this correction in software to a commercial video-based eye tracker yields a substantial improvement in gaze-position accuracy. After correction, the standard deviation of gaze positions around a point of fixation during a 10-second interval decreases by as much as 7.5× and 5.9× in the worst case, and by 2.29× and 2.95× on average across subjects (n = 6) and screen positions (m = 9), in the horizontal and vertical directions, respectively. Second, we describe a simple yet effective method for predicting the next fixation while a saccade is still in flight. Leveraging the relationship between peak velocity and the time remaining in a saccade, we fit model parameters to individual subjects and then use on-line velocity data to predict upcoming fixations. To evaluate the scheme, subjects free-viewed a four-minute introduction to a nature documentary. For a stimulus display refresh rate of 100 Hz, we correctly predict fixation onsets to within one frame 95% of the time. Our methodology improves gaze accuracy and gives experimenters direct access to the window of time immediately around fixation onset, opening the door to gaze- and saccade-contingent experiments using current commercial eye trackers.
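The pupil-size correction summarized above can be sketched as a simple calibrate-then-subtract procedure. This is an illustrative reconstruction, not the paper's implementation: the polynomial model form, the function names, and the calibration numbers are all assumptions.

```python
import numpy as np

def fit_pupil_correction(pupil_sizes, gaze_errors, degree=2):
    """Fit an empirical map from pupil size to gaze-position error,
    using calibration trials in which the subject fixates a known point
    while display luminance (and hence pupil size) is varied.
    The polynomial form is an assumption chosen for illustration."""
    return np.polyfit(pupil_sizes, gaze_errors, degree)

def correct_gaze(raw_gaze, pupil_size, coeffs):
    """Subtract the predicted pupil-size-dependent error from the
    tracker's reported gaze position."""
    return raw_gaze - np.polyval(coeffs, pupil_size)

# Hypothetical calibration data: pupil diameter (mm) vs. horizontal
# gaze error (deg) measured while the subject fixates a known target.
pupil = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
error = np.array([0.1, 0.3, 0.6, 1.0, 1.5])
coeffs = fit_pupil_correction(pupil, error)

# Apply the correction to a raw gaze sample at a given pupil size.
corrected = correct_gaze(10.0, 4.0, coeffs)
```

In practice the fit would be performed per subject (and possibly per screen region), since the bias varies across both.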
Supported by NEI Grant T32 EY007045, NSF Grant IIS-0705939, and the Redwood Center Endowment.
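The in-flight fixation prediction can likewise be sketched in a few lines: fit a per-subject map from saccadic peak velocity to the time remaining after the peak, then, once the velocity peak is detected on-line, extrapolate the fixation-onset time. This is a hedged sketch under assumed details: the linear model form, the function names, and the calibration values are illustrative, not the paper's exact method.

```python
import numpy as np

def fit_peak_velocity_model(peak_velocities, times_remaining):
    """Fit a per-subject linear map from saccade peak velocity to the
    time remaining between the peak and fixation onset (model form is
    an assumption for illustration)."""
    slope, intercept = np.polyfit(peak_velocities, times_remaining, 1)
    return slope, intercept

def predict_fixation_onset(t_peak, v_peak, slope, intercept):
    """Given the time and magnitude of the detected velocity peak,
    predict when the upcoming fixation will begin."""
    return t_peak + slope * v_peak + intercept

# Hypothetical per-subject calibration data:
# peak velocity (deg/s) vs. time from peak to fixation onset (ms).
v_peaks = np.array([200.0, 300.0, 400.0, 500.0])
t_left = np.array([25.0, 32.0, 40.0, 47.0])
slope, intercept = fit_peak_velocity_model(v_peaks, t_left)

# On-line use: velocity peaked at t = 100 ms with magnitude 350 deg/s.
onset = predict_fixation_onset(100.0, 350.0, slope, intercept)
```

With a 100 Hz display, a prediction of this kind leaves roughly 2–4 frames of lead time after the peak, which is what makes gaze-contingent display updates at fixation onset feasible.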