Vision Sciences Society Annual Meeting Abstract  |   September 2011
Improving gaze accuracy and predicting fixation in real time with video based eye trackers
Author Affiliations
  • Paul Ivanov
    Vision Science, UC Berkeley, USA
    Redwood Center for Theoretical Neuroscience, UC Berkeley, USA
  • Tim Blanche
    Redwood Center for Theoretical Neuroscience, UC Berkeley, USA
    Helen Wills Neuroscience Institute, UC Berkeley, USA
Journal of Vision September 2011, Vol. 11, 505. doi: https://doi.org/10.1167/11.11.505
      Paul Ivanov, Tim Blanche; Improving gaze accuracy and predicting fixation in real time with video based eye trackers. Journal of Vision 2011;11(11):505. https://doi.org/10.1167/11.11.505.
      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Studies of eye movements require accurate gaze, fixation, and saccade detection, and most recent studies use video-based eye trackers for this purpose. We present two methods that significantly improve current eye-tracking technology, with only minor additions to standard experimental protocols. First, for video-based eye trackers, we characterize a significant pupil-size-dependent artifact that systematically biases reported gaze position. By varying display luminance while subjects maintain fixation, we observe that the resulting changes in pupil size induce a gaze position error, and we obtain an empirical correction for it. Applying our technique in software to a commercial video-based eye tracker, we obtain a substantial improvement in the accuracy of gaze position. After correction, the standard deviation of gaze positions around a point of fixation during a 10-second interval is reduced by as much as 7.5× and 5.9× in the worst case, with an average reduction of 2.29× and 2.95× across subjects (n = 6) and screen positions (m = 9), for the horizontal and vertical directions, respectively. Additionally, we describe a simple yet effective method for predicting the next fixation while a saccade is in flight. Leveraging the relationship between peak velocity and the time remaining in a saccade, we fit model parameters to individual subjects and then use on-line velocity data to predict future fixations. To evaluate the scheme, subjects free-viewed a four-minute introduction to a nature documentary. For a stimulus display refresh rate of 100 Hz, we correctly predict fixation onsets to within a frame 95% of the time. Our methodology improves gaze accuracy and gives experimenters direct access to a window of time immediately around the onset of fixation, opening the door for gaze- and saccade-contingent experiments using current commercial eye trackers.
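The fixation-prediction idea can be sketched as follows: because a saccade's peak velocity is tightly related to how long the saccade will continue, a per-subject fit lets us forecast fixation onset while the eye is still moving. This is a minimal illustrative sketch, not the authors' actual model; the linear relation between peak velocity and time remaining after the peak, and all numeric values, are assumptions made up for the example.

```python
import numpy as np

def fit_peak_velocity_model(peak_velocities, times_to_fixation):
    """Least-squares fit of: time remaining after peak ~ a * peak_velocity + b.

    A linear form is an assumption for illustration; any monotone
    per-subject fit of this relationship would serve the same role.
    """
    a, b = np.polyfit(peak_velocities, times_to_fixation, deg=1)
    return a, b

def predict_fixation_onset(t_peak, peak_velocity, a, b):
    """Predict fixation-onset time once the in-flight velocity peak is seen."""
    return t_peak + a * peak_velocity + b

# Synthetic "calibration" saccades (deg/s and seconds), purely illustrative.
rng = np.random.default_rng(0)
v_peak = rng.uniform(100, 600, size=50)                      # peak velocities
t_rem = 0.0002 * v_peak + 0.01 + rng.normal(0, 0.002, 50)    # time left after peak

a, b = fit_peak_velocity_model(v_peak, t_rem)

# On-line use: a saccade's velocity peaks at t = 1.50 s at 400 deg/s.
t_hat = predict_fixation_onset(t_peak=1.50, peak_velocity=400.0, a=a, b=b)
frame = 1.0 / 100.0  # 100 Hz display refresh, as in the abstract
print(f"predicted fixation onset: {t_hat:.3f} s (one frame = {frame * 1000:.0f} ms)")
```

In a real experiment the calibration pairs would come from detected saccades in a subject's own data, and the prediction would be issued as soon as the velocity peak is identified, leaving one or more display frames to prepare a gaze-contingent stimulus change.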

NEI Grant T32 EY007045, NSF Grant IIS-0705939, Redwood Center Endowment. 