Open Access
Vision Sciences Society Annual Meeting Abstract | September 2018
Classification and Statistics of Gaze In World Events
Author Affiliations
  • Rakshit Kothari
    Imaging Science, College of Science, Rochester Institute of Technology
  • Zhizhuo Yang
    Imaging Science, College of Science, Rochester Institute of Technology
  • Kamran Binaee
    Imaging Science, College of Science, Rochester Institute of Technology
  • Reynold Bailey
    Imaging Science, College of Science, Rochester Institute of Technology
  • Christopher Kanan
    Imaging Science, College of Science, Rochester Institute of Technology
  • Jeff Pelz
    Imaging Science, College of Science, Rochester Institute of Technology
  • Gabriel Diaz
    Imaging Science, College of Science, Rochester Institute of Technology
Journal of Vision September 2018, Vol. 18, 376. https://doi.org/10.1167/18.10.376
Abstract

It is known that the head and eyes function synergistically to collect the task-relevant visual information needed to guide action. Investigating eye/head coordination has been difficult, however, because most gaze-event classifiers algorithmically define a fixation as a period when the eye-in-head velocity signal is stable. When the head is free to move, fixations also arise from coordinated movements of the eyes and head, for example through the vestibulo-ocular reflex, so identifying fixations under head-free conditions requires accounting for head rotation. Our approach was to instrument multiple subjects with a 6-axis inertial measurement unit and a 120 Hz SMI ETG2 eye tracker to record the angular velocities of the eyes and head as they performed two tasks (ball catching and indoor walking) for 5 minutes each, yielding over 40 minutes of gaze data. Four experts manually annotated a portion of the dataset as periods of gaze fixations (GF), gaze pursuits (GP), and gaze shifts (GS), and each data sample was assigned the majority-vote label across labelers. This dataset was then used to train a novel 2-stage Forward-Backward Recurrent Window (FBRW) classifier for automated event labeling. Inter-labeler reliability (Fleiss' kappa) was used to compare the performance of the trained classifiers and the human labelers. We found that a window of 64 to 78 ms provides enough context to classify samples with an accuracy above 99% on a subset of the labeled data held out from training. In addition, analysis of Fleiss' kappa indicates that the algorithm classifies at a rate on par with human labelers. This algorithm provides new insight into the statistics of natural eye/head coordination: for example, preliminary statistics indicate that fixation rarely occurs through stabilization of the eye-in-head vector alone, but rather through coordinated movements of the eyes and head with an average gain of 1.
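
Two of the procedures mentioned above, majority-vote labeling and Fleiss' kappa, are standard enough to sketch. The Python below is not from the abstract: the array shapes, function names, and the lowest-index tie-breaking rule are assumptions, and the FBRW classifier itself is not specified here. It builds a per-sample vote-count matrix from the four labelers' annotations, derives majority-vote ground truth, and computes Fleiss' kappa from the same counts.

```python
import numpy as np

# Assumed encoding (not stated in the abstract): each gaze sample gets one
# label per expert, with 0 = gaze fixation (GF), 1 = gaze pursuit (GP),
# 2 = gaze shift (GS).
N_CLASSES = 3

def vote_counts(annotations: np.ndarray, n_classes: int = N_CLASSES) -> np.ndarray:
    """(n_samples, n_labelers) integer labels -> (n_samples, n_classes) vote counts."""
    counts = np.zeros((annotations.shape[0], n_classes), dtype=int)
    for c in range(n_classes):
        counts[:, c] = (annotations == c).sum(axis=1)
    return counts

def majority_vote(counts: np.ndarray) -> np.ndarray:
    """Per-sample ground truth = class with the most votes.

    Note: argmax resolves a 2-2 tie by taking the lower class index; the
    abstract does not say how ties were actually handled.
    """
    return counts.argmax(axis=1)

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for a fixed number of raters per item.

    counts[i, c] = number of raters who assigned class c to sample i.
    """
    n_raters = counts.sum(axis=1)[0]              # same raters for every sample
    p_class = counts.sum(axis=0) / counts.sum()   # marginal class proportions
    # Fraction of agreeing rater pairs per sample.
    p_sample = (counts * (counts - 1)).sum(axis=1) / (n_raters * (n_raters - 1))
    p_bar, p_e = p_sample.mean(), (p_class ** 2).sum()
    return (p_bar - p_e) / (1.0 - p_e)

# Example with 3 samples annotated by 4 labelers (synthetic data):
votes = np.array([[0, 0, 0, 1],
                  [2, 2, 2, 2],
                  [1, 1, 0, 1]])
counts = vote_counts(votes)
print(majority_vote(counts))            # [0 2 1]
print(round(fleiss_kappa(counts), 3))   # 0.5
```

For scale: at the 120 Hz sampling rate reported for the eye tracker, one sample spans about 8.3 ms, so the 64 to 78 ms context window corresponds to roughly 8 to 9 samples per classification decision.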

Meeting abstract presented at VSS 2018
