Vision Sciences Society Annual Meeting Abstract  |   September 2011
Realization of an Inverse Yarbus Process via Hidden Markov Models for Visual-Task Inference
Author Affiliations
  • Amin Haji Abolhassani
    Centre for Intelligent Machines, Department of Electrical and Computer Engineering, McGill University, Canada
  • James J. Clark
    Centre for Intelligent Machines, Department of Electrical and Computer Engineering, McGill University, Canada
Journal of Vision September 2011, Vol.11, 218. doi:10.1167/11.11.218

      Amin Haji Abolhassani, James J. Clark; Realization of an Inverse Yarbus Process via Hidden Markov Models for Visual-Task Inference. Journal of Vision 2011;11(11):218. doi: 10.1167/11.11.218.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

It has long been known that visual task greatly influences eye movement patterns. Perhaps the best demonstration of this is the celebrated study of Yarbus, showing that different eye movement scanpaths emerge depending on the visual task given to the observer. The forward Yarbus process, that is, the effect of visual task on eye movement patterns, has been investigated for a variety of tasks. In this work, we have developed an inverse Yarbus process whereby we can infer the visual task by observing the measurements of a viewer's eye movements while the task is being executed. To do so, we first need to track the allocation of attention, since different tasks entail attending to different locations in an image, and tracking attention therefore leads us to task inference. Eye position alone does not tell the whole story when it comes to tracking attention. While it is well known that there is a strong link between eye movements and attention, the attentional focus is nevertheless frequently well away from the current eye position. Eye-tracking methods may be appropriate when the subject is carrying out a task that requires foveation; however, they are of little use (and can even be counter-productive) when the subject is engaged in tasks requiring peripheral vigilance. The model we have developed for attention tracking uses Hidden Markov Models (HMMs), in which covert (and overt) attention is represented by the hidden states of task-dependent HMMs. Fixation locations thus correspond to the observations of an HMM and were used to train the task-dependent models (via the Baum-Welch algorithm), whereby we could evaluate the likelihood of observing an eye trajectory given a task (via the forward algorithm). Having this likelihood term, we were able to apply Bayesian inference and recognize the ongoing task from the eye movements of subjects performing a number of simple visual tasks.
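The inference step described in the abstract can be sketched in a few lines. The sketch below is a minimal illustration, not the authors' implementation: it assumes hand-made (hypothetical) task-dependent HMM parameters in place of Baum-Welch-trained ones, computes the likelihood of a fixation sequence under each task model with the forward algorithm, and then applies Bayes' rule over a uniform task prior. The task names, attention states, and fixation regions are illustrative only.

```python
# Hedged sketch of HMM-based visual-task inference.
# All model parameters below are hypothetical, chosen only to illustrate
# the forward-algorithm + Bayesian-inference pipeline the abstract describes.

def forward_likelihood(obs, pi, A, B):
    """P(obs | model) via the forward algorithm.

    pi: initial hidden-state probabilities
    A:  state transition matrix, A[s_prev][s]
    B:  emission matrix, B[s][fixation_region]
    """
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[sp] * A[sp][s] for sp in range(n)) * B[s][o]
                 for s in range(n)]
    return sum(alpha)

# Two hypothetical task-dependent HMMs: 2 attention states, 3 fixation regions.
tasks = {
    "free_view": ([0.5, 0.5],
                  [[0.7, 0.3], [0.3, 0.7]],
                  [[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]]),
    "search":    ([0.9, 0.1],
                  [[0.2, 0.8], [0.8, 0.2]],
                  [[0.1, 0.1, 0.8], [0.8, 0.1, 0.1]]),
}

def infer_task(obs, tasks, prior=None):
    """Posterior over tasks given a fixation sequence (Bayes' rule)."""
    prior = prior or {t: 1.0 / len(tasks) for t in tasks}
    post = {t: forward_likelihood(obs, *m) * prior[t] for t, m in tasks.items()}
    z = sum(post.values())
    return {t: p / z for t, p in post.items()}

fixations = [2, 0, 2, 0]               # an observed fixation-region sequence
posterior = infer_task(fixations, tasks)
```

In a full system, the per-task parameters would instead be fit to training scanpaths with Baum-Welch, and the same forward-likelihood term would feed the Bayesian task posterior exactly as above.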

Le Fonds Québécois de la Recherche sur la Nature et les Technologies (FQRNT). 