Amin Haji-Abolhassani, James J. Clark; A computational model for task inference in visual search. Journal of Vision 2013;13(3):29. doi: https://doi.org/10.1167/13.3.29.
We develop a probabilistic framework for inferring the ongoing task in visual search by revealing what the subject is looking for during the search process. Based on level of difficulty, two types of tasks, easy and difficult, are investigated in this work, and individual models are customized for each according to its specific dynamics. We use Hidden Markov Models (HMMs) to model the human cognitive process that directs the center of gaze (COG) according to the task at hand during visual search and generates task-dependent eye trajectories. This generative model is then used to estimate the likelihood term in a Bayesian inference formulation that infers the task given an eye trajectory. In the easy task, the focus of attention (FOA) usually lands on targets, whereas in the difficult task attention is also frequently deployed on nontarget objects. We therefore propose a single-state HMM as the cognitive process model of attention for the easy task and a multi-state HMM for the difficult task.
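The inference scheme described in the abstract can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the transition and emission probabilities, the two-symbol fixation coding (target-like vs. nontarget), and the uniform task prior are all hypothetical assumptions chosen for demonstration. Each candidate task is represented by an HMM; the forward algorithm supplies the likelihood of an observed fixation sequence under each model, and Bayes' rule converts these likelihoods into a posterior over tasks.

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """Forward algorithm: log P(obs | HMM with initial pi, transitions A, emissions B)."""
    alpha = pi * B[:, obs[0]]          # initialize with first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate and weight by emission probability
    return np.log(alpha.sum())

# Hypothetical toy models (all numbers illustrative, not taken from the paper).
# Observation symbols: 0 = fixation on a target-like object, 1 = fixation on a nontarget.
easy = dict(pi=np.array([1.0]),
            A=np.array([[1.0]]),
            B=np.array([[0.9, 0.1]]))          # single state: FOA mostly on targets

hard = dict(pi=np.array([0.5, 0.5]),
            A=np.array([[0.7, 0.3],
                        [0.4, 0.6]]),
            B=np.array([[0.8, 0.2],
                        [0.2, 0.8]]))          # two states: on-target vs. distractor fixations

def infer_task(obs, models, prior=None):
    """Posterior over tasks given an eye trajectory, via Bayes' rule."""
    names = list(models)
    log_lik = np.array([hmm_log_likelihood(obs, **models[n]) for n in names])
    if prior is None:
        prior = np.full(len(names), 1.0 / len(names))  # uniform task prior (assumption)
    post = np.exp(log_lik) * prior
    post /= post.sum()
    return dict(zip(names, post))

# A trajectory with many nontarget fixations is better explained by the multi-state model.
trajectory = [0, 1, 1, 0, 1, 1, 1]
posterior = infer_task(trajectory, {"easy": easy, "difficult": hard})
print(max(posterior, key=posterior.get))  # → difficult
```

For short sequences the forward recursion can run in probability space as above; longer trajectories would call for log-space or scaled recursions to avoid underflow.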