Abstract
Numerous investigations have revealed that eye movements and fixation locations differ as a function of how an individual is processing a scene (e.g., Castelhano et al., 2009; Dodd et al., 2009; Land & Hayhoe, 2001; Mills et al., 2011; Yarbus, 1967). As a consequence, a common question of interest is whether a participant’s task can be predicted from their observed pattern of eye movements. To that end, a number of researchers have taken a cue from the machine learning literature and attempted to train task-set classifiers with varying degrees of success (e.g., Borji & Itti, 2014; Greene et al., 2012; Henderson et al., 2013). In the present experiments, we examine whether human participants can effectively classify task set based on the eye movements of others and how their performance compares to that of a recent classifier (MacInnes et al., VSS, 2014). Participants view either a) the fixation locations and fixation durations of an individual scanning a scene (independent of scanpath), b) the scanpaths of an individual scanning a scene (independent of fixation durations), or c) video playback of eye movement locations (preserving both scanpath and duration information), and attempt to determine whether the original task was visual search, memorization, or pleasantness rating. Moreover, eye movement information is presented either with the original scene visible or with the scene absent. Participants perform this task at above-chance levels, though there is considerable variability in performance as a function of task type (e.g., better at identifying search), whether the scene is present or absent, and whether the original task was performed under blocked or mixed (task-switching) conditions. These results provide important insight into our understanding of scene perception and the manner in which individuals interpret the eye movements of others.
Meeting abstract presented at VSS 2015