Abstract
In an influential yet anecdotal illustration, Yarbus suggested that human eye movement patterns are modulated top-down by different task demands. While the hypothesis that it is possible to decode the observer's task from eye movements has received some support (e.g., Iqbal & Bailey, 2004; Henderson et al., 2013), Greene et al. (2012) recently argued against it by reporting a failure to do so. Here, we perform a more systematic investigation of this problem and probe a larger number of experimental factors than previous studies. Our main goal is to determine the informativeness of eye movements for task and mental state decoding. We argue that task decoding accuracy depends critically on three factors: 1) spatial image information, 2) classification technique, and 3) image and observer idiosyncrasies. We perform two experiments. In the first experiment, we re-analyze the data of Greene et al. (2012) and, contrary to their conclusion, report that the observer's task can be decoded from aggregate eye movement features slightly but significantly above chance using a boosting classifier (34.12% correct vs. 25% chance level; binomial test, p = 1.07e-04). In the second experiment, we repeat and extend Yarbus' original experiment by collecting eye movements of 21 observers viewing 15 natural scenes (including Yarbus' scene) under Yarbus' seven questions. We show that task decoding is again possible, moderately but significantly above chance (24.21% vs. 14.29% chance level; binomial test, p = 2.45e-06). We also find that task decoding accuracy is higher for images that contain more information relevant to answering the questions than for other images. Thus, we conclude that Yarbus' idea is supported by our data and continues to be an inspiration for future computational and experimental eye movement research. From a broader perspective, we discuss techniques, features, limitations, societal and technological impacts, and future directions in task decoding from eye movements.
Meeting abstract presented at VSS 2014
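As an illustration of the significance testing reported above, the following is a minimal sketch (not the authors' code) of a one-sided binomial test comparing decoding accuracy against chance level; the number of classified trials (n_trials) is a hypothetical placeholder, since the abstract does not report it.

```python
# Sketch: is decoding accuracy significantly above chance?
# Assumptions: n_trials is hypothetical; accuracy and chance follow the
# abstract's Experiment 1 values (34.12% correct vs. 25% chance level).
from scipy.stats import binomtest

n_trials = 200            # hypothetical number of classified trials
accuracy = 0.3412         # observed decoding accuracy
chance = 0.25             # chance level for a 4-way task classification

n_correct = round(accuracy * n_trials)
result = binomtest(n_correct, n_trials, p=chance, alternative="greater")
print(f"Decoding accuracy {accuracy:.2%} vs. chance {chance:.0%}: "
      f"p = {result.pvalue:.3g}")
```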