Journal of Vision
September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2017
Evaluating the Importance of Top-Down "Semantic" Features to Decoding Observer Task from Eye Movements
Author Affiliations
  • Dylan Rose
    Psychology, Northeastern University
  • Peter Bex
    Psychology, Northeastern University
Journal of Vision August 2017, Vol.17, 1124. doi:https://doi.org/10.1167/17.10.1124

Citation: Dylan Rose, Peter Bex; Evaluating the Importance of Top-Down "Semantic" Features to Decoding Observer Task from Eye Movements. Journal of Vision 2017;17(10):1124. https://doi.org/10.1167/17.10.1124.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Since Yarbus (1967), there has been considerable debate over the influence that "top-down" factors, such as scene knowledge or task, have on eye movement behavior. Many studies have therefore attempted to decode some feature of an observer's cognitive state from their eye movements. However, until recently it has been challenging to embed such "high-level" information directly into images, so few studies have examined the potential role that top-down scene or task features could play in a decoding procedure designed to infer attributes of the observer's cognition. We evaluated the importance of a novel set of such high-level features for the performance of a classifier built to decode an observer's task during natural scene inspection. Subjects performed one of three tasks while viewing each of 210 natural scene images taken from the LabelMe image database: free viewing, object counting, or inspection in preparation for a written description of the scene. For each subject/trial pairing, three types of features were computed: eye movement features, image salience features of the inspected objects and of the entire scene, and a novel set of "semantic" features. The semantic features described the semantic relatedness of all labeled objects within a scene to one another, to a scene gist label, and between the objects sequentially inspected by the subject. Semantic relatedness was calculated as cosine similarity within a shared vector-space language model (word2vec; Mikolov et al., 2013). A random-forest classifier applied to these data achieved significantly above-chance accuracy, and variable importance measures rated the semantic features among the most important to classifier performance. We therefore suggest that these or similar features should be computed and used in studies that aim to infer an observer's cognitive state from the pattern of their eye movements in a scene.
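
The semantic-relatedness computation can be made concrete. The following is a minimal Python sketch, assuming a pretrained word2vec model loaded with gensim; the vector file, function names, and example labels are hypothetical illustrations, not details reported in the abstract.

```python
import numpy as np
from gensim.models import KeyedVectors

# Hypothetical pretrained embeddings: the abstract cites word2vec
# (Mikolov et al., 2013) but does not name the vector file used.
kv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_features(object_labels, gist_label, scanpath_labels):
    """The three relatedness families described in the abstract:
    object-to-object, object-to-gist, and sequentially inspected objects."""
    known = lambda w: w in kv  # skip labels missing from the vocabulary
    obj_obj = [cosine(kv[a], kv[b])
               for i, a in enumerate(object_labels)
               for b in object_labels[i + 1:]
               if known(a) and known(b)]
    obj_gist = [cosine(kv[w], kv[gist_label])
                for w in object_labels
                if known(w) and known(gist_label)]
    scanpath = [cosine(kv[a], kv[b])
                for a, b in zip(scanpath_labels, scanpath_labels[1:])
                if known(a) and known(b)]
    # Summary statistics (means here) serve as trial-level classifier features.
    return {"object_object": np.mean(obj_obj),
            "object_gist": np.mean(obj_gist),
            "scanpath": np.mean(scanpath)}

# Example: a hypothetical kitchen scene and a fixation sequence over its objects.
feats = semantic_features(["stove", "kettle", "window"], "kitchen",
                          ["kettle", "stove", "kettle", "window"])
```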

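The decoding step names a random-forest classifier and variable-importance measures but no implementation; one way to realize it is with scikit-learn, as sketched below. The feature matrix here is randomly generated placeholder data with illustrative shapes only, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: one row per subject/trial pairing, columns mixing eye
# movement, salience, and semantic features (all values synthetic).
rng = np.random.default_rng(0)
X = rng.normal(size=(630, 24))
y = rng.integers(0, 3, size=630)  # 3 tasks: free view / count / describe

clf = RandomForestClassifier(n_estimators=500, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"Cross-validated decoding accuracy: {acc:.3f} (chance = 1/3)")

# Mean-decrease-in-impurity importances, one common variable-importance
# measure for ranking features such as the semantic ones sketched above.
clf.fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]
print("Most important feature columns:", ranking[:5])
```
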
Meeting abstract presented at VSS 2017
