Vision Sciences Society Annual Meeting Abstract | September 2015
Task Decoding using Recurrence Quantification Analysis of Eye Movements
Author Affiliations
  • Daniel LaCombe, Jr.
    Department of Psychology, Florida Atlantic University
  • Elan Barenholtz
    Department of Psychology, Florida Atlantic University
    Center for Complex Systems, Florida Atlantic University
Journal of Vision September 2015, Vol. 15, 1271. doi: https://doi.org/10.1167/15.12.1271
Citation: Daniel LaCombe, Jr., Elan Barenholtz; Task Decoding using Recurrence Quantification Analysis of Eye Movements. Journal of Vision 2015;15(12):1271. https://doi.org/10.1167/15.12.1271.

Abstract

In recent years, there has been a surge of interest in using machine-learning techniques to decode the generating properties of eye-movement data (e.g., observer or stimulus identity). Previous approaches have considered only aggregate or purely spatial measures of eye movements. Here we explore a relatively new approach to eye-movement quantification, Recurrence Quantification Analysis (RQA), which captures spatio-temporal fixation patterns, and assess its diagnostic power with respect to task decoding. Fifty participants completed both aesthetic-judgment and visual-search tasks over natural images of indoor scenes. Six sets of features were extracted from the eye-movement data: Aggregate (nFix, meanFixDur, meanSacAmp, area); fixMap (smoothed fixation map); RQA (recurrence, determinism, laminarity, center of recurrence mass); RQA2 (the RQA features plus size, regression, and latency of recurrence mass); RQAprob (probabilistic version of RQA); and RQAprob2 (probabilistic version of RQA2). These feature vectors were then used to train six separate support vector machines, using an n-fold cross-validation procedure, to classify a scanpath as having been generated under either an aesthetic-judgment or a visual-search task. Analyses indicated that all classifiers decoded task significantly better than chance. Pairwise comparisons with Bonferroni-corrected alpha values revealed that all RQA feature sets afforded significantly greater decoding accuracy than the aggregate features. The superior performance of the RQA features may reflect their relative invariance to changes in observer or stimulus: although RQA features significantly decoded observer and stimulus identity, analyses indicated that the spatial distribution of fixations was most informative about stimulus identity, whereas aggregate measures were most informative about observer identity. Changes in RQA values could therefore be attributed to changes in task, rather than to observer or stimulus, with greater confidence than for the other feature sets. These findings have significant implications for the application of RQA to studying eye-movement dynamics in top-down attention.
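
The abstract names the core RQA measures and the classification pipeline but, as a meeting abstract, gives no implementation detail. Below is a minimal Python sketch of one common formulation of fixation-based RQA followed by SVM task decoding. The recurrence radius, minimum line length, linear kernel, 10 folds, and the synthetic data are illustrative assumptions, not values reported by the authors; scikit-learn is assumed only for convenience.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def run_points(line, min_line):
    """Count points belonging to runs of True of length >= min_line."""
    total, run = 0, 0
    for v in list(line) + [False]:
        if v:
            run += 1
        else:
            if run >= min_line:
                total += run
            run = 0
    return total


def rqa_features(fixations, radius=64.0, min_line=2):
    """Recurrence, determinism, laminarity, and CORM for one scanpath.

    fixations: (N, 2) array of fixation x/y coordinates (pixels).
    Two fixations are 'recurrent' if they lie within `radius` of each other.
    The radius and minimum line length here are illustrative assumptions.
    """
    n = len(fixations)
    dists = np.linalg.norm(fixations[:, None, :] - fixations[None, :, :], axis=-1)
    rec = dists <= radius
    np.fill_diagonal(rec, False)   # ignore trivial self-recurrence
    upper = np.triu(rec)           # recurrences above the main diagonal
    r = upper.sum()
    if r == 0:
        return np.zeros(4)

    # Recurrence: percentage of fixation pairs that are recurrent.
    recurrence = 100.0 * 2.0 * r / (n * (n - 1))

    # Determinism: share of recurrent points on diagonal line structures,
    # i.e., repeated sub-sequences of fixations.
    det_points = sum(run_points(np.diagonal(upper, offset=k), min_line)
                     for k in range(1, n))
    determinism = 100.0 * det_points / r

    # Laminarity: share of recurrent points on vertical/horizontal lines,
    # i.e., single locations refixated in detail. The matrix is symmetric,
    # so counting rows covers both orientations.
    lam_points = sum(run_points(rec[i], min_line) for i in range(n))
    laminarity = 100.0 * lam_points / (2.0 * r)

    # Center of recurrence mass: how far, on average, recurrences sit from
    # the main diagonal; large values mean refixations after long delays.
    idx = np.arange(n)
    corm = 100.0 * ((idx[None, :] - idx[:, None]) * upper).sum() / ((n - 1) * r)

    return np.array([recurrence, determinism, laminarity, corm])


# Task decoding: one feature vector per scanpath; labels 0 = aesthetic
# judgment, 1 = visual search. Synthetic data here, for illustration only.
rng = np.random.default_rng(0)
scanpaths = [rng.uniform(0, 1024, size=(rng.integers(15, 40), 2))
             for _ in range(100)]
X = np.vstack([rqa_features(s) for s in scanpaths])
y = rng.integers(0, 2, size=len(X))

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=10)   # n-fold cross-validation
print(f"mean decoding accuracy: {scores.mean():.2f}")  # ~chance on random labels
```

On real data, each row of X would come from one trial's scanpath, and the richer feature sets described above (RQA2, RQAprob, RQAprob2) would simply extend this feature vector before training.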

Meeting abstract presented at VSS 2015
