September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Information Fusion Based on Fixation Patterns and Semantic Analysis for Observer Identification during Reading
Author Affiliations
  • Akram Bayat
    Computer Science, University of Massachusetts Boston
  • AmirHossein Bayat
    Computer Engineering, Iran University of Science and Technology
  • Marc Pomplun
    Computer Science, University of Massachusetts Boston
Journal of Vision August 2017, Vol.17, 531. doi:https://doi.org/10.1167/17.10.531
      Akram Bayat, AmirHossein Bayat, Marc Pomplun; Information Fusion Based on Fixation Patterns and Semantic Analysis for Observer Identification during Reading. Journal of Vision 2017;17(10):531. https://doi.org/10.1167/17.10.531.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

This work presents a novel technique for identifying unique individual readers based on an effective fusion scheme that combines fixation patterns with syntactic and semantic word relationships in a text. Previous eye-movement identification methods for reading used intricate eye-movement variables that were sensitive to various factors unrelated to reader identification (Holland & Komogortsev, 2011; Bayat & Pomplun, 2016). In contrast, the current technique was developed using only eye fixation variables (location and duration) that are interpolated in a vector representation of the words of a text. We use eye-movement data that were previously collected in our lab by Attar et al. (2016). In that experiment, forty participants read six easily readable passages on general topics (food, health, science, and history). The vector representations of words in all six passages were computed using the skip-gram model, which provides linear structure representations of words (Mikolov et al., 2013). This pre-trained Google News corpus word vector model consists of 3 million 300-dimensional English word vectors. Using this vector space model, each word was mapped to a 300-dimensional vector. Moreover, a 3-dimensional weight vector was derived for each word in a passage by evaluating the distance of the nearest fixation point to that word and its immediate neighbors. This weight vector was multiplied by the vector representations of the corresponding words. By averaging the resulting vectors, a 300-dimensional feature vector was derived for each passage read by each participant. By combining Logistic Regression and Multilayer Perceptron classifiers, we reached an overall accuracy of 96.84%, which is higher than the accuracies obtained by other eye-movement-based biometric methods.
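The feature construction described above can be sketched in code. This is only an illustrative reading of the abstract, not the authors' implementation: the inverse-distance weighting function, the function names, and the synthetic inputs are all assumptions. The abstract specifies only that each word receives a 3-dimensional weight vector from the distances of the nearest fixation to the word and its immediate neighbors, that these weights are multiplied by the corresponding word embeddings, and that the results are averaged into one 300-dimensional passage vector.

```python
import numpy as np

def fixation_weight(word_x, fixations):
    """Weight for one word: closer nearest fixation -> larger weight.

    Inverse-distance form is an assumption; the abstract only says the
    weight is evaluated from the distance of the nearest fixation.
    """
    d = np.min(np.abs(fixations - word_x))
    return 1.0 / (1.0 + d)

def passage_feature(word_vecs, word_xs, fixations):
    """One 300-dim feature vector per passage.

    word_vecs : (n_words, dim) word embeddings (e.g. word2vec)
    word_xs   : (n_words,) horizontal word-center positions
    fixations : (n_fix,) horizontal fixation positions
    """
    n = len(word_xs)
    weighted = []
    for i in range(n):
        # Up to 3 weights: the word and its immediate neighbors.
        idx = [j for j in (i - 1, i, i + 1) if 0 <= j < n]
        w = np.array([fixation_weight(word_xs[j], fixations) for j in idx])
        # Multiply weights by the corresponding word vectors and sum.
        weighted.append(w @ word_vecs[idx])
    # Average over all words -> one feature vector for the passage.
    return np.mean(weighted, axis=0)
```

A pre-trained embedding matrix (such as the 300-dimensional Google News word2vec vectors the abstract mentions) would be substituted for `word_vecs` in practice.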
The present finding suggests that an average vector representing interpolated eye fixation and semantic text information differs systematically across individuals, leading to high and consistent identification accuracy.
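The reported classifier combination (Logistic Regression plus Multilayer Perceptron) could be realized, for example, as a soft-voting ensemble. The scikit-learn setup below is a hypothetical sketch on synthetic data, not the authors' configuration: the hyperparameters, the voting scheme, and the toy reader data are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import VotingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the real data: 5 "readers", 20 passage
# feature vectors each, 300-dimensional (matching the word2vec size).
n_readers, n_passages, dim = 5, 20, 300
centers = rng.normal(size=(n_readers, dim))
X = np.vstack([c + 0.1 * rng.normal(size=(n_passages, dim)) for c in centers])
y = np.repeat(np.arange(n_readers), n_passages)

# Combine a logistic-regression model with a multilayer perceptron by
# averaging their predicted class probabilities (soft voting).
ensemble = VotingClassifier(
    estimators=[
        ("logistic", LogisticRegression(max_iter=1000)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(50,), max_iter=500,
                              random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X, y)
```

In a real evaluation, accuracy would be estimated with held-out passages per reader rather than on the training data.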

Meeting abstract presented at VSS 2017
