Abstract
This work presents a novel technique to identify individual readers based on an effective fusion scheme that combines fixation patterns with syntactic and semantic word relationships in a text. Previous eye-movement identification methods for reading relied on intricate eye-movement variables that were sensitive to various factors unrelated to reader identity (Holland & Komogortsev, 2011; Bayat & Pomplun, 2016). In contrast, the current technique is based on only eye fixation variables (location and duration), which are interpolated into a vector representation of the words of a text. We use eye-movement data previously collected in our lab by Attar et al. (2016). In that experiment, forty participants read six easily readable passages on general topics (food, health, science, and history). The vector representations of the words in all six passages were computed using the skip-gram model, which provides linear-structure representations of words (Mikolov et al., 2013). This pre-trained Google News corpus word-vector model consists of 3 million 300-dimensional English word vectors. Using this vector space model, each word was mapped to a 300-dimensional vector. Moreover, a 3-dimensional weight vector was computed for each word in a passage by evaluating the distance of the nearest fixation point to that word and its immediate neighbors. This weight vector was multiplied by the vector representations of the corresponding words. By averaging the resulting vectors, a 300-dimensional feature vector was derived for each passage and each participant. By combining Logistic Regression and Multilayer Perceptron as our classification algorithms, we reached an overall accuracy of 96.84%, which is higher than the accuracies obtained by other eye-movement-based biometric methods.
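The feature-extraction pipeline described above can be sketched in code. The abstract does not specify the exact form of the 3-dimensional weight vector, so the sketch below makes two labeled assumptions: the weight is scaled by fixation duration and decays exponentially with the word-to-fixation distance (scale parameter `sigma` is hypothetical), and the same weight is spread over a word and its two immediate neighbors. Random vectors stand in for the pre-trained 300-dimensional Google News word2vec embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the pre-trained Google News skip-gram model: in the actual
# study each word maps to a 300-dimensional word2vec vector.
DIM = 300
vocab = {w: rng.standard_normal(DIM)
         for w in "the quick brown fox jumps over a lazy dog".split()}

def passage_feature(words, word_centers, fixations, sigma=50.0):
    """Sketch of the fusion scheme: fixation-derived weights are applied to
    the word vectors of each word and its immediate neighbors, and the
    weighted vectors are averaged into one 300-d passage feature.

    words        : list of word strings in the passage
    word_centers : (n_words, 2) array of word center coordinates (pixels)
    fixations    : (n_fix, 3) array of fixations as (x, y, duration)
    sigma        : assumed spatial scale for distance-to-weight conversion
    """
    n = len(words)
    weighted = np.zeros((n, DIM))
    for i in range(n):
        # Distance from this word to every fixation; keep the nearest one.
        d = np.linalg.norm(fixations[:, :2] - word_centers[i], axis=1)
        j = int(np.argmin(d))
        # Assumed weighting: duration-scaled, decaying with distance.
        w = fixations[j, 2] * np.exp(-d[j] / sigma)
        # Spread the weight over the word and its two immediate neighbors.
        for nb in (i - 1, i, i + 1):
            if 0 <= nb < n:
                weighted[nb] += w * vocab[words[nb]]
    # Averaging over words yields the passage-level feature vector.
    return weighted.mean(axis=0)

# Toy usage: four words on one line, two fixations (x, y, duration in ms).
words = "the quick brown fox".split()
centers = np.array([[20.0, 10.0], [70.0, 10.0], [130.0, 10.0], [180.0, 10.0]])
fix = np.array([[25.0, 12.0, 200.0], [135.0, 9.0, 180.0]])
feat = passage_feature(words, centers, fix)
```

One such 300-dimensional vector per passage and participant would then serve as input to the classifiers; the weighting function shown here is an illustrative assumption, not the authors' published formulation.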
The present finding suggests that an average vector representing interpolated eye fixation and semantic text information differs systematically across individuals, leading to high and consistent identification accuracy.
Meeting abstract presented at VSS 2017