Vision Sciences Society Annual Meeting Abstract | September 2011
A new method for comparing scanpaths based on vectors and dimensions
Author Affiliations
  • Richard Dewhurst
    Humanities Laboratory, Lund University, Lund, Sweden
  • Halszka Jarodzka
    Centre for Learning Sciences and Technologies, Netherlands Open University, Heerlen, The Netherlands
  • Kenneth Holmqvist
    Humanities Laboratory, Lund University, Lund, Sweden
  • Tom Foulsham
    Brain and Attention Research Laboratory, University of British Columbia, Vancouver, British Columbia, Canada
  • Marcus Nyström
    Humanities Laboratory, Lund University, Lund, Sweden
Journal of Vision September 2011, Vol.11, 502. doi:https://doi.org/10.1167/11.11.502
Abstract

We make different sequences of eye movements – or scanpaths – depending on what we are viewing and the task we are carrying out (e.g., Land, Mennie, & Rusted, 1999). In recent years, research efforts have been very informative in identifying commonalities between scanpath pairs, allowing us to quantify, for example, the similarity in eye movement behaviour between experts and novices (Underwood, Humphrey, & Foulsham, 2008), or between encoding and recognition of the same image (Foulsham & Underwood, 2008). However, common methods for comparing scanpaths (e.g., ‘string-edit’, based on Levenshtein, 1966, or ‘position measures’, see Mannan, Ruddock, & Wooding, 1995) fail to capture both the spatial and temporal aspects of scanpaths. Even the newest techniques (e.g., ‘ScanMatch’, Cristino, Mathôt, Theeuwes, & Gilchrist, 2010) are restricted by their reliance on dividing space into Areas of Interest (AOIs), which limits the spatial resolution of the resulting similarity metric. Here we validate a new algorithm for comparing scanpaths (Jarodzka, Holmqvist, & Nyström, 2010) with eye movement data from human observers. Instead of quantizing space into AOIs, our method represents scanpaths as geometrical vectors, which retain temporal order and spatial position. Scanpaths are then compared across several dimensions – shape, position, length, direction, and duration – and a similarity value is returned for each. Using this multidimensional approach, our data from two experiments highlight aspects of scanpath similarity that cannot otherwise be quantified: for instance, when scanpaths are clearly similar but spatially downscaled. Moreover, we show how scanpath similarity changes with task, comparing our algorithm against the most popular alternatives. These data demonstrate that our vector-based, multidimensional approach to scanpath comparison compares favourably with the alternatives, and should encourage a shift away from methods rooted in the Levenshtein principle or spatial position alone.
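
To illustrate the vector-based idea described in the abstract, the following Python/NumPy sketch represents each scanpath as a sequence of saccade vectors and returns one similarity value per dimension (shape, length, direction, position, duration). It is a minimal sketch only: the function names, the pairing of saccades in temporal order, and the normalisation constants are illustrative choices rather than the published algorithm, which additionally simplifies and aligns the two scanpaths before comparison – a step omitted here.

import numpy as np

def saccade_vectors(fixations):
    """Represent a scanpath as saccade vectors between consecutive fixations.

    `fixations` is a sequence of (x, y, duration) triples; the result is an
    (n-1, 2) array of (dx, dy) vectors that preserves temporal order.
    """
    xy = np.asarray(fixations, dtype=float)[:, :2]
    return np.diff(xy, axis=0)

def compare_scanpaths(fix_a, fix_b, screen_diag):
    """Toy dimension-wise comparison of two scanpaths.

    Saccades are paired in temporal order (no alignment step); each dimension
    yields a dissimilarity that is normalised to [0, 1] and then converted to
    a similarity. Normalisation constants are illustrative only.
    """
    va, vb = saccade_vectors(fix_a), saccade_vectors(fix_b)
    n = min(len(va), len(vb))                 # crude pairing of saccades
    va, vb = va[:n], vb[:n]

    # Shape: magnitude of the vector difference between paired saccades.
    shape = np.linalg.norm(va - vb, axis=1).mean() / (2 * screen_diag)

    # Length: difference in saccade amplitudes.
    length = np.abs(np.linalg.norm(va, axis=1)
                    - np.linalg.norm(vb, axis=1)).mean() / screen_diag

    # Direction: angular difference between paired saccades, wrapped to [-pi, pi].
    ang_a = np.arctan2(va[:, 1], va[:, 0])
    ang_b = np.arctan2(vb[:, 1], vb[:, 0])
    direction = np.abs(np.angle(np.exp(1j * (ang_a - ang_b)))).mean() / np.pi

    # Position: distance between paired fixation locations.
    pa = np.asarray(fix_a, dtype=float)[:n + 1, :2]
    pb = np.asarray(fix_b, dtype=float)[:n + 1, :2]
    position = np.linalg.norm(pa - pb, axis=1).mean() / screen_diag

    # Duration: relative difference in paired fixation durations.
    da = np.asarray(fix_a, dtype=float)[:n + 1, 2]
    db = np.asarray(fix_b, dtype=float)[:n + 1, 2]
    duration = (np.abs(da - db) / np.maximum(da, db)).mean()

    dissimilarity = {"shape": shape, "length": length, "direction": direction,
                     "position": position, "duration": duration}
    return {dim: 1.0 - d for dim, d in dissimilarity.items()}

# Example: two short scanpaths (x, y in pixels; fixation duration in ms)
# on a hypothetical 1024 x 768 display.
a = [(100, 100, 200), (400, 120, 250), (420, 400, 180)]
b = [(110, 110, 210), (390, 130, 240), (430, 380, 200)]
print(compare_scanpaths(a, b, screen_diag=np.hypot(1024, 768)))

Because each dimension is reported separately, a pair of scanpaths can, for example, score high on shape and direction but low on position – the kind of spatially downscaled similarity that single-valued, AOI-based metrics cannot express.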
