Vision Sciences Society Annual Meeting Abstract | August 2012
Looking from different viewpoints: an eye movement study on novel object and face recognition.
Author Affiliations
  • Filipe Cristino
    Wolfson Centre for Clinical and Cognitive Neuroscience, School of Psychology, Bangor University
  • Candy Patterson
    Wolfson Centre for Clinical and Cognitive Neuroscience, School of Psychology, Bangor University
  • Charles Leek
    Wolfson Centre for Clinical and Cognitive Neuroscience, School of Psychology, Bangor University
Journal of Vision August 2012, Vol.12, 404. doi:https://doi.org/10.1167/12.9.404
Abstract

Eye movements have been widely studied, using images and videos in laboratories or portable eye trackers in the real world. Although a good understanding of the saccadic system and extensive models of gaze have been developed over the years, only a few studies have focused on the consistency of eye movements across viewpoints. We have developed a new technique to compute the depth of eye movements recorded with a traditional corneal-reflection eye tracker (SR EyeLink 1000) and map them onto stimuli rendered from 3D mesh objects. Mapping eye movements into 3D space (rather than image space) allowed us to compare fixations across viewpoints. Fixation sequences (scanpaths) were also compared across viewpoints using the ScanMatch method (Cristino et al., 2010, Behavior Research Methods, 42, 692-700), extended to work with 3D eye movements. In a set of experiments, we recorded participants' gaze while they performed a recognition task on either a set of 3D objects or faces. Participants viewed the stimuli either monocularly or stereoscopically as anaglyph images, and the stimuli were shown from different viewpoints during the learning and testing phases. A high degree of gaze consistency was found across the different viewpoints, particularly between the learning and testing phases. Scanpaths were also similar across viewpoints, suggesting that not only the fixated spatial locations but also their temporal order were preserved.
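The abstract does not describe the implementation, but the central step, recovering a 3D fixation location from 2D tracker coordinates, can be sketched as follows. The sketch assumes the stimuli were rendered with known camera matrices and that a per-pixel depth buffer was saved at render time; all names (unproject_fixation, depth_buffer, and so on) are illustrative and are not the authors' code.

    # Minimal sketch (Python/NumPy) of mapping a 2D fixation back onto a rendered
    # 3D object via the depth buffer and camera matrices, following the standard
    # gluUnProject-style inverse transform. Conventions (row order of the depth
    # buffer, OpenGL-style normalised depth in 0..1) are assumptions.
    import numpy as np

    def unproject_fixation(x_px, y_px, depth_buffer, modelview, projection, viewport):
        """Convert a screen-space fixation (pixels) to object-space coordinates.

        depth_buffer : 2D array of normalised depths (0..1), one value per pixel,
                       saved when the stimulus viewpoint was rendered.
        modelview, projection : 4x4 camera matrices used for that rendering.
        viewport : (x0, y0, width, height) of the rendered image.
        """
        x0, y0, w, h = viewport
        z = depth_buffer[int(y_px), int(x_px)]        # depth at the fixated pixel
        # Screen coordinates -> normalised device coordinates (-1..1).
        ndc = np.array([
            2.0 * (x_px - x0) / w - 1.0,
            2.0 * (y_px - y0) / h - 1.0,
            2.0 * z - 1.0,
            1.0,
        ])
        # Invert the full camera transform and de-homogenise.
        inv = np.linalg.inv(projection @ modelview)
        obj = inv @ ndc
        return obj[:3] / obj[3]                       # 3D point on the mesh surface

    def fixation_consistency(fix_learn, fix_test):
        """Mean object-space distance between matched fixations from two viewpoints.

        fix_learn, fix_test : (N, 3) arrays of unprojected fixations, expressed
        in the common object coordinate frame.
        """
        return np.mean(np.linalg.norm(fix_learn - fix_test, axis=1))

Because both sets of fixations end up in the same object coordinate frame, distances between them are meaningful even when the viewpoints differ. Temporal order can then be compared by labelling each 3D fixation with the mesh region it lands on and aligning the resulting label sequences, which is essentially what ScanMatch does using a Needleman-Wunsch alignment over region codes.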

Meeting abstract presented at VSS 2012
