Vision Sciences Society Annual Meeting Abstract  |   August 2014
Eye fixations in video: Quantifying the effects of meaning and action on inter-observer convergence
Author Affiliations
  • Tom Foulsham
    Department of Psychology, University of Essex
  • Rachel Grenfell-Essam
    Department of Psychology, University of Essex
Journal of Vision August 2014, Vol.14, 759. doi:10.1167/14.10.759
      Tom Foulsham, Rachel Grenfell-Essam; Eye fixations in video: Quantifying the effects of meaning and action on inter-observer convergence. Journal of Vision 2014;14(10):759. doi: 10.1167/14.10.759.

Abstract

There has been significant recent interest in measuring neural and cognitive responses during dynamic scenes and video. In particular, it has been noted that the attention of different observers often synchronises, and that such moments can be identified when participants' eye fixations converge in space and time. The current studies aimed to establish whether convergence in fixation locations was associated with explicit self-reports of "important" moments. In two experiments, participants' eye movements were recorded while they watched a set of movie clips featuring a range of visual content. We developed a novel method for quantifying the inter-observer convergence, based on ROC analysis, and applied this method to find the moments when participants were most likely to be looking in the same place at the same time. These moments occurred more often than expected by chance, and were highest for clips involving action. Critically, there was a reliable correlation between inter-observer agreement and explicit self-reports of important moments, indicating that attention converged at meaningful times. Additional analysis of the visual and semantic content at fixation allows a data-driven approach to video, and provides a rich source of information for those investigating natural, active vision.

Meeting abstract presented at VSS 2014
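The abstract does not detail the ROC analysis used to quantify inter-observer convergence. A common approach of this kind is a leave-one-out AUC: for each frame, the fixations of all but one observer are smoothed into a density map, and that map is scored on how well it discriminates the left-out observer's fixation from random control locations. The sketch below is an illustrative assumption of such a scheme, not the authors' implementation; the function names, the Gaussian smoothing, and all parameter values (sigma, number of controls) are hypothetical.

```python
import numpy as np

def gaussian_map(fixations, shape, sigma=20.0):
    """Smoothed fixation-density map from (x, y) points.

    Hypothetical helper: the smoothing width `sigma` is an assumption,
    not a parameter reported in the abstract.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    density = np.zeros(shape, dtype=float)
    for x, y in fixations:
        density += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return density

def convergence_auc(fixations, shape, n_controls=100, seed=0):
    """Leave-one-out ROC measure of inter-observer convergence.

    `fixations` is a list of (x, y) tuples, one per observer, for a
    single video frame. Each observer's fixation is scored against a
    density map built from the remaining observers, versus random
    control pixels. Returns the mean AUC: 0.5 = chance, 1.0 = all
    observers looking in the same place.
    """
    rng = np.random.default_rng(seed)
    h, w = shape
    aucs = []
    for i, (fx, fy) in enumerate(fixations):
        others = [f for j, f in enumerate(fixations) if j != i]
        dmap = gaussian_map(others, shape)
        target = dmap[int(fy), int(fx)]
        controls = dmap[rng.integers(0, h, n_controls),
                        rng.integers(0, w, n_controls)]
        # AUC = P(target score > control score); ties count half.
        aucs.append(((target > controls).sum()
                     + 0.5 * (target == controls).sum()) / n_controls)
    return float(np.mean(aucs))
```

Applied per frame across a clip, this yields a convergence time course whose peaks can then be compared against observers' self-reported "important" moments, as in the correlation analysis the abstract describes.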
