Vision Sciences Society Annual Meeting Abstract  |  October 2020
Volume 20, Issue 11  |  Open Access
Eye movements reveal event understanding in visual narratives
Author Affiliations
  • Karissa B. Payne
    Kansas State University
  • Maverick E. Smith
    Kansas State University
  • John P. Hutson
    Georgia State University
  • Joseph P. Magliano
    Georgia State University
  • Lester C. Loschky
    Kansas State University
Journal of Vision October 2020, Vol.20, 1645. doi:https://doi.org/10.1167/jov.20.11.1645
© ARVO (1962-2015); The Authors (2016-present)
Abstract

What guides eye movements while viewing visual narratives? More specifically, do comprehension processes influence attentional selection when reading wordless picture stories? According to the Scene Perception & Event Comprehension Theory (SPECT), there are front-end processes, such as attentional selection, that occur during single eye fixations, and back-end processes, such as building an event model, that occur in working memory and long-term memory. Here, we investigated how attentional selection may be influenced by event models while people view visual narratives. Prior research has shown that as more situational changes occur in a visual narrative (e.g., changes in space, time, characters, goals, and sub-goals), viewers are more likely to perceive an event boundary (i.e., the beginning of a new event) (Magliano et al., 2011). Other research has shown that viewing times increase at event boundaries (Hard, Recchia, & Tversky, 2011; Smith, Newberry, & Bailey, 2019). In an eye-tracking study using the “Boy, Dog, Frog” picture stories, we replicated the finding that spatiotemporal and character changes produced greater event segmentation. We also replicated the finding that viewing time was longer at event boundaries. We then extended those findings to eye movements. We asked whether the longer viewing times at event boundaries, when more event indices changed, were due to longer fixations (i.e., increased processing load) or more fixations (i.e., more search for information). Longer viewing times were strongly associated with more fixations, not longer fixations, supporting the search hypothesis. Both viewing times and the number of fixations were significantly predicted by spatiotemporal changes, character changes, and the beginning of superordinate goals. Further analyses will assess whether these additional fixations are simply allocated to new and salient content in an image, or whether they are also specifically directed toward the content necessary to make inferences about the characters’ goals in the narrative.
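To make the viewing-time decomposition concrete, the sketch below shows one way such an analysis could be set up in Python with pandas and statsmodels. It is an illustrative assumption, not the authors' actual pipeline: the file name fixation_report.csv and the column names (subject, panel, fix_duration_ms, spatiotemporal_change, character_change, goal_onset) are hypothetical. The sketch splits each panel's total viewing time into the number of fixations versus their mean duration, then fits mixed-effects regressions predicting viewing time and fixation count from the situational-change indices.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per fixation: subject, story panel, fixation duration (ms), and
# indicator columns for whether the panel introduced each situational change.
# File and column names are hypothetical, for illustration only.
fixations = pd.read_csv("fixation_report.csv")

# Aggregate to one row per subject x panel: total viewing time, fixation
# count, and mean fixation duration.
panels = (
    fixations
    .groupby(["subject", "panel", "spatiotemporal_change",
              "character_change", "goal_onset"], as_index=False)
    .agg(viewing_time=("fix_duration_ms", "sum"),
         n_fixations=("fix_duration_ms", "size"),
         mean_fix_duration=("fix_duration_ms", "mean"))
)

# Search vs. processing-load check: is viewing time driven by how many
# fixations were made, or by how long each fixation lasted?
print("r(viewing time, n fixations):",
      panels["viewing_time"].corr(panels["n_fixations"]))
print("r(viewing time, mean fixation duration):",
      panels["viewing_time"].corr(panels["mean_fix_duration"]))

# Mixed-effects regressions: situational changes predicting viewing time and
# fixation count, with a random intercept for each subject.
for outcome in ["viewing_time", "n_fixations"]:
    model = smf.mixedlm(
        f"{outcome} ~ spatiotemporal_change + character_change + goal_onset",
        data=panels,
        groups=panels["subject"],
    ).fit()
    print(model.summary())
```

In this framing, a markedly stronger correlation of viewing time with fixation count than with mean fixation duration would correspond to the search interpretation described in the abstract, whereas the reverse pattern would point to increased per-fixation processing load.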
