September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2024
Predictive Looking and Predictive Looking Errors in Everyday Activities
Author Affiliations
  • Sophie Su
    Washington University in Saint Louis
  • Matthew Bezdek
    Elder Research
  • Tan Nguyen
    Washington University in Saint Louis
  • Christopher Hall
    University of Virginia
  • Jeff Zacks
    Washington University in Saint Louis
Journal of Vision September 2024, Vol. 24, 687. doi: https://doi.org/10.1167/jov.24.10.687
Abstract

Where people look in pictures and movies depends not only on the most salient point in the current scene, but also on predictions of what is going to happen next. The accuracy of these predictions fluctuates during movie watching. Some theories of event comprehension propose that spikes in prediction error can trigger working memory updating and the segmentation of ongoing experience into meaningful events. One previous study of predictive looking found evidence for this proposal (Eisenberg et al., 2018, CR:PI), but the paradigm used in that study could obtain predictions only intermittently, because it analyzed predictive looking to objects that an actor was about to contact. Here, we developed a continuous measure of prediction error by modeling predictive looking toward the actor’s hands, operationalizing prediction error as the residuals from the predictive looking model. Viewers’ gaze was tracked while they watched movies of everyday activities, and mixed-effects models were used to predict the actor’s hand positions from viewers’ previous gaze locations. Stepwise model comparison indicated that viewers look predictively: current gaze position accounted for hand location as far as 9 seconds into the future. We compared the time course of gaze predictions with that of predictions generated by a computational model of event comprehension and found that gaze predictions showed higher error at moments when the computational model did. Furthermore, spikes in gaze prediction error predicted increases in event segmentation in a separate group of viewers. These results support proposals that event segmentation is driven by spikes in prediction error, and the method promises a general, noninvasive approach to measuring ongoing prediction error.
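A minimal sketch of how such a lagged mixed-effects analysis might look, assuming time-aligned gaze and hand-position series. The column names, sampling rate, lag grid, and synthetic data below are illustrative assumptions, not the authors’ actual pipeline:

```python
# Illustrative sketch only: synthetic data stands in for real eye-tracking
# and hand-tracking series; all names and parameters are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_frames, hz = 10, 600, 30  # 20 s per subject at 30 Hz

# Synthetic long-format data: one row per subject per frame
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_frames),
    "gaze_x": rng.normal(size=n_subjects * n_frames),
})
# Make the hand trail gaze by ~3 s, so gaze predicts future hand position
lead = df.groupby("subject")["gaze_x"].shift(3 * hz)
df["hand_x"] = 0.6 * lead.fillna(0.0) + rng.normal(scale=0.5, size=len(df))

def fit_lagged_model(data, lag_s, hz=30):
    """Predict hand position lag_s seconds ahead from current gaze."""
    lag = int(lag_s * hz)
    d = data.copy()
    # Align gaze at time t with hand position at time t + lag
    d["future_hand_x"] = d.groupby("subject")["hand_x"].shift(-lag)
    d = d.dropna(subset=["future_hand_x"])
    # Random intercept per viewer; current gaze as fixed-effect predictor
    return smf.mixedlm("future_hand_x ~ gaze_x", d, groups=d["subject"]).fit()

# Stepwise comparison across lags: does current gaze still account
# for hand location several seconds into the future?
for lag_s in (1, 3, 5, 7, 9):
    res = fit_lagged_model(df, lag_s)
    print(f"lag {lag_s}s: b = {res.params['gaze_x']:.3f}, "
          f"p = {res.pvalues['gaze_x']:.3g}")

# Moment-by-moment prediction error = absolute residuals of the model;
# spikes in this series are candidate event boundaries
prediction_error = fit_lagged_model(df, 3).resid.abs()
```

In practice the model would presumably use both gaze coordinates and richer random-effects structure; the point of the sketch is that the residual series gives a continuous prediction-error signal whose spikes can be compared against segmentation judgments from a separate group of viewers.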
