October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract
Understanding dynamic scenes: How driving can teach us about scene perception
Author Affiliations & Notes
  • Benjamin Wolfe
    Massachusetts Institute of Technology
  • Ruth Rosenholtz
    Massachusetts Institute of Technology
  • Footnotes
    Acknowledgements: This work was supported by the Toyota-CSAIL Joint Research Center at MIT.
Journal of Vision October 2020, Vol.20, 145. doi:https://doi.org/10.1167/jov.20.11.145
      Benjamin Wolfe, Ruth Rosenholtz; Understanding dynamic scenes: How driving can teach us about scene perception. Journal of Vision 2020;20(11):145. https://doi.org/10.1167/jov.20.11.145.

      © ARVO (1962-2015); The Authors (2016-present)

Scene perception in daily life requires understanding dynamic natural scenes as we interact with and move through them. Given that our environment continues to change while we process information from the previous moment, how can we keep up? To probe this perceptual question, we have used a driving paradigm, since it combines natural stimuli and navigational tasks with the need for rapid responses. We have previously shown that observers can report localized, task-relevant changes in a dynamic road scene using only peripheral vision, detect emergent hazards from video clips as short as 220 ms, and understand the environment well enough to evade a potential crash after watching only a 403 ms video. Here, we expand that research to examine more of the information-gathering process, asking where observers look, particularly under time pressure. To enable this, we extended the freely available Road Hazard Stimuli dataset to include spatial annotations of hazardous objects. These annotations allow us to determine when (and whether) observers first look at the hazardous objects in these dynamic scenes. In an experiment examining where observers look when returning their gaze to the road, we find that their first saccade is rarely to the hazardous object, yet they show a benefit (in duration threshold) from making this saccade even when it is poorly targeted relative to the hazard (mean duration threshold: 594 ms when saccading vs. 754 ms for peripheral-only viewing, n=6). These results inform not only our understanding of eye movements in dynamic natural scenes, but also models of how observers gather information across the field of view. Scene perception in dynamic scenes involves more than where the observer looks; improving information acquisition by better leveraging peripheral vision facilitates scene understanding.

