December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract  |   December 2022
Navigational affordances are automatically computed during scene perception: Evidence from behavioral change blindness and a computational model of active attention
Author Affiliations & Notes
  • Mario Belledonne
    Yale University
  • Yihan Bao
    Yale University
  • Ilker Yildirim
    Yale University
  • Footnotes
    Acknowledgements: This project was funded by an AFOSR Young Investigator Program award to IY.
Journal of Vision December 2022, Vol.22, 4128. doi:

      Mario Belledonne, Yihan Bao, Ilker Yildirim; Navigational affordances are automatically computed during scene perception: Evidence from behavioral change blindness and a computational model of active attention. Journal of Vision 2022;22(14):4128.


      © ARVO (1962-2015); The Authors (2016-present)


Scene perception poses a major computational challenge: how does the mind, at a glance, selectively capture actionable content about our surroundings? Here, we argue that attention, via implicit or default tasks such as navigation, influences which regions of a scene are selectively processed. In a change detection task over realistic indoor scenes with no overt relationship to navigation, we measured how geometric changes that altered the shortest path to the exit affected performance, relative to trials with path-preserving changes. We controlled for low-level visual features by creating pairs of trials with identical starting conditions except for the exit location, such that the same geometric change altered pathing in only one instance. Despite having no cues for navigation, subjects nevertheless detected path-altering changes more readily. The current literature on scene perception cannot explain this phenomenon, as it comprises two disparate bodies of research: work in cognitive neuroscience exploring representational targets, and work on change blindness characterizing selective processing. Here, we present a unified account via an active attention architecture that guides perception to prioritize aspects of the world relevant to an observer's goals. For indoor scenes, this architecture uses hypothesis testing to quantify changes to the observer's goals (shortest paths to exits) resulting from perceptual updates to distinct spatial "trackers" that evenly divide scene geometry. The greater the impact of an update on pathing, the more computational resources and geometric level-of-detail are invested in that tracker. Indeed, for the behavioral experiment, estimated attention explains why some changes are more readily detected: increased detection rates follow from greater goal-driven perceptual processing of trackers containing path-modifying differences.
These results verify the presence and role of navigational affordances as an implicit goal driving scene perception.
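The tracker-based allocation described above can be caricatured in a short sketch. This is our own illustration under simplifying assumptions, not the authors' model: the grid-world scene, the function names (`shortest_path_len`, `tracker_impacts`), and the block-and-remeasure scoring are all hypothetical. The idea is that blocking a tracker's cells and remeasuring the shortest path to the exit gives a crude proxy for that tracker's navigational relevance, to which attentional resources could then be allocated in proportion.

```python
from collections import deque

def shortest_path_len(grid, start, goal):
    """BFS path length on a grid (0 = free, 1 = wall); None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        (r, c), d = queue.popleft()
        if (r, c) == goal:
            return d
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), d + 1))
    return None

def tracker_impacts(grid, start, exit_cell, trackers):
    """Score each tracker (a named group of cells) by how much blocking its
    cells lengthens the shortest path to the exit. In the sketch, compute and
    geometric level-of-detail would be allocated in proportion to this score."""
    base = shortest_path_len(grid, start, exit_cell)
    impacts = {}
    for name, cells in trackers.items():
        blocked = [row[:] for row in grid]
        for r, c in cells:
            blocked[r][c] = 1
        d = shortest_path_len(blocked, start, exit_cell)
        impacts[name] = float("inf") if d is None else d - base
    return impacts

# Toy floor plan: a wall down column 2 with two doorways. Only the doorway
# on the current shortest path carries navigational relevance.
grid = [
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],  # doorway at (1, 2): on the shortest path
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],  # doorway at (4, 2): off the shortest path
]
impacts = tracker_impacts(grid, start=(1, 0), exit_cell=(1, 4),
                          trackers={"near_door": [(1, 2)],
                                    "far_door": [(4, 2)]})
# blocking the near doorway forces a 6-step detour; the far one changes nothing
```

On this toy scene, the tracker covering the on-path doorway scores 6 (the detour cost) and the off-path doorway scores 0, mirroring the behavioral finding that path-modifying changes attract more processing than path-preserving ones.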

