December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract  |   December 2022
Spatial affordances can automatically trigger dynamic visual routines: Spontaneous path tracing in task-irrelevant mazes
Author Affiliations & Notes
  • Kimberly W. Wong
    Yale University
  • Brian Scholl
    Yale University
  • Footnotes
    Acknowledgements  This project was funded by ONR MURI #N00014-16-1-2007 awarded to BJS.
Journal of Vision December 2022, Vol.22, 3353. doi:
Kimberly W. Wong, Brian Scholl; Spatial affordances can automatically trigger dynamic visual routines: Spontaneous path tracing in task-irrelevant mazes. Journal of Vision 2022;22(14):3353.

© ARVO (1962-2015); The Authors (2016-present)

Visual processing usually seems both incidental and instantaneous. But imagine viewing a jumble of shoelaces, and wondering whether two particular tips are part of the same lace. You can answer this by looking, but doing so may require something dynamic happening in vision (as the lace is effectively ‘traced’). Such tasks are thought to involve ‘visual routines’: dynamic visual procedures that efficiently compute various properties on demand, such as whether two points lie on the same curve. Past work has suggested that visual routines are invoked by observers’ particular (conscious, voluntary) goals, but here we explore the possibility that some visual routines may also be automatically triggered by certain stimuli themselves. In short, we suggest that certain stimuli effectively *afford* the operation of particular visual routines (as in Gibsonian affordances). We explored this using stimuli that are familiar in everyday experience, yet relatively novel in human vision science: mazes. You might often solve mazes by drawing paths with a pencil — but even without a pencil, you might find yourself tracing along various paths *mentally*. Observers had to compare the visual properties of two probes that were presented along the paths of a maze. Critically, the maze itself was entirely task-irrelevant, but we predicted that simply *seeing* the visual structure of a maze in the first place would afford automatic mental path tracing. Observers were indeed slower to compare probes that were further from each other along the paths, even when controlling for lower-level visual properties (such as the probes’ brute linear separation, i.e. ignoring the maze ‘walls’). This novel combination of two prominent themes from our field — affordances and visual routines — suggests that at least some visual routines may operate in an automatic (fast, incidental, and stimulus-driven) fashion, as a part of basic visual processing itself.

