December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Memory-based predictions across head-turns in naturalistic scene perception
Author Affiliations
  • Anna Mynick
    Dartmouth College
  • Allie Burrows
    Dartmouth College
  • Brenda D. Garcia
    Dartmouth College
  • Thomas L. Botch
    Dartmouth College
  • Adam Steel
    Dartmouth College
  • Caroline E. Robertson
    Dartmouth College
Journal of Vision December 2022, Vol. 22, 4093.
      Anna Mynick, Allie Burrows, Brenda D. Garcia, Thomas L. Botch, Adam Steel, Caroline E. Robertson; Memory-based predictions across head-turns in naturalistic scene perception. Journal of Vision 2022;22(14):4093.

      © ARVO (1962-2015); The Authors (2016-present)

In familiar environments, memory-based predictions of upcoming, out-of-sight scene views are thought to supplement perceptual experience. Yet how the visual system implements memory-guided predictions is not well understood. Using head-mounted virtual reality (VR), we tested whether memory for a 360º environment facilitates rapid perceptual judgements in that environment across head-turns. In Experiment 1 (N=21), subjects learned 18 real-world panoramas in VR. On each trial of the Priming Test, subjects turned left or right toward a snapshot from a learned panorama (Target Image; 110º width) and made a perceptual (open/closed) judgement. Before target onset, a brief (300 ms) Prime Image appeared directly ahead, depicting either: 1) an adjacent snapshot from the same panorama (Same-scene prime), 2) a snapshot from a different panorama (Different-scene prime), or 3) a blank snapshot (Neutral prime). Compared to Neutral, Same-scene primes speeded reaction times (RTs) and Different-scene primes slowed RTs (p<.001). Thus, a single scene view primes perceptual judgements of adjacent views in 360º space. Experiment 2 (N=21) examined whether priming respects a scene's spatial structure. In the Spatially Incongruent condition, the prime and target were drawn from the same scene, but targets appeared 180º opposite their true location. Spatial incongruence slowed RTs (Spatially Congruent < Neutral < Spatially Incongruent, p<.001), indicating that primed content is yoked to the 360º structure of a scene. Experiment 3 (N=17) examined whether priming is skewed in the direction of planned head-turns. On each trial, before prime onset, an arrow indicated the direction in which to plan a head-turn. Arrows either predicted the target's location correctly (Valid) or incorrectly (Invalid). We found an interaction between prime (Same/Neutral) and arrow (Valid/Invalid) conditions (p<.05), with stronger priming in the direction of intended head-turns.
Together, these results suggest that reinstatement of unseen, adjacent views may facilitate ongoing perception in naturalistic contexts.

