August 2016
Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2016
Panoramic Memory Shapes Visual Representations of Scenes
Author Affiliations
  • Caroline Robertson
    Harvard Society of Fellows, Harvard, Cambridge, MA
  • Katherine Hermann
    McGovern Institute for Brain Research, MIT, Cambridge, MA
  • Anna Mynick
    Wellesley College, Wellesley, MA
  • Dwight Kravitz
    George Washington University, Washington, DC
  • Nancy Kanwisher
    McGovern Institute for Brain Research, MIT, Cambridge, MA
Journal of Vision September 2016, Vol.16, 323.

As we navigate around our visual environment, our awareness of the place we are in seems to extend beyond the specific part of the environment that is currently in view to include a broader representation of the scene all around us. Here, we tested whether memory of a broad, panoramic space influences the ongoing representations of discrete views from within that panorama. Specifically, we introduced participants (N=21 behavioral study; N=12 fMRI study) to dynamic fragments of novel 360° spatial expanses, some of which contained overlapping visual content (Overlap Condition) and others of which did not (No-Overlap Condition) (Study Phase). Then, we tested whether discrete, non-overlapping snapshots from opposite edges of these spatial expanses become associated in memory, and acquire representational similarity in the brain, as a function of whether the visual information connecting them is known or unknown (Overlap vs. No-Overlap). On each trial of the fMRI study, participants were presented with single snapshots from the studied spatial expanses. Classification analyses showed that all independently-localized regions of the scene network (PPA, RSC, OPA) were able to discriminate individual scenes (all p < 0.001) and spatial layout (open/closed) (all p < 0.05). Importantly, representations in one region of the scene network were also sensitive to broader spatial knowledge: the RSC showed significantly greater representational similarity between pairs of snapshots that had appeared in the Overlap vs. the No-Overlap condition (p < 0.008). Behaviorally, memory for the association between two snapshots was higher if they were drawn from the Overlap vs. No-Overlap condition (all p < 0.02), as was spatiotopic position memory (p < 0.02).
Our results demonstrate that images of visual scenes evoke greater representational similarity in the brain when the visual information that unites them has been previously observed, suggesting that moment-to-moment visual representations are shaped by broader visuospatial representations in memory.
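The key fMRI measure above is representational similarity: correlating the multivoxel response pattern evoked by one snapshot with the pattern evoked by its partner snapshot, then comparing those pairwise correlations across the Overlap and No-Overlap conditions. The abstract does not specify the authors' analysis code; the following is a minimal illustrative sketch of that logic on simulated data, where the shared "panorama" component injected into Overlap pairs is a modeling assumption, not a description of the actual dataset.

```python
import numpy as np

def pattern_similarity(a, b):
    """Pearson correlation between two voxel response patterns."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(0)
n_voxels = 200  # assumed ROI size, for illustration only

def make_pair(shared_weight):
    """Simulate a snapshot pair; Overlap pairs share a common component."""
    shared = rng.normal(size=n_voxels)
    a = shared_weight * shared + rng.normal(size=n_voxels)
    b = shared_weight * shared + rng.normal(size=n_voxels)
    return a, b

overlap_sims = [pattern_similarity(*make_pair(1.0)) for _ in range(50)]
no_overlap_sims = [pattern_similarity(*make_pair(0.0)) for _ in range(50)]

# The Overlap-condition effect reported for RSC corresponds to this contrast:
print(np.mean(overlap_sims) > np.mean(no_overlap_sims))
```

In the real analysis this contrast would be computed on condition-matched snapshot pairs within each independently localized region (PPA, RSC, OPA) and tested across participants; the sketch only shows the direction of the predicted difference.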

Meeting abstract presented at VSS 2016

