September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Mapping a scene from afar: Allocentric representation of locations in scene-space
Author Affiliations & Notes
  • Anna Shafer-Skelton
    University of Pennsylvania
  • Russell Epstein
    University of Pennsylvania
  • Footnotes
    Acknowledgements: This work was supported by an NIH-NEI grant awarded to RAE (R01-EY022350).
Journal of Vision September 2024, Vol.24, 421. doi:https://doi.org/10.1167/jov.24.10.421
Abstract

Spatial neuroscience has discovered a great deal about how animals—primarily rodents—encode allocentric (world-centered) cognitive maps. We hypothesized that humans might be able to form such maps from afar, through visual processing alone. Previous work in vision science has explored how we extract the overall shape of scenes from particular points of view, but little is known about how we form allocentric representations of discrete locations within a scene—a key feature of a cognitive map. We tested for such a representation in two behavioral experiments. In Exp. 1, N=30 participants viewed images of a 3D-rendered courtyard, taken from one of four possible viewpoints outside and slightly above the courtyard, spaced 90 degrees apart. On each trial, participants saw two courtyard images separated by a brief (500 ms) delay. Within each image was an indicator object (a car), in one of six possible allocentric locations; participants reported whether the indicator object was facing the same or a different allocentric direction in the two images. The task was designed to direct attention to the location of the indicator object within the allocentric framework of the courtyard without requiring explicit reporting of that location. We observed a significant performance benefit in across-viewpoint trials when the indicator object was in the same allocentric location in both images compared to when it was in different allocentric locations (BIS p=0.009; we also report d-prime: p=0.023, RT: p=0.062). In Exp. 2 (N=30), we replicated this same-location benefit when participants viewed a continuous stream of courtyard images and performed a 1-back task on the facing direction of the indicator object (BIS p=0.004; secondary measures d-prime: p=0.026, RT: p=0.023). These results provide evidence for an allocentric representation of within-scene locations—a critical ingredient of allocentric cognitive maps—formed via visual exploration, without traversing the space.
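
The abstract reports performance with BIS (presumably the balanced integration score, which combines accuracy and speed), d-prime, and RT. The Python sketch below is an illustration only, not the authors' analysis code: the trial counts and group values are hypothetical placeholders, and the BIS formulation (standardized accuracy minus standardized RT, following Liesefeld & Janczyk, 2019) is an assumption about which variant was used.

    # Minimal sketch of how the reported sensitivity measures could be computed;
    # all numbers below are hypothetical placeholders, not study data.
    import numpy as np
    from scipy.stats import norm, zscore

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).

        A log-linear correction keeps rates of 0 or 1 from producing
        infinite z-scores.
        """
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Hypothetical trial counts for one participant in one condition.
    print(d_prime(hits=40, misses=8, false_alarms=12, correct_rejections=36))

    # BIS (assumed here to be the balanced integration score): standardize
    # accuracy and mean RT across participants within a condition, then take
    # their difference so that higher values mean more accurate and faster.
    accuracy = np.array([0.81, 0.74, 0.90, 0.68])  # hypothetical proportion correct
    mean_rt = np.array([0.62, 0.70, 0.55, 0.74])   # hypothetical mean RT (s)
    bis = zscore(accuracy, ddof=1) - zscore(mean_rt, ddof=1)
    print(bis)
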
