September 2015
Volume 15, Issue 12
Vision Sciences Society Annual Meeting Abstract  |   September 2015
Memory routines for the transformation of visuospatial representations
Author Affiliations
  • Benjamin Bernstein
    Northwestern University
  • Brandon Liverence
    Northwestern University
  • Steven Franconeri
    Northwestern University
Journal of Vision September 2015, Vol.15, 1291. doi:10.1167/15.12.1291
Abstract

Tasks like map-reading, perspective-taking, and mental simulation all involve transforming (e.g., rotating or reflecting) internal visuospatial representations to match current or imagined viewpoints. Here, we explored the memory routines that implement such transformations by quantifying their associated mental costs. Participants searched for targets within a 4 × 4 configuration of real-world objects (viewed one at a time through a central window), using keypresses to move between positions (thereby changing which object was visible inside the window). The configuration remained stable for 30 trials (enabling participants to acquire a robust memory representation) but then changed; participants then completed 10 additional trials. In E1, we contrasted the costs of transforming intrinsic (object-centered) versus global (world-centered) reference frames of visuospatial representations. There were three transformation types: 1) Intrinsic: objects’ orientations rotated 180° but positions remained unchanged; 2) Global: positions and orientations rotated 180° in sync; and 3) Both: locations rotated 180° but orientations stayed upright. RTs were slower after the transformation than before in both the Global (2.06 sec/trial) and Intrinsic (0.63 sec/trial) conditions, and these costs differed significantly. Notably, though Global (where objects’ positions changed and orientations were “upside-down”) was more visually distinct from the initial configuration than Both, Global was significantly less costly than Both (2.85 sec/trial), underscoring the contribution of intrinsic reference frames. These results suggest that visual scenes are redundantly encoded in terms of multiple spatial reference frames, and each must be transformed independently. E2 compared 90° and 180° rotations to horizontal and vertical mirror reflections. We observed graded rotation costs (180° worse than 90°), implicating a memory routine for long-term memory akin to mental rotation in short-term memory, and a trend toward larger costs for vertical versus horizontal reflections, possibly implicating two distinct reflection routines. These results help establish a taxonomy of memory routines and validate our novel paradigm as a powerful tool for studying the flexibility and durability of visuospatial memory representations.
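As an illustration only (not the authors' implementation), the three E1 transformation types can be sketched as operations on a grid of (position, orientation) pairs; the grid size, data representation, and function names below are assumptions for exposition:

```python
# Illustrative sketch of the three E1 transformation types, assuming a
# configuration represented as a dict mapping (row, col) -> orientation
# in degrees. Names and representation are hypothetical.

N = 4  # grid size, per the 4 x 4 configuration described in the abstract

def rotate_position_180(pos):
    """Rotate a grid cell 180 degrees about the grid center."""
    r, c = pos
    return (N - 1 - r, N - 1 - c)

def transform(config, kind):
    """Apply one of the abstract's transformation types.

    'intrinsic': orientations rotate 180 degrees; positions unchanged.
    'global':    positions and orientations rotate 180 degrees in sync.
    'both':      positions rotate 180 degrees; orientations stay upright.
    """
    out = {}
    for pos, ori in config.items():
        if kind == "intrinsic":
            out[pos] = (ori + 180) % 360
        elif kind == "global":
            out[rotate_position_180(pos)] = (ori + 180) % 360
        elif kind == "both":
            out[rotate_position_180(pos)] = ori
        else:
            raise ValueError(f"unknown transformation type: {kind}")
    return out
```

For example, under this sketch an upright object at the top-left cell (0, 0) moves to the bottom-right cell (3, 3) in the Global condition and is also turned upside-down, whereas in the Both condition it moves but remains upright.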

Meeting abstract presented at VSS 2015
