Abstract
Tasks like map-reading, perspective-taking, and mental simulation all involve transforming (e.g., rotating or reflecting) internal visuospatial representations to match current or imagined viewpoints. Here, we explored the memory routines that implement such transformations by quantifying their associated mental costs. Participants searched for targets within a 4×4 configuration of real-world objects (viewed one at a time through a central window), using keypresses to move between positions (thereby changing which object was visible inside the window). The configuration remained stable for 30 trials (enabling participants to acquire a robust memory representation) but then changed; participants then completed 10 additional trials. In E1, we contrasted the costs of transforming intrinsic (object-centered) versus global (world-centered) reference frames of visuospatial representations. There were 3 transformation types: 1) Intrinsic: objects’ orientations rotated 180° but positions remained unchanged; 2) Global: positions and orientations rotated 180° in sync; and 3) Both: positions rotated 180° but orientations stayed upright. We observed slower RTs after transformation versus before for Global (2.06 sec/trial) and Intrinsic (0.63 sec/trial), which differed significantly. Notably, though Global (where objects’ positions changed and orientations were “upside-down”) was more visually distinct from the initial configuration than Both, Global was significantly less costly than Both (2.85 sec/trial), underscoring the contribution of intrinsic reference frames. These results suggest that visual scenes are redundantly encoded in terms of multiple spatial reference frames, and each must be transformed independently. E2 compared 90° and 180° rotations to horizontal and vertical mirror reflections.
We observed graded rotation costs (180° worse than 90°), implicating a memory routine for long-term memory akin to mental rotation in short-term memory, and a trend toward larger costs for vertical versus horizontal reflections, possibly implicating two distinct reflection routines. These results help establish a taxonomy of memory routines and validate our novel paradigm as a powerful tool for studying the flexibility and durability of visuospatial memory representations.
Meeting abstract presented at VSS 2015