Abstract
To successfully interact with objects, we must maintain stable representations of their locations in the world. However, their images on our retina may be displaced several times per second by large, rapid eye movements. Are we able to form a seamless world-centered (spatiotopic) representation of objects' locations across eye movements? Golomb & Kanwisher (2012, PNAS) found that memory for an object's location is more accurate in gaze-centered (retinotopic) than in world-centered (spatiotopic) coordinates, and that spatiotopic memory progressively deteriorates more than retinotopic memory with each eye movement. This suggests that the native coordinate system of visual memory is retinotopic, raising questions about how we effectively act on objects in the world. One possibility is that perception and action rely on different coordinates; that is, the intention to act on an object engages more ecologically relevant spatiotopic representations. Here, we investigated whether the intention to act on an object's location could improve memory for its spatiotopic location. Twelve participants were asked to remember a spatial location across a short delay, during which they completed a variable number of eye movements (0-2). Participants completed four versions of this task: they reported either the retinotopic or the spatiotopic location, either by using a mouse to click on the remembered location (as in Golomb & Kanwisher, 2012) or by reaching with their finger to touch the remembered location directly on a touchscreen. In the mouse task, we again found that spatiotopic errors were greater and accumulated faster than retinotopic errors. Critically, we found a similar pattern in the reaching task; if anything, spatiotopic errors were amplified. These results further support the hypothesis that spatial memory is natively retinotopic: even in cases where spatiotopic coordinates are particularly behaviorally relevant, participants are still more accurate at remembering locations in retinotopic coordinates.
Meeting abstract presented at VSS 2017