Abstract
The presence of an allocentric landmark can have both explicit (instruction-dependent) and implicit influences on reaching performance (Byrne and Crawford, 2010; Chen et al., 2011; Klinghammer et al., 2015, 2017). However, it is not known how the instruction itself (to rely on either egocentric or allocentric cues) influences memory-guided reaching.
Here, 13 participants performed a task with two instruction conditions (egocentric vs. allocentric) but with similar sensory and motor conditions. In both conditions, participants fixated gaze near the centre of a display aligned with the right shoulder, and an LED target briefly appeared (alongside a visual landmark) in one visual field. After a mask/memory delay period, the landmark reappeared in either the same or the opposite visual field. In the allocentric condition, participants were instructed to remember the initial location of the target relative to the landmark and to reach relative to the shifted landmark. In the egocentric condition, participants were instructed to ignore the landmark and point toward the remembered location of the target. To equalize motor aspects across conditions (when the landmark shifted to the opposite side), participants were instructed on 50% of the egocentric trials to anti-point, i.e., to point opposite the remembered target location.
When the landmark stayed within the same visual field, the allocentric instruction yielded significantly more accurate pointing than the egocentric instruction, despite identical visual and motor conditions. Likewise, when the landmark shifted to the opposite side, pointing was significantly better following the allocentric instruction (compared to motor-matched anti-reaches). This was true regardless of whether the data were plotted in allocentric (target-relative-to-landmark) or egocentric (target-relative-to-gaze) coordinates.
These results show that in the presence of a visual landmark, memory-guided pointing improves when participants are explicitly instructed to point relative to the landmark. This suggests that explicit attention to a visual landmark better recruits allocentric coding mechanisms that can augment implicit egocentric visuomotor transformations.