Abstract
Goal-directed reaching movements rely on both egocentric and allocentric target representations. So far, it remains largely unclear which factors determine the use of objects as allocentric cues for reaching. In a series of experiments, we asked participants to encode object arrangements in a naturalistic visual scene presented either on a computer screen or in a virtual 3D environment. After a brief delay, a test scene reappeared with one object missing (= reach target) and other objects systematically shifted horizontally or in depth. After the test scene vanished, participants had to reach toward the remembered location of the missing target on a grey screen. On the basis of reaching errors, we quantified to what extent object shifts, and thus allocentric cues, influenced reaching behavior. We found that reaching errors systematically varied with horizontal object displacements, but only when the shifted objects were task-relevant, i.e., when the shifted objects served as potential reach targets. This effect increased with the number of objects shifted in the scene and was more pronounced when the object shifts were spatially coherent. The results were similar for 2D and virtual 3D scenes. However, object shifts in depth led to a weaker and more variable influence on reaching, which depended on the spatial distance of the reach target from the observer. Our results demonstrate that task relevance and scene coherence are important, interacting factors that determine the integration of allocentric information for goal-directed reaching movements in naturalistic visual scenes.
Meeting abstract presented at VSS 2016