Abstract
When interacting with our environment, we generally make use of egocentric and allocentric object information by coding objects relative to the observer or relative to the environment, respectively. Bayesian theories suggest that the brain integrates both sources of information optimally for perception and action. However, experimental evidence for egocentric and allocentric integration is sparse and, moreover, has only been obtained with abstract stimuli lacking ecological relevance. Here, we investigated the use of egocentric and allocentric information during reaching in natural scenes. Participants encoded a breakfast scene containing 6 objects on a table (local objects) and 3 objects in the environment (global objects). After a 2s delay, a visual test scene reappeared for 1s in which one local object was missing (the target) and either one, three, or five of the remaining local objects or one of the global objects was shifted to the left or to the right. The test scene was followed by a grey screen that signaled participants to reach to the target location as precisely as possible. When objects were shifted, we predicted no change in reaching endpoints if participants used egocentric object coding, and large endpoint shifts if allocentric information (local or global) dominated. We found that reaching movements were most affected by local allocentric shifts, with endpoint errors increasing with the number of shifted local objects. Allocentric weights ranged between 10% and 40% depending on the number of shifted local objects, but there was no consistent effect of global allocentric cues. We are currently testing whether and how reach trajectories are affected by spatial shifts of local and global objects in the scene. Our findings suggest that allocentric cues are indeed used during goal-directed reaching. Moreover, the integration of egocentric and allocentric object information seems to depend on the ecological relevance of the available allocentric cues.
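As a rough illustration of how such allocentric weights are commonly derived (the abstract does not specify the fitting procedure, and the symbols below are illustrative, not the authors'): the weight can be read off the slope relating the horizontal shift of the reach endpoint to the imposed horizontal shift of the allocentric cue(s),
\[
w_{\text{allo}} = \frac{\Delta_{\text{endpoint}}}{\Delta_{\text{object}}}, \qquad 0 \le w_{\text{allo}} \le 1,
\]
where \(w_{\text{allo}} = 0\) corresponds to purely egocentric coding (endpoints unaffected by the shift) and \(w_{\text{allo}} = 1\) to purely allocentric coding (endpoints shifted by the full object displacement); the reported values of 10% to 40% would thus reflect partial reliance on local allocentric cues.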
Meeting abstract presented at VSS 2014