Abstract
When reaching for an object, we make use of spatial representations that our brain constantly builds within a fraction of a second. Decades of research have established that we encode action targets not only relative to ourselves, egocentrically, but also relative to other objects in the scene, allocentrically. The vast majority of these experiments relied on static scenes in which participants memorized the locations of objects and were then asked to reach or point to a cued object location from memory. Little is known about whether these findings generalize to larger, dynamic environments. To test this, we created virtual reality experiments in which participants faced a throw-in situation, as in soccer. Their task was to memorize the landing position of a ball thrown by an avatar. The ball was thrown either closer to the avatar or closer to the participant. After the ball had landed, we removed elements of the scene that could serve as landmarks (i.e., the avatar and the midfield line) before they reappeared for a short time, either laterally shifted or not. Participants were then prompted to take a real ball, walk to the memorized landing position, and place it there. If participants used the landmarks, the reproduced landing position should be systematically biased in the direction of the landmark shift. We found that this was the case, in line with previous findings from classic reaching and pointing experiments. Moreover, the weighting of the two landmarks differed depending on whether the ball landed closer to the avatar or to the participant. Our findings suggest that the brain builds robust spatial representations that can be used across environments of different sizes, different response modes, and both static and dynamic target objects.