Abstract
Humans can keep track of visual objects in an environment across different reference frames. Information about an environment can thus accumulate over successive glances. However, it is largely unknown how such multiple views are integrated, for example, the front and back sides of a room that are never seen together. To examine the process of spatial integration, participants learned 2D arrays consisting of three objects across two screen presentations. In the first presentation they saw objects A and B; in the second presentation, objects B and C. During the test they were presented with object A and asked to report the location of object C. We manipulated the location and orientation of the array between presentations and test. Participants generally performed better when the array remained static throughout presentation and test than when it moved. Current spatial memory theories focus on rotational costs and are largely silent about translational costs. However, the present results suggest that a simple translation of environmental reference frames does involve costs and therefore should be addressed more explicitly. The results further showed that participants performed better when the orientation of the array at test matched its orientation in the first presentation rather than in the second presentation. This suggests that participants integrated novel spatial information with respect to the reference frame of the previously seen information. When relating views through an object present in both, the previously seen view seems to provide the stage for integration.
Meeting abstract presented at VSS 2013