Abstract
People can keep track of the visual direction of objects in their environment as they move, even when those objects are out of view. However, there are no detailed models of how this might be done. Generating a 3D model and keeping track of the observer's location within it is one possibility, but this would predict either no errors or a consistent pattern of errors dependent on the accuracy of the 3D model. Twenty observers viewed four coloured boxes arranged in two visual directions in a real room with maze-like walls, and viewed the same box layouts in a virtual replica. Observers viewed the boxes for as long as they needed, then walked to one of three pointing zones from where, using a tracked hand-held device, they 'shot' 32 times in a random order at the four boxes. The boxes were not visible at any time after the participant left the viewing zone. In some conditions, participants took a shortcut to the pointing zones. We found that pointing errors in this task varied over a wide range (mean bias, averaged over 72 trials and 20 observers per condition, varied by at least +/- 30 deg), but these biases were not random. They were highly correlated between the real and virtual conditions, although biases were larger in the virtual condition. In the maze and direct-walking conditions, biases were also highly correlated despite large differences across pointing zones. Data from an experiment in which pointing zones were located on two different sides of the target boxes ruled out a model based on a consistent mislocalisation of the target boxes. Instead, the data suggest a systematic error in the computation of pointing direction, such as a compression in the gain of the rate of updating visual direction.
Meeting abstract presented at VSS 2014