Abstract
Observers make large, systematic errors when they point to an unseen target (VSS 2014). Here we report modelling of these errors. In the experiment (reported at VSS 2014), participants in a real or virtual environment viewed a set of 4 target boxes from one location and then walked behind a set of screens that obscured the targets from view. They were then told which 'pointing zone' to go to, and from there they pointed to the remembered locations of the targets. Participants' head and hand were tracked throughout, allowing actual pointing directions to be compared with the true directions of the targets. Pointing errors depended not only on the layout of the targets and the pointing zone but also on the orientation of the obscuring screen. Models based only on a distorted initial representation of the target layout, or on cumulative errors in walking (odometry), cannot account for this effect of screen orientation. The most successful model allowed the layout of the boxes to vary freely under an affine distortion in a screen-dependent coordinate frame (6 free parameters). This model shifted all the target boxes to lie almost in a single plane, parallel to and significantly behind the obscuring screen. The same model accounted well for other data gathered with four different screen orientations. Any model that assumes the configuration of the target boxes is planar predicts that the left-right ordering of pointing responses to different boxes (e.g. the yellow box is pointed at to the left of the blue box) will remain constant regardless of pointing zone; this is a characteristic error that participants make. Current theories based on a particular, fixed distortion of visual space cannot explain these pointing data: the heuristics participants use differ systematically and significantly from 3D geometric calculations, even calculations carried out on a distorted representation of visual space.
Meeting abstract presented at VSS 2015
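The abstract does not specify the exact parameterization of the screen-dependent affine model, so the following is only a minimal illustrative sketch of one way such a model could be fitted. The 6-parameter form (per-axis scale plus translation in a screen-aligned frame), the helper names (screen_frame, distort, pointing_error), and the example data are all assumptions for illustration, not the authors' actual implementation.

import numpy as np
from scipy.optimize import minimize

def screen_frame(normal):
    # Orthonormal basis (rows) whose z-axis is the screen normal.
    # Assumes a roughly vertical screen (normal not parallel to world up).
    z = normal / np.linalg.norm(normal)
    x = np.cross([0.0, 1.0, 0.0], z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z])

def distort(points, params, R):
    # One assumed 6-parameter affine map: per-axis scale s and
    # translation t, applied in the screen-aligned frame R.
    s, t = params[:3], params[3:]
    local = points @ R.T          # world -> screen-aligned coordinates
    return (local * s + t) @ R    # distort, then map back to world

def pointing_error(params, points, zones, observed, R):
    # Mean angular error between predicted and observed pointing
    # directions, averaged over pointing zones.
    pred = distort(points, params, R)
    total = 0.0
    for zone, obs in zip(zones, observed):
        d = pred - zone
        d = d / np.linalg.norm(d, axis=1, keepdims=True)
        cos = np.clip(np.sum(d * obs, axis=1), -1.0, 1.0)
        total += np.mean(np.arccos(cos))
    return total / len(zones)

# Hypothetical example: 4 target boxes, 2 pointing zones, screen normal
# along world z. The "observed" directions here are error-free, so the
# fit should stay near the identity transform.
targets = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 3.0],
                    [-1.0, 0.0, 2.5], [0.5, 0.0, 4.0]])
zones = [np.array([0.0, 0.0, -1.0]), np.array([2.0, 0.0, -1.0])]
R = screen_frame(np.array([0.0, 0.0, 1.0]))
observed = [(targets - z) / np.linalg.norm(targets - z, axis=1, keepdims=True)
            for z in zones]
fit = minimize(pointing_error, np.r_[1.0, 1.0, 1.0, 0.0, 0.0, 0.0],
               args=(targets, zones, observed, R), method="Nelder-Mead")
print(fit.x)  # fitted scale and translation parameters

Under this parameterization, the reported collapse of the targets toward a single plane behind the screen would correspond to a fitted depth scale near zero combined with a large depth translation, though the actual parameter values are a matter for the original data.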