Abstract
We investigated the ability of monocular human observers to scale absolute distance under sagittal (in-depth) head motion. The computer-generated stimuli represented spheres of 2 apparent sizes (6 or 15 degrees), covered with randomly distributed dots. They were presented to the subjects at eye level, at distances ranging from 30 cm to 238 cm. In the self-motion (SM) condition, the stimuli simulated a stationary sphere in a virtual 3D space while subjects moved their head along the sagittal axis with an amplitude of 10 cm (frequency: 0.3 Hz). This head motion was recorded and later applied to the sphere viewed by a stationary subject in the object-motion (OM) condition. Subjects had to indicate the position of the sphere among 4 distance intervals. The average psychometric curves of 4 subjects indicate that distance estimates in conditions OM and SM covaried strongly with stimulus distance and were accurate (median errors of 7.64 per cent and 13.09 per cent in conditions SM and OM, respectively). This was surprising because, in principle, subjects had no cue to absolute distance in condition OM. In a second experiment, we randomized the head-motion amplitude among 3 values (5 cm, 10 cm, and 15 cm). We found that responses covaried exclusively with image divergence in condition OM (R = −0.71, p < 0.05), and with distance in condition SM (R = 0.87, p < 0.05). Therefore, when amplitude is randomized in condition OM, subjects are unable to estimate distance. In contrast, in condition SM they are able to use information about their own sagittal motion to scale absolute distance.