Abstract
For self-generated motion parallax, a sense of head velocity is needed to estimate distance from object motion. This information can be obtained from both proprioceptive and visual sources. If visual and kinesthetic information are incongruent, the visual motion of objects will not match the sensed physical velocity of the head, resulting in a distortion of perceived distances. We assessed this prediction by varying the gain between physical head motion and the motion of the simulated viewpoint. Given that relative and absolute motion parallax would be greater than expected from head motion at gains above 1.0, we anticipated that this manipulation would make objects appear closer to the observer. Using a head-mounted display (HMD), we presented targets 1 to 3 meters from the observer within a cue-rich environment with textured walls and floors. Participants stood and swayed laterally at 0.5 Hz, paced by a metronome. Lateral gain was applied by amplifying their physical position by factors of 1.0 to 3.0 and using the result to set the instantaneous viewpoint within the virtual environment. After presentation, the target disappeared and the participant performed a blind walk to its remembered location and reached for it. Hand position was recorded, and positional errors were computed relative to the target. We found no effect of the motion parallax gain manipulation on binocular reaching accuracy. In a second study, we evaluated the role of stereopsis in counteracting the anticipated distortion of perceived space by testing observers monocularly. In this case, distances were perceived as nearer as gain increased, although the effects were relatively small. Taken together, our results suggest that observers are flexible in their interpretation of self-produced motion parallax during active head movement, which confers considerable tolerance of spatial perception to mismatches between physical and virtual motion in rich virtual environments.
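The geometry behind the stated prediction can be sketched as follows (an illustrative back-of-envelope derivation, not a model taken from the paper): for physical lateral head velocity $v$ and viewpoint gain $g$, a target rendered at distance $d$ near the line of sight undergoes virtual parallax at an angular rate of approximately

\[
\dot{\theta} \approx \frac{g\,v}{d}
\]

under a small-angle approximation. An observer who attributes this retinal motion entirely to the physical head velocity $v$ would infer a distance of

\[
\hat{d} \approx \frac{v}{\dot{\theta}} = \frac{d}{g},
\]

so for $g > 1$ the target should appear proportionally nearer; for example, a target at 3 m viewed with gain 3.0 would be predicted to appear at roughly 1 m if parallax alone determined perceived distance.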