Abstract
To process spatial information effectively in a natural environment, humans typically combine visual, proprioceptive, vestibular, and temporal cues. We assessed the relative contributions of visual and proprioceptive/efferent information to distance reproduction using virtual reality. Subjects (Ss) were instructed to move forward down a straight virtual hallway by pedaling a stationary bike at a constant speed (with minimal vestibular input), while simultaneously receiving optic flow information through a head-mounted display. The virtual environment consisted of an empty, seemingly infinite hallway mapped with a random surface texture. Each trial consisted of two distances: a stimulus distance, which varied in length from trial to trial, and a response distance. Ss were required to respond by reproducing the magnitude of the stimulus distance. In one condition, the relation between visual and non-visual information remained congruent; here, Ss reproduced distance with reasonable accuracy. In a second condition, a visual-proprioceptive incongruence was created in software by varying the optic flow gain (OFG) between the two distances within a trial: while the OFG of one distance (either stimulus or response) was held constant, the OFG of the other was varied among three values. Because of this OFG variation, exclusive reliance on vision and exclusive reliance on proprioception would predict different responses. When the OFG took three different magnitudes, three separate response functions were observed. The separation between these response functions suggested that visual and proprioceptive cues contributed about equally to the final estimate. These results for distance reproduction are comparable to our psychophysical results for distance ratio estimation and distance discrimination reported elsewhere (VSS 2003; EBR 2004).
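A minimal sketch of the predictions underlying the OFG manipulation (the notation here is ours, introduced for illustration and not part of the original design): let $g_s$ and $g_r$ denote the optic flow gains applied during the stimulus and response distances, and let $d_s$ and $d_r$ denote the corresponding pedaled distances. Under exclusive reliance on proprioception/efference, exclusive reliance on vision, or a simple weighted combination with visual weight $w$, the predicted responses would be approximately
\[
d_r^{\mathrm{prop}} = d_s, \qquad
d_r^{\mathrm{vis}} = \frac{g_s}{g_r}\, d_s, \qquad
d_r^{\mathrm{comb}} \approx w\,\frac{g_s}{g_r}\, d_s + (1 - w)\, d_s.
\]
On this reading, a roughly equal separation of the three observed response functions would correspond to $w \approx 0.5$, i.e., approximately equal weighting of the two cues.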
Supported by an NSERC and a CFI grant to HJS.