Frank H. Durgin, Melissa J. Kearns; The calibration of optic flow produced by walking: The environment matters. Journal of Vision 2002;2(7):429. doi: https://doi.org/10.1167/2.7.429.
The question of how perceptions differ between VR and the real world has far-reaching implications for the interpretation of perceptual studies conducted in VR. Here we investigated how accurately optic flow and walking speed could be matched in VR. Banton et al. (2000) reported that optic flow had to be increased by about 50% in a VR HMD to seem normal to subjects walking on a treadmill. The present study sought to test whether this apparently poor calibration was really due to VR itself, to oddities of treadmill locomotion, or to the specific visual environment (VE) used.
On each trial, subjects (22 Swarthmore students without prior VR experience) walked 5–7 m through one of two VEs and made a forced-choice (FC) decision about the speed of the VE relative to their walking speed. Both VEs were richly textured corridors, but one of them also had randomly positioned textured pillars (providing inter-object parallax and salient cues about time-to-passage). Half the subjects walked on solid ground (head position monitored by an optical wide-area tracker); the other half walked on a treadmill at a normal speed. The gain of optic flow was altered only along the axis of the corridors (to preserve frontal motion-parallax information). Multiple interleaved staircases were used to measure subjects' estimate of normal gain in each of the two environments.
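The adaptive procedure described above can be illustrated with a minimal sketch of a 1-up/1-down staircase converging on a subjective match point. This is not the authors' code; the step size, starting gain, and simulated observer are assumptions for illustration only.

```python
# Illustrative sketch (not the study's actual implementation): a simple
# 1-up/1-down staircase that adjusts optic-flow gain toward the gain an
# observer judges as matching their walking speed. In the real experiment,
# several such staircases were interleaved and responses came from subjects.

def run_staircase(true_match=1.0, start_gain=1.5, step=0.05, n_trials=40):
    """Return an estimate of the subjective match point.

    The simulated observer (an assumption) responds "too fast" whenever
    the presented gain exceeds the hypothetical true match point.
    """
    gain = start_gain
    history = []
    for _ in range(n_trials):
        history.append(gain)
        too_fast = gain > true_match        # forced-choice response
        gain += -step if too_fast else step  # step down if too fast, else up
    # Estimate the match as the mean gain over the final trials,
    # where the staircase oscillates around the match point.
    return sum(history[-10:]) / 10

estimate = run_staircase()
```

With these parameters the staircase descends from the starting gain and then oscillates within one step of the simulated match point, so the trailing average lands close to 1.0.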
With pillars in the corridor, matched gains in both the treadmill and solid-ground conditions did not differ reliably from 1. Even in the absence of pillars, matched gains in the two conditions were elevated by only about 10%.
Evidently, there can be good calibration in VR when the visual environment is sufficiently rich. Moreover, performance is comparable when matching either enforced (treadmill) speeds or voluntarily produced (solid ground) motions. Given these results, we are now in a position to use VR to examine what visual information is used for detecting a discrepancy between locomotor activity and the resulting visual experience.
Swarthmore College Faculty Research Grant, NSF LIS IRI-9720327