Abstract
To what extent does providing people with a low-latency, fully tracked, geometrically detailed, and faithfully sized avatar representation of themselves within a head-mounted display (HMD)-based immersive virtual environment (IVE) affect the accuracy with which they estimate egocentric distances in that environment?
Previous studies have found that, under most common conditions, people tend to underestimate egocentric distances in IVEs, yet the reasons for this underestimation remain poorly understood. One theory is that, because of the many inherent uncertainties involved in being immersed in a novel virtual environment, people hesitate, at least initially, to assume that they can act on the visual stimulus provided by the HMD in the same way that they would act on the equivalent visual stimulus obtained in the real world. This suggests that we might be able to facilitate accurate spatial perception in IVEs by reducing these uncertainties.
In particular, although previous studies have shown that people's default ability to accurately judge egocentric distances in the real world is not impaired when they are prevented from looking down and viewing their bodies, it remains unknown what effects might result from providing people with a plausibly realistic self-embodiment in an IVE.
We have developed low-overhead methods for locally re-sizing a pre-defined avatar model to conform to an individual's body measurements. Using a 12-camera Vicon MX40+ tracking system, we dynamically update the avatar's pose to follow the person's movements in real time. In this poster we present the results of a between-subjects experiment in which people are immersed in a novel IVE, either with or without an avatar self-representation, and are asked to indicate distance judgments by blind walking to randomly placed targets. To facilitate the between-subjects comparison, each participant's distance estimation accuracy in the virtual environment is measured relative to his or her baseline accuracy in the real world.
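The local re-sizing can be illustrated with a minimal sketch (all names and measurements below are hypothetical, not our actual implementation): each body segment of the pre-defined avatar is scaled independently along its bone axis so that its length matches the corresponding measurement taken from the participant.

    # Minimal sketch (hypothetical names and values): locally re-size a
    # pre-defined avatar so each body segment matches a participant's
    # measured segment lengths.

    TEMPLATE_SEGMENTS = {  # segment lengths of the template avatar, in meters
        "torso": 0.52, "upper_arm": 0.28, "forearm": 0.26,
        "thigh": 0.44, "shin": 0.42,
    }

    def segment_scales(measured):
        """Per-segment scale factors: measured length / template length."""
        return {name: measured[name] / TEMPLATE_SEGMENTS[name]
                for name in TEMPLATE_SEGMENTS}

    # Example: a participant with longer legs than the template avatar.
    participant = {"torso": 0.50, "upper_arm": 0.29, "forearm": 0.25,
                   "thigh": 0.47, "shin": 0.45}
    for segment, s in segment_scales(participant).items():
        print(f"{segment}: scale by {s:.3f} along the bone axis")

Likewise, one plausible formalization of the relative accuracy measure (assumed here; the abstract does not specify the exact normalization) expresses each blind-walking response as the ratio of walked distance to true target distance, and then divides the participant's mean ratio in the virtual environment by his or her mean ratio in the real-world baseline:

\[
\hat{r} \;=\; \frac{(\,d_{\text{walked}} / d_{\text{target}}\,)_{\text{VE}}}{(\,d_{\text{walked}} / d_{\text{target}}\,)_{\text{RW}}},
\]

so that \(\hat{r} = 1\) indicates that a participant judges distances in the IVE as accurately as in the real world, and \(\hat{r} < 1\) indicates relative underestimation.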
This research was funded by the National Science Foundation (IIS-0713587).