Michiteru Kitazaki, Tomoya Yoshino; Self-motion sensation in virtual reality improves spatial updating for mobile observer. Journal of Vision 2002;2(7):633. doi: 10.1167/2.7.633.
© ARVO (1962-2015); The Authors (2016-present)
Human performance in detecting a layout change is view-dependent if the observer's viewpoint remains constant, but view-independent if the observer walks to a new viewing position (Simons & Wang, 1998, Psych. Sci.) or the observer's motion is visually simulated in virtual reality (for reaction time, though not for accuracy; Christou & Bülthoff, 1999, Max-Planck-Institut Tech. Rep.). We hypothesized that a sensation of self-motion in virtual reality could improve this view-independent performance, so we used a large visual display providing rich motion information.

[Methods] Expt 1: We simulated 5 objects on a table (2×2 m top) centered in a simple room (9×9×9 m) with texture-mapped (16×16 checkers) floor and walls. The scene was projected on a screen (2.3×2 m) and observed from a distance of 2.7 m. The viewpoint either was simulated to move by rotating the room (47 deg) around its central vertical axis or remained constant. The retinal projection of the objects was manipulated independently: the objects and the table rotated together around the vertical axis (different retinal view) or remained constant (same retinal view). Subjects (n=10) observed the entire scene for 3 s; then the viewpoint and/or the table moved for 7 s while the objects were occluded, and one of the objects moved to a new position. Subjects then saw the entire scene again for 3 s and identified the moved object. Expt 2: We conducted a similar experiment with 3,500 spheres randomly positioned in the space around the table instead of the room (n=10).

[Results] ANOVA showed a main effect of retinal view (Expt 1: p<.003; Expt 2: p<.008) and an interaction of viewpoint change and retinal view (Expt 1: p<.04; Expt 2: p<.004). These results indicate that performance in detecting a layout change was view-dependent for a stationary observer, but view-independent when the observer's self-motion was visually simulated on a large display with rich motion information. Subjects also reported a sensation of self-motion in Expt 2. These results suggest that visually induced self-motion sensation improves spatial updating.
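The geometric logic of the 2×2 design can be sketched in a few lines of code. The point positions, angle handling, and helper names below are illustrative, not taken from the study's apparatus: when the simulated viewpoint orbits the room's central vertical axis by the same angle that the table and objects rotate, the object positions in the camera's frame (the retinal view) are unchanged; when only the viewpoint moves, the retinal view differs.

```python
import math

def rot_y(p, deg):
    """Rotate point p = (x, y, z) about the vertical (y) axis by deg degrees."""
    t = math.radians(deg)
    x, y, z = p
    return (x * math.cos(t) + z * math.sin(t), y, -x * math.sin(t) + z * math.cos(t))

def camera_frame(p, cam_deg):
    """Express world point p in the frame of a camera that has orbited the
    central vertical axis by cam_deg while looking at the center; this is
    equivalent to counter-rotating the world by cam_deg."""
    return rot_y(p, -cam_deg)

obj = (0.5, 1.0, 0.3)  # hypothetical object position on the table, in meters

# Viewpoint orbits 47 deg AND the table/objects rotate 47 deg with it
# -> same retinal view as before the motion:
same_view = camera_frame(rot_y(obj, 47), 47)

# Viewpoint orbits 47 deg while the table stays put
# -> different retinal view:
diff_view = camera_frame(obj, 47)

print(all(abs(a - b) < 1e-9 for a, b in zip(same_view, obj)))  # retinal view preserved
print(any(abs(a - b) > 1e-9 for a, b in zip(diff_view, obj)))  # retinal view changed
```

This makes explicit why the two factors (viewpoint change, retinal view) can be crossed independently: the stimulus on the screen depends only on the relative orientation of observer and scene.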
—Supported by Nissan Science Foundation and Japan Society for the Promotion of Science