Vision Sciences Society Annual Meeting Abstract | November 2002
Self-motion sensation in virtual reality improves spatial updating for mobile observer
Author Affiliations
  • Michiteru Kitazaki
    Toyohashi University of Technology, Japan
  • Tomoya Yoshino
Journal of Vision November 2002, Vol. 2, 633. https://doi.org/10.1167/2.7.633
Abstract

Human performance in detecting a layout change is view-dependent when the observer's viewpoint remains constant, but view-independent when the observer walks to a new viewing position (Simons & Wang, 1998, Psych. Sci.) or when the observer's motion is visually simulated in virtual reality (an effect on reaction time, though not on accuracy; Christou & Bülthoff, 1999, Max-Planck-Institut Tech. Rep.). We hypothesized that a sensation of self-motion in virtual reality could improve this view-independent performance, so we used a large visual display with rich motion information.

[Methods] Expt. 1: We simulated five objects on a table (2×2 m top) centered in a simple room (9×9×9 m) with texture-mapped (16×16 checkers) floor and walls. The scene was projected onto a screen (2.3×2 m) and observed from a distance of 2.7 m. The viewpoint either was simulated to move, by rotating the room 47 deg around its central vertical axis, or remained constant. The retinal projection of the objects was manipulated independently: the objects and the table either rotated together around the vertical axis (different retinal view) or remained constant (same retinal view). Subjects (n=10) observed the entire scene for 3 s; the viewpoint and/or the table then moved for 7 s while the objects were occluded and one of the objects moved to a new position. Subjects then saw the entire scene again for 3 s and identified the moved object. Expt. 2: We conducted a similar experiment using 3,500 spheres randomly positioned in the space around the table instead of the room (n=10).

[Results] ANOVA showed a main effect of retinal view (Expt. 1: p<.003; Expt. 2: p<.008) and an interaction of viewpoint change and retinal view (Expt. 1: p<.04; Expt. 2: p<.004). These results indicate that performance in detecting a layout change was view-dependent for a stationary observer, but view-independent when the observer's self-motion was visually simulated on a large display with rich motion information. Subjects also reported a sensation of self-motion in Expt. 2. These findings suggest that visually induced self-motion sensation improves spatial updating.

Supported by the Nissan Science Foundation and the Japan Society for the Promotion of Science.
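The four conditions form a 2 (viewpoint: moved vs. constant) × 2 (retinal view: same vs. different) design: a simulated viewpoint change is produced by rotating the room, and a retinal-view change by rotating the table and objects. The Python sketch below illustrates that geometry under stated assumptions; the function and variable names (rotate_y, retinal_layout) and the object coordinates are illustrative, not the authors' code.

    import numpy as np

    # Screen geometry from the Methods: a 2.3 x 2 m screen viewed from 2.7 m
    # subtends roughly 46 deg of horizontal visual angle.
    SCREEN_W, VIEW_DIST = 2.3, 2.7
    fov_h = 2 * np.degrees(np.arctan(SCREEN_W / (2 * VIEW_DIST)))   # ~46 deg

    ROTATION_DEG = 47.0   # rotation of the room (simulated viewpoint) or the table

    def rotate_y(points, deg):
        """Rotate N x 3 points (metres) about the central vertical (y) axis."""
        th = np.radians(deg)
        c, s = np.cos(th), np.sin(th)
        R = np.array([[ c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])
        return np.asarray(points, float) @ R.T

    def retinal_layout(objects, viewpoint_moved, table_moved):
        """Object positions in the observer's frame for one cell of the design.

        A simulated viewpoint rotation is equivalent to rotating the world the
        opposite way, so moving both the viewpoint and the table (or neither)
        leaves the retinal view unchanged; moving exactly one changes it.
        """
        pts = np.asarray(objects, float)
        if table_moved:
            pts = rotate_y(pts, ROTATION_DEG)
        if viewpoint_moved:
            pts = rotate_y(pts, -ROTATION_DEG)
        return pts

    # Hypothetical object positions on the 2 x 2 m table top (metres):
    objects = [[0.5, 0.9, 0.3], [-0.4, 0.9, -0.2], [0.1, 0.9, -0.6]]
    print(np.allclose(retinal_layout(objects, True, True),
                      retinal_layout(objects, False, False)))  # True: same retinal view

The cancellation in the last line is the crux of the design: when the simulated observer motion matches the table rotation, the retinal image is identical to the no-motion baseline, so any performance difference must come from spatial updating rather than the image itself.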
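The reported analysis is a two-way repeated-measures ANOVA over those factors. The following is a minimal sketch of how such an analysis could be run in Python with statsmodels; the data file and column names are assumptions for illustration, not the authors' materials.

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Hypothetical long-format table: one detection-accuracy score per subject
    # per condition (file and column names are assumptions, not the authors').
    df = pd.read_csv("expt1_accuracy.csv")  # columns: subject, viewpoint, retinal_view, accuracy

    res = AnovaRM(df, depvar="accuracy", subject="subject",
                  within=["viewpoint", "retinal_view"]).fit()
    print(res)  # F and p values for the two main effects and their interaction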

Kitazaki, M., & Yoshino, T. (2002). Self-motion sensation in virtual reality improves spatial updating for mobile observer [Abstract]. Journal of Vision, 2(7):633, 633a, http://journalofvision.org/2/7/633/, doi:10.1167/2.7.633.