Vision Sciences Society Annual Meeting Abstract  |   September 2016
Rich-cue virtual environments can be disadvantageous when discriminating navigation models
Author Affiliations
  • Ellis Gootjes-Dreesbach
    School of Psychology & Clinical Language Sciences, University of Reading
  • Lyndsey Pickup
    School of Psychology & Clinical Language Sciences, University of Reading
  • Andrew Fitzgibbon
    Microsoft Research Ltd
  • Andrew Glennerster
    School of Psychology & Clinical Language Sciences, University of Reading
Journal of Vision September 2016, Vol. 16, 292. https://doi.org/10.1167/16.12.292
© ARVO (1962-2015); The Authors (2016-present)
Abstract

We have shown that, in a sparse-cue environment, small changes in scene layout can significantly affect the precision with which observers can return to a previously viewed location (Pickup, L.C., Fitzgibbon, A.W. and Glennerster, A. (2013) Biological Cybernetics, 107, 449-464). The scene consisted of three very long vertical poles viewed from one of three locations, with stereo and motion parallax cues available. The participants were transported (virtually) to a different part of the scene and had to return to the original location. The spread of errors varied systematically with the configuration of poles in this sparse scene. There was no floor or background and no informative size cues (the poles were one pixel wide), so the only visual cues for determining scene layout and observer location were the changing angles (and binocular disparity) between the poles as the observer moved. We have developed a model of navigation based on 3D reconstruction of the scene (Pickup et al., 2013) and a quite different type of model based on matching 'view-based' parameters at the estimated 'home' location (Pickup, L.C., Fitzgibbon, A.W., Gilson, S.J., and Glennerster, A. (2011) IEEE 10th IVMSP Workshop, 135-140). Here, we make an explicit comparison between the two types of model. Likelihoods of the data fall within the distribution of likelihoods sampled from the view-based model, but not within that sampled from the 3D reconstruction model. We have repeated the navigation experiment in a rich-cue environment, so that the same vertical poles are now viewed in a room with a floor, furniture and paintings on the wall. The variance of the homing data, and its dependence on scene structure, is significantly reduced in the rich-cue condition, making it much harder to discriminate between rival models.
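The model comparison described above rests on a parametric-bootstrap style test: the likelihood of the observed homing data under a model is compared against the distribution of likelihoods that the model assigns to datasets sampled from itself. The following is a minimal sketch of that logic in Python with NumPy; the isotropic Gaussian error model, the parameter values, and all function names are illustrative assumptions, not the authors' actual models or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(errors, sigma):
    """Log-likelihood of 2D homing errors under an isotropic Gaussian model (assumed stand-in)."""
    return (-0.5 * np.sum(errors ** 2) / sigma ** 2
            - errors.size * np.log(sigma * np.sqrt(2 * np.pi)))

def sampled_likelihoods(sigma, n_trials, n_samples=1000):
    """Draw synthetic datasets from the model and return their log-likelihoods."""
    lls = np.empty(n_samples)
    for i in range(n_samples):
        synthetic = rng.normal(0.0, sigma, size=(n_trials, 2))
        lls[i] = log_likelihood(synthetic, sigma)
    return lls

# Hypothetical observed homing errors: 20 trials, x/y components (metres)
observed = rng.normal(0.0, 0.3, size=(20, 2))

model_sigma = 0.3  # the model's predicted spread (illustrative value)
ll_obs = log_likelihood(observed, model_sigma)
ll_dist = sampled_likelihoods(model_sigma, observed.shape[0])

# The data are consistent with the model if the observed likelihood falls
# within the central mass of the model's own likelihood distribution.
lo, hi = np.percentile(ll_dist, [2.5, 97.5])
consistent = bool(lo <= ll_obs <= hi)
```

A model is rejected when the observed data's likelihood lies outside the range the model itself typically produces; applying the same check to each candidate model gives the kind of discrimination reported in the abstract.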

Meeting abstract presented at VSS 2016
