Vision Sciences Society Annual Meeting Abstract | August 2012
Navigating in a changing world: enhancing the discrimination between view-based and Cartesian models.
Author Affiliations
  • Lyndsey Pickup
    School of Psychology and Clinical Language Sciences, University of Reading
  • Andrew Glennerster
    School of Psychology and Clinical Language Sciences, University of Reading
Journal of Vision August 2012, Vol.12, 1197. doi:https://doi.org/10.1167/12.9.1197
Abstract

View-based and Cartesian representations provide rival accounts of visual navigation in humans. We previously (VSS 2011) developed Cartesian and view-based models to explain human performance on a simple homing task in an immersive virtual reality environment. Here we show how the discriminability of the two models can be enhanced by introducing subtle (unnoticed) changes in the scene between the reference and 'homing' intervals.

In interval one, participants were shown three very long coloured vertical poles from one viewing location, with some head movement permitted so that both binocular stereopsis and motion parallax over a baseline of up to 80 cm provided information about the 3D layout and the position of the poles relative to the participant's location. The poles were easily distinguishable from one another and were designed to have a constant angular width irrespective of viewing distance. Participants were then transported (virtually) to another location in the scene and, in interval two, attempted to navigate back to the initial viewing point relative to the poles.

Critically, the location of one of the poles was changed slightly between intervals one and two, with the exact shift chosen so that the rival models could be distinguished most readily. Specifically, our models predicted distributions that differed from one another not only in shape but also in the mean point to which people were expected to walk in the virtual room. In the case of view-based models, the shifted pole also allows us to discard many candidate models from the large set we proposed previously. Overall, the view-based models continue to provide a better description of the human data on this new dataset, with likelihoods averaging four times those of the 3D-based models.
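
To make the model-comparison step concrete, the snippet below is a minimal illustrative sketch, not the authors' analysis code: it assumes, purely for illustration, that each model's prediction about homing endpoints can be summarised as a 2D Gaussian with made-up means and covariances, and it compares the models by the likelihood each assigns to a few hypothetical observed endpoints.

    # Minimal sketch of likelihood-based model comparison (hypothetical numbers).
    import numpy as np
    from scipy.stats import multivariate_normal

    # Assumed predicted endpoint distributions (metres, virtual-room coordinates).
    view_based = multivariate_normal(mean=[0.10, -0.05], cov=[[0.04, 0.0], [0.0, 0.09]])
    cartesian  = multivariate_normal(mean=[0.00,  0.00], cov=[[0.02, 0.0], [0.0, 0.02]])

    # Hypothetical observed homing endpoints, one (x, y) per trial.
    endpoints = np.array([[0.12, -0.02], [0.05, -0.10], [0.15, 0.01]])

    # Per-trial likelihoods of the data under each model.
    L_view = view_based.pdf(endpoints)
    L_cart = cartesian.pdf(endpoints)

    # Ratio of average likelihoods; the abstract reports a factor of roughly
    # four in favour of the view-based models on the real data.
    print("mean likelihood ratio (view-based / Cartesian):", L_view.mean() / L_cart.mean())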

Meeting abstract presented at VSS 2012
