September 2015
Volume 15, Issue 12
Vision Sciences Society Annual Meeting Abstract | September 2015
Human egocentric position estimation
Author Affiliations
  • Arash Yazdanbakhsh
    Vision Laboratory, Center for Computational Neuroscience and Neural Technology, Boston University
  • Celia Gagliardi
    Vision Laboratory, Center for Computational Neuroscience and Neural Technology, Boston University
Journal of Vision September 2015, Vol.15, 955. doi:https://doi.org/10.1167/15.12.955
Abstract

The study of egocentric position estimation may shed light on visuospatial memory processes, such as how physical space is represented in our visual system. However, little has been done to test egocentric position estimation in humans, and the few studies to date have investigated it with reference to the ground or other stable visual cues. In this study, human participants were introduced to a three-dimensional virtual reality (VR) environment generated by a 120 Hz projector with a polarization filter and polarizing glasses that separated the left- and right-eye images in a frame-sequential stereoscopic configuration. Participants were placed at the center of a VR cylinder in which 5,000 sesame dots were distributed at random distances (112 cm to 168 cm from the participant), azimuths (±29.12°), and heights (120 cm to 188 cm from the floor). Nine possible targets were located on a horizontal plane 50 cm from the participant's eye level, arranged in a three-by-three matrix of right, left, and center azimuths (±18.64° and 0°) and near, middle, and far distances (126 cm, 137 cm, and 147 cm from the participant). Participants were presented with one randomly selected target for 3 to 5 seconds before the target was relocated to a random location within the sesame-dot field. The task was to reposition the target back to its original position. Depth was induced using disparity and size cues, except for the sesame dots, which carried only disparity cues. Errors in distance repositioning occurred along the line of sight, and repositioning showed an attraction bias toward the relocated target. Elevation repositioning was higher for farther targets (P < 0.05, F = 3.19). Azimuth repositioning for left targets was significantly shifted to the right (P < 0.01, F = 18.42). Standard deviations of distance repositioning were invariant with target distance, suggesting that egocentric visuospatial representation is based on Cartesian metrics rather than polar coordinates.
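To make the stimulus geometry above concrete, here is a minimal Python sketch that samples a sesame-dot field and the nine target positions using the parameters quoted in the abstract. The uniform sampling, the coordinate conventions, the 170 cm eye height, and the assumption that the target plane lies below eye level are ours, not from the study.

    import numpy as np

    rng = np.random.default_rng(0)

    # Sesame-dot field: 5,000 dots at random distance, azimuth, and height
    # (values from the abstract; uniform sampling is our assumption).
    n_dots = 5000
    dist = rng.uniform(112.0, 168.0, n_dots)           # cm from the participant
    azim = np.deg2rad(rng.uniform(-29.12, 29.12, n_dots))
    height = rng.uniform(120.0, 188.0, n_dots)         # cm from the floor

    # Cartesian coordinates centered on the participant
    # (x: rightward, y: forward, z: up).
    dots = np.column_stack([dist * np.sin(azim), dist * np.cos(azim), height])

    # Nine possible targets: a 3 x 3 matrix of azimuth x distance on a
    # horizontal plane 50 cm from eye level (below eye level and a 170 cm
    # eye height are hypothetical; neither is stated in the abstract).
    eye_height = 170.0                                 # cm, hypothetical
    target_az = np.deg2rad([-18.64, 0.0, 18.64])
    target_dist = np.array([126.0, 137.0, 147.0])
    az_grid, d_grid = np.meshgrid(target_az, target_dist)
    targets = np.column_stack([
        (d_grid * np.sin(az_grid)).ravel(),
        (d_grid * np.cos(az_grid)).ravel(),
        np.full(9, eye_height - 50.0),
    ])
    print(dots.shape, targets.shape)                   # (5000, 3) (9, 3)

The closing inference can be read as a contrast between two noise models. The toy computation below, with hypothetical noise parameters of our choosing, illustrates why a flat standard deviation across target distances favors a Cartesian over a polar representation: radial noise that is a fixed fraction of distance (polar, Weber-like) grows with distance, while constant Cartesian noise does not.

    # Toy contrast of noise models behind the closing inference (our sketch):
    # polar coding with distance-proportional radial noise predicts SDs that
    # grow with target distance; constant Cartesian noise predicts flat SDs.
    sigma_cartesian = 3.0            # cm; hypothetical constant Cartesian noise
    weber_fraction = 0.025           # hypothetical Weber fraction, radial noise
    for d in (126.0, 137.0, 147.0):  # near, middle, far target distances (cm)
        print(f"target {d:.0f} cm: polar SD = {weber_fraction * d:.2f} cm, "
              f"Cartesian SD = {sigma_cartesian:.2f} cm")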

Meeting abstract presented at VSS 2015
