Abstract
The study of egocentric position estimation may shed light on visuospatial memory processes, such as the representation of physical space in our visual system. However, little has been done to test egocentric position estimation in humans, and the few studies to date have investigated it only in the context of a ground plane or other stable visual cues. In this study, human participants were introduced to a three-dimensional virtual reality (VR) generated by a 120 Hz projector with a polarization filter and polarizing glasses that separated left- and right-eye images in a frame-sequential stereoscopic configuration. Participants were placed at the center of a VR cylinder in which 5,000 sesame dots were distributed at random distances (112 cm to 168 cm) from the participant, azimuths (±29.12°), and heights (120 cm to 188 cm from the floor). Nine possible targets were located on a horizontal plane 50 cm from the participant's eye level, arranged in a three-by-three matrix at left, center, and right azimuths (−18.64°, 0°, +18.64°) and near, middle, and far distances (126 cm, 137 cm, 147 cm from the participant). Participants were presented with one randomly selected target for 3 to 5 seconds before the target was relocated to a random location within the sesame dot field. The task was to reposition the target to its original location. Depth was induced using disparity and size cues, except for the sesame dots, which carried only disparity cues. Errors in distance repositioning occurred along the line of sight, and repositioning showed an attraction bias toward the relocated target. Elevation repositioning was higher for farther targets (P < 0.05, F = 3.19). Azimuth repositioning for left targets was significantly shifted to the right (P < 0.01, F = 18.42). Standard deviations of distance repositioning did not vary with target distance, suggesting that egocentric visuospatial representation is based on Cartesian metrics rather than polar coordinates.
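As a rough illustration of the target geometry described above (not code from the study), the sketch below converts the nine target positions from egocentric polar coordinates (azimuth, distance) into Cartesian coordinates. The 50 cm plane offset from eye level is taken from the abstract, but its sign (above vs. below eye level) and the assumption that distance is measured horizontally within the target plane are this sketch's own assumptions.

```python
import math

# Illustrative sketch only: the nine targets from the abstract,
# specified by azimuth (deg) and distance (cm) on a plane offset
# 50 cm from eye level (sign of the offset is assumed here).
AZIMUTHS_DEG = (-18.64, 0.0, +18.64)   # left, center, right
DISTANCES_CM = (126.0, 137.0, 147.0)   # near, middle, far
PLANE_OFFSET_CM = -50.0                # assumed: 50 cm below eye level


def target_xyz(azimuth_deg: float, distance_cm: float) -> tuple[float, float, float]:
    """Return (x, y, z) in cm with the observer's eye at the origin:
    x = rightward, y = forward (line of sight at 0 deg azimuth), z = up.
    Distance is treated as horizontal range within the target plane."""
    az = math.radians(azimuth_deg)
    x = distance_cm * math.sin(az)
    y = distance_cm * math.cos(az)
    z = PLANE_OFFSET_CM
    return (x, y, z)


if __name__ == "__main__":
    for d in DISTANCES_CM:
        for a in AZIMUTHS_DEG:
            x, y, z = target_xyz(a, d)
            print(f"az={a:+6.2f} deg, dist={d:5.1f} cm -> "
                  f"x={x:+7.1f}, y={y:6.1f}, z={z:+6.1f} cm")
```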
Meeting abstract presented at VSS 2015