Celia Gagliardi, Arash Yazdanbakhsh; Investigating human memory of self-position using a virtual 3-dimensional visual environment. Journal of Vision 2016;16(12):354. doi: https://doi.org/10.1167/16.12.354.
© ARVO (1962-2015); The Authors (2016-present)
Knowing one's location in space is crucial for navigation, especially in unknown environments. Systems involved in self-localization for spatial navigation have recently been identified in rodent brains, but the mechanisms of self-localization are not completely understood. Human behavioral experiments can enhance understanding of self-localization systems in the human brain. In this study, human observers are placed in a visual virtual reality (VR) environment to perform a self-positioning task. The VR is created using a 120 Hz projector and a polarizing filter that separates left- and right-eye images when viewed through 3D glasses. Disparity and size cues generate the perception of depth. Participants are placed at a virtual position on the ground for five seconds and are then displaced by shifting the virtual environment around a stable object. Their task is to return to their initial ground position using button controls. Principal component analyses show that self-repositioning errors do not lie along any particular axis and that each participant has a unique repositioning pattern. Trial duration does not affect repositioning accuracy, and the absence of disparity cues increases the standard error of repositioning in all directions. Some participants show lower errors when their initial self-position appears on one or the other side of the stable object, suggesting a link between memory of self-position and a preferred point of reference. Future directions of this project are to explore between-subject differences and the effect of stimulus presentation at different frequencies on memory of self-position.
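The principal component analysis described above can be sketched as follows. This is a minimal illustration, not the authors' actual analysis code: the error data here are synthetic placeholders, and the variable names (`errors`, `explained`) are assumptions. It shows how per-trial 3-D repositioning errors (final minus initial virtual position) could be decomposed to check whether error variance concentrates along any particular axis.

```python
import numpy as np

# Hypothetical 3-D repositioning errors in meters, one row per trial.
# In the study these would be (returned position - initial position);
# the values below are illustrative only.
rng = np.random.default_rng(0)
errors = rng.normal(scale=[0.3, 0.1, 0.2], size=(50, 3))

# PCA via singular value decomposition of the mean-centered errors.
centered = errors - errors.mean(axis=0)
_, s, components = np.linalg.svd(centered, full_matrices=False)

# Fraction of total variance captured by each principal component.
explained = s**2 / np.sum(s**2)

# If errors were spread across axes rather than concentrated along one,
# the explained-variance fractions would be roughly comparable in size.
for frac, axis in zip(explained, components):
    print(f"{frac:.2f} of variance along direction {np.round(axis, 2)}")
```

A roughly uniform spread of explained variance across the three components would be consistent with the abstract's finding that errors are not aligned with any particular axis, whereas one dominant component would indicate a preferred error direction.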
Meeting abstract presented at VSS 2016