Harun Karimpur, Siavash Eftekharifar, Nikolaus F. Troje, Katja Fiehler; Spatial coding for memory-guided reaching in visual and pictorial spaces. Journal of Vision 2020;20(4):1. doi: https://doi.org/10.1167/jov.20.4.1.
An essential difference between pictorial space, displayed in paintings, photographs, or on computer screens, and the visual space experienced in the real world is that in the latter, but not the former, the observer has a defined location and therefore valid information about the distance and direction of objects. Consequently, egocentric information should be more reliable in visual space, whereas allocentric information should be more reliable in pictorial space. The majority of previous studies relied on pictorial representations (images on a computer screen), leaving it unclear whether the same coding mechanisms apply in visual space. Using a memory-guided reaching task in virtual reality, we investigated allocentric coding in both visual space (on a table in virtual reality) and pictorial space (on a monitor placed on the table in virtual reality). Our results suggest that the brain uses allocentric information to represent objects in both pictorial and visual space. Contrary to our hypothesis, the influence of allocentric cues was stronger in visual space than in pictorial space, even after controlling for retinal stimulus size, confounding allocentric cues, and differences in presentation depth. We discuss possible reasons for stronger allocentric coding in visual than in pictorial space.