Chia-Ling Li, M. Aivar, Matthew Tong, Mary Hayhoe; Visual search in large-scale spaces: Spatial memory and head movements. Journal of Vision 2017;17(10):926. doi: https://doi.org/10.1167/17.10.926.
In everyday search, both visual information and spatial memory are used, even when targets are within the field of view. Yet little is known about the relative importance of these two kinds of cues and how they work together. It seems likely that spatial memory representations guide not only eye movements but also head and body movements (Aivar et al., 2015; Won et al., 2015). We explored this issue in a large-scale immersive environment. Subjects searched for targets located on four surfaces in each of the two rooms of a virtual reality apartment. At the start of each trial, the target was presented on a TV at the end of the hallway separating the two rooms. Subjects then turned around and made either a left or right turn to enter a room. To test whether memory aided search by guiding head movements once subjects entered the room, we analyzed the angle between head direction and target direction at the moment of the first fixation on the target. The angle between head and target became about 24 degrees smaller after only three repetitions of search (at least for targets at locations that were easiest to orient to upon entrance). The next question was whether memory could help prepare the body to move in the right direction even before entering the room. We therefore analyzed the angle between head direction and target direction one second before room entry. Smaller angles were found with experience (by about 11 degrees), even though the targets were not yet visible. Together, these findings suggest that spatial memory for target location is used for advance planning of body movements, even before the search scene is visible. Thus, making movement decisions based on memory allows for more efficient search when body movements are involved.
Meeting abstract presented at VSS 2017