Abstract
In a typical large-scale environment, numerous visual cues can play an important role in localization and path selection. We investigated how individuals distributed their gaze when acquiring knowledge about a new space and how these gaze patterns led to a knowledge representation that supported spatial localization within that space. We trained and tested subjects in a virtual reality indoor environment. Visual landmarks, in the form of unique “pictures,” were placed at uniform intervals within the environment. We recorded the total amount of time each participant spent looking at each landmark while exploring and learning the environment. After a training period, participants were tested with tailored environments from which we removed half of the visual cues. Participants were shown a single view and instructed to indicate where in the environment that view was generated. In one condition we removed the visual landmarks that participants had gazed at for the least amount of time during exploration; in a second condition we removed the landmarks with the highest gaze time. Participants were also tested with all landmarks present. We compared response accuracy across conditions to assess the effect of landmark removal on performance. We found no difference in accuracy between viewing all of the visual landmarks and viewing only the high-gaze-time landmarks (t(6) = 0.8, p > .05). However, performance was significantly worse with only the low-gaze-time landmarks than with all landmarks present (t(6) = 0.04, p < .05). Notably, the high- and low-gaze-time landmarks were not distributed equitably: high-gaze-time landmarks were typically found at the ends of hallways, while low-gaze-time landmarks were found within the corridors. These results suggest that human observers are strategic in choosing which landmarks of an environment to encode.