Abstract
Spatial navigation tasks require us to memorize the location of a goal using sensory cues from multiple sources. Information about one's position relative to the goal comes from the body's kinaesthetic and proprioceptive senses. External reference points, such as landmarks or beacons, also provide information about an individual's spatial position in a given environment. A single landmark, however, provides ambiguous cues if not combined with additional information. How does this ambiguity affect the accuracy and precision of human navigation? To study general mechanisms of landmark navigation, we used the same experimental paradigm for two different senses: audition and vision. Participants learned the position of a goal defined by a varying number of landmarks and were then displaced to a new position, from which they had to return to the goal. We tested performance (a) with blindfolded participants and auditory landmarks and (b) with sighted participants and visual landmarks in a virtual-reality setup. We quantify navigation performance as the distance between trajectory end-points and the goal. We find that participants are unable to resolve the ambiguity provided by one, two, or three auditory landmarks when the landmarks are not individually identifiable. Experiments with visual landmarks yield a very similar result: participants' performance is closely linked to the ambiguity of the landmarks. These data support the use of a method called snapshot matching, which is well studied in homing insects. We test this hypothesis against the alternative that participants memorize and use only single, individual landmarks. In a second set of experiments, we aim to find out how humans select reliable and useful landmarks and which homing strategies they use when uncertain about the available information.
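As a concrete reading of the end-point measure, the following minimal sketch (our illustration, not code from the study) computes accuracy as the mean Euclidean distance of trajectory end-points from the goal, and precision as their spread around the end-point centroid; all names and data here are hypothetical.

```python
import numpy as np

def endpoint_accuracy(endpoints: np.ndarray, goal: np.ndarray) -> float:
    """Mean Euclidean distance of trajectory end-points (N x 2) from the goal (2,)."""
    return float(np.linalg.norm(endpoints - goal, axis=1).mean())

def endpoint_precision(endpoints: np.ndarray) -> float:
    """Mean distance of end-points from their own centroid (trial-to-trial spread)."""
    centroid = endpoints.mean(axis=0)
    return float(np.linalg.norm(endpoints - centroid, axis=1).mean())

# Illustrative data: end-points of repeated homing trials, goal at the origin.
endpoints = np.array([[0.3, -0.2], [0.5, 0.1], [-0.1, 0.4], [0.2, 0.2]])
goal = np.array([0.0, 0.0])
print(endpoint_accuracy(endpoints, goal))  # systematic error relative to the goal
print(endpoint_precision(endpoints))       # variability across trials
```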
Meeting abstract presented at VSS 2016