Abstract
Humans use a multitude of cues to orient themselves in space. For homing in unknown terrain, it is fundamental to keep track of the starting position and of one's own position and orientation relative to this home location, an ability commonly referred to as spatial updating. When space is explored by walking, our senses work together to support this ability: visual or auditory landmarks provide external signals, and locomotion itself provides rich information for path integration through kinesthetic signals and efference copies from the motor system. How do humans integrate these different signals into a coherent representation of space? Here we tested six blindfolded participants in a triangle completion task. In a sports hall (25 x 45 m), participants were either guided along the first two legs of a triangular path or passively transported in a wheelchair to the release position along an indirect, strongly meandering path (5-10 times longer than the path home). Participants were then instructed to walk in a straight line back to the starting position, which was sometimes marked by an auditory landmark. The task was repeated with different kinds of information available: path integration only (walking, no audio cues), audio landmark only (passive transport, then homing using audio landmark cues alone), or both (blindfolded walking with audio cues). We find that humans can use either source of spatial information alone for homing, with audio landmarks yielding higher homing precision than path integration. When both sources are available, the multimodal estimate is a weighted sum of the unimodal estimates, with the weight of each sensory modality proportional to its precision. These results suggest that humans can estimate the precision of each sensory source of information and combine the sensory signals in a statistically optimal fashion.
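The precision-weighted combination described above can be sketched as follows. This is a minimal illustration of the standard maximum-likelihood cue-integration model, not the authors' analysis code; the standard deviations used here are hypothetical placeholders, since the abstract reports no numerical values.

```python
import numpy as np

# Hypothetical standard deviations (in meters) of the two unimodal
# homing estimates; actual values are not given in the abstract.
sigma_path = 2.0   # path integration alone (less precise)
sigma_audio = 1.0  # auditory landmark alone (more precise)

# Precision is the inverse variance; the optimal (maximum-likelihood)
# weight of each cue is its precision normalized by the total precision.
prec_path = 1.0 / sigma_path**2
prec_audio = 1.0 / sigma_audio**2
w_path = prec_path / (prec_path + prec_audio)
w_audio = prec_audio / (prec_path + prec_audio)

def combined_estimate(x_path, x_audio):
    """Precision-weighted sum of the two unimodal home-position estimates."""
    return w_path * x_path + w_audio * x_audio

# Under this model, the combined estimate is more precise (has a smaller
# standard deviation) than either unimodal estimate alone.
sigma_combined = np.sqrt(1.0 / (prec_path + prec_audio))
```

With these placeholder values the more precise auditory cue receives weight 0.8 and path integration weight 0.2, and the combined standard deviation (about 0.89 m) falls below that of the better single cue, which is the signature of statistically optimal integration.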
Meeting abstract presented at VSS 2014