Abstract
In combining information from multiple sources, the brain must determine which cues to use and how to weight them. When the cues specify similar values, a maximum-likelihood model, which weights cues in proportion to their inverse variances, accurately predicts performance. In such cases, discrepancies between the cue values are likely due to measurement noise, so using the weighted average is a good strategy. If, however, one of the cues specifies a very different value, the discrepancy relative to the other cues is more likely to be due to a bias in the cue estimate or to the cue coming from a different object. In this case, using the weighted average is not a good strategy. A statistically robust model would down-weight the outlying cue. We asked whether the signal from one cue is excluded when it conflicts greatly with signals from other cues. We created a three-cue environment in which visual, haptic, and auditory positions in space were specified independently. Observers indicated the perceived location of the stimulus when there were small or large conflicts between the cues. When conflicts were small, observers used the weighted average of the three cues with weights inversely proportional to cue variances. When two cues agreed closely with each other but conflicted greatly with the third, the weight given to the third cue was significantly reduced. We conclude that the brain displays statistical robustness in combining information from different sensory modalities.
AFOSR Grant F49620-01-1-0417