Abstract
Although there is a rich history of speculation about how stimuli in one sense affect perception and behavior in another, one of the most fundamental issues in this field remains unexplored: whether the neural computations underlying the synthesis of information from different senses (multisensory integration) differ from those underlying the integration of information within a given sense (unisensory integration). Examining this issue experimentally was the motivation for the present study. Cats were trained to detect and then approach either a visual or an auditory stimulus that could appear at any of 7 possible locations (0°, ±15°, ±30°, ±45°), or to maintain fixation on “catch” (no-stimulus) trials. During the testing phase, the cats were presented with a variety of stimulus conditions (visual alone, auditory alone, coincident visual-visual, coincident auditory-auditory, coincident visual-auditory, or catch trials) and were required to detect and then move toward any stimulus that appeared. The probability of a correct response was significantly enhanced by stimulus combinations, but far more so for cross-modal than for within-modal combinations. These data indicate that the underlying computations render multisensory integration substantially more effective than unisensory integration in facilitating behavioral performance, paralleling findings from single neurons in the superior colliculus, a midbrain structure known to be involved in orientation behavior. These results support the contention that there are fundamental differences between multisensory and unisensory integration. Supported by NIH grants NS22543 and NS36916.