Abstract
The world provides contextual information to multiple sensory modalities that we humans can use to construct coherent representations of the environment. In order to investigate whether crossmodal semantic context modulates visual awareness, we measured participants' dominant percept under conditions of binocular rivalry while they listened to an ongoing background auditory soundtrack. Binocular rivalry refers to the phenomenon whereby, when different figures are presented to corresponding locations in each eye, observers perceive each figure as being dominant in alternation over time. In Experiment 1, the participants viewed a dichoptic figure consisting of a bird and a car either in silence (no-sound condition) or while listening to a bird, a car, or a restaurant soundtrack. The target of participants' attentional control over the dichoptic figure and the relative luminance contrast between the figures presented to each eye were varied in Experiments 2 and 3, while the meaning of the sound (i.e., bird or car) that participants listened to was independent of the manipulations taking place in the visual modality. In all three experiments, a robust modulation of binocular rivalry by auditory semantic context was observed. We therefore suggest that this crossmodal semantic congruency effect cannot simply be attributed to the meaning of the soundtrack automatically guiding participants' attention or else biasing their responses; instead, auditory semantic contextual cues likely operate by enhancing the representation of semantically congruent visual stimuli. These results indicate that crossmodal semantic congruency can serve as a constraint that helps to resolve perceptual conflict in the visual system. We further suggest that when considering how the dominant percept in binocular rivalry (and hence human visual awareness) emerges, information from other sensory modalities also needs to be considered; and, in turn, that multisensory stimulation provides a novel means of probing the mechanisms underlying human visual awareness.
This research was supported by a joint project funded by the British Academy (CQ ROKK0) and the National Science Council in Taiwan (NSC 97-2911-I-002-038). Yi-Chuan Chen was supported by the Ministry of Education in Taiwan (SAS-96109-1-US-37).