Abstract
In the McGurk Effect, cross-modally discrepant auditory and visual speech information is resolved into a unified percept. For example, the sound of a person articulating "ba" paired with a video of a person articulating "ga" typically creates the heard percept "da." The McGurk Effect is robust to certain variables, including cross-modally incongruent gender and whether the stimuli are spoken or sung, but is affected by other variables, such as whether the audiovisually inconsistent phoneme creates a word or a non-word. We tested the influence of emotion on the McGurk Effect. In Experiment 1, we recorded the audiovisual utterances of a model articulating /ba/, /da/, and /ga/ with happy, mad, sad, and neutral tones of voice and facial gestures. In experimental trials, auditory /ba/ was dubbed onto visual /ga/ to create McGurk stimuli typically heard as /da/. Emotion was expressed in the auditory channel only, the visual channel only, both channels, or neither. The comparison of interest was the strength of the McGurk Effect for stimuli with versus without emotion. Experiment 2 tested the strength of the McGurk Effect using the same stimuli, but with reduced emotion information: we masked the visual stimuli so that only the articulatory gestures of the mouth were visible. We found that the strength of the McGurk Effect was reduced by emotional expressions (p < 0.001). When the visible emotion information was reduced in Experiment 2, the strength of the McGurk Effect was equivalent across all stimuli. These results may suggest that emotion information drains perceptual resources used in the audiovisual integration of speech. Findings will be discussed in light of the idea that the objects of perception in both cases may be the intended gestures of the communicator.
Meeting abstract presented at VSS 2014