Vision Sciences Society Annual Meeting Abstract | August 2014
The Influence of Emotion on Audiovisual Integration in the McGurk Effect
Author Affiliations
  • Theresa Cook
    University of California, Riverside
  • James Dias
    University of California, Riverside
  • Lawrence Rosenblum
    University of California, Riverside
Journal of Vision August 2014, Vol.14, 444. doi:https://doi.org/10.1167/14.10.444
Abstract

In the McGurk Effect, cross-modally discrepant auditory and visual speech information is resolved into a unified percept. For example, the sound of a person articulating "ba" paired with a video display of a person articulating "ga" typically creates the heard percept "da." Furthermore, the McGurk Effect is robust to certain variables, including cross-modally incongruent gender and whether the stimuli are spoken or sung, but is affected by other variables, such as whether the audiovisually inconsistent phoneme creates a word or non-word. We tested the influence of emotion on the McGurk Effect. In Experiment 1, we recorded the audiovisual utterances of a model articulating /ba/, /da/, and /ga/ using happy, mad, sad, and neutral tones of voice and facial gestures. In experimental trials, auditory /ba/ was dubbed onto visual /ga/ to create McGurk stimuli typically heard as /da/. Emotional expression was included in the auditory channel, the visual channel, both channels, or neither channel. The comparison of interest was the strength of the McGurk Effect between stimuli with and without emotion. Experiment 2 tested the strength of the McGurk Effect using the same stimuli, but with reduced emotion information: we masked the visual stimuli so that only the articulatory gestures of the mouth were visible. We found that the strength of the McGurk Effect is reduced by emotional expressions (p < 0.001). Furthermore, when we reduced the amount of visible emotion information in Experiment 2, the strength of the McGurk Effect was equivalent across all stimuli. These results may suggest that emotion information drains perceptual resources used in the audiovisual integration of speech. Findings will be discussed in light of the idea that the objects of perception in both cases may be the intended gestures of the communicator.

Meeting abstract presented at VSS 2014
