Abstract
Perception of multisensory events, such as a person speaking, relies on binding information from distinct sensory modalities into a unitary percept. A temporal window of integration for multisensory events provides the flexibility to accommodate latency differences arising both from variable physical transmission rates through the environment and from variable neural transmission rates within the brain (Shelton, 2010). Previous research has shown that the width of this temporal window is influenced by factors including attention, spatial disparity, and stimulus complexity (e.g., Spence & Parise, 2010). The extent to which the temporal window of integration for speech is influenced by emotion, however, remains unknown. In the present study, we demonstrate that an angry expression reduces temporal sensitivity for detecting auditory-visual asynchrony in speech perception. The auditory and visual streams of a video of a person uttering syllables were presented to participants at varying time delays using the method of constant stimuli. For each auditory stream, the accompanying visual stream was manipulated to assume a happy, neutral, or angry facial expression. Participants made unspeeded temporal order judgments indicating whether the auditory or visual stream occurred first. Facial expression did not influence the point of subjective simultaneity (PSS), suggesting that expression either does not influence the perception of speech onset or does so equally for the visual and auditory modalities. An angry expression did, however, significantly increase the just noticeable difference (JND), indicating reduced sensitivity for detecting temporal asynchronies between the auditory and visual speech streams. Our results provide evidence that emotion processing influences the perception of audiovisual synchrony.
Meeting abstract presented at VSS 2014
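The abstract does not specify how the PSS and JND were estimated. A common approach in temporal order judgment studies is to fit a cumulative Gaussian psychometric function to the proportion of "visual first" responses across stimulus onset asynchronies (SOAs), taking the PSS as the 50% point and the JND as half the 25%-75% interval of the fitted curve. The sketch below illustrates that standard analysis only; the SOA values, response proportions, and function names are hypothetical and not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical stimulus onset asynchronies in ms (negative = audio leads)
soa = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
# Hypothetical proportion of "visual first" responses at each SOA
p_visual_first = np.array([0.05, 0.12, 0.30, 0.52, 0.74, 0.90, 0.96])

def psychometric(x, mu, sigma):
    """Cumulative Gaussian: P('visual first') as a function of SOA."""
    return norm.cdf(x, loc=mu, scale=sigma)

# Fit the psychometric function to the observed response proportions
(mu, sigma), _ = curve_fit(psychometric, soa, p_visual_first, p0=[0.0, 100.0])

pss = mu                       # point of subjective simultaneity: the 50% point
jnd = sigma * norm.ppf(0.75)   # half the 25%-75% interval of the fitted curve

print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

Under this analysis, the abstract's finding that an angry expression increased the JND would correspond to a shallower psychometric slope (a larger fitted sigma) in the angry condition, with the fitted mu (PSS) unchanged across expressions.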