August 2012
Volume 12, Issue 9
Vision Sciences Society Annual Meeting Abstract
Emotional vs. Linguistic Salience in Audiovisual Integration
Author Affiliations
  • Theresa Cook
    University of California, Riverside
  • James Dias
    University of California, Riverside
  • Lawrence Rosenblum
    University of California, Riverside
Journal of Vision August 2012, Vol. 12, 961.
Evidence suggests that humans perceptually prioritize and crossmodally integrate both emotional and linguistic information from speaking faces. We compared the perceptual salience of emotional and linguistic speech information in two experiments involving crossmodal congruence judgments. We recorded the audiovisual speech of two actors saying five word-pairs (selected for semantic neutrality and comparable lexical frequency) neutrally and in three emotions (happy, angry, sad). Each member of a word-pair differed from the other on one visible phoneme (e.g., "camper" vs. "pamper"). In Experiment I, participants judged which of two stimuli was more audiovisually congruent. On each trial, one stimulus was fully audiovisually congruent [AC] (e.g., participants heard "camper" in a happy voice and saw "camper" articulated with a happy facial expression), while the other stimulus was either (a) emotionally congruent and linguistically incongruent [EC] or (b) linguistically congruent and emotionally incongruent [LC]. Participants selected AC stimuli as more audiovisually congruent significantly above chance, t(19)=9.292, p<.001, and individuals performed as well on AC/EC trials as on AC/LC trials, t(19)=1.295, p=.211.

In Experiment II, half the trials compared one AC stimulus against a stimulus that was both linguistically and emotionally incongruent [NC], while the other half compared one EC stimulus against one LC stimulus. Participants again selected the AC stimuli significantly above chance, t(19)=45.366, p<.001, and exhibited distinct, previously undemonstrated intersubject preferences for selecting either emotionally, n=11, M=73.6%, SD=11.1, or linguistically, n=9, M=75.1%, SD=10.3, congruent stimuli as the best audiovisual match.
Though Experiment I demonstrated that the two types of stimuli are equally discriminable, Experiment II revealed strong individual differences in the perceptual prioritization of emotional and linguistic information when choosing the best crossmodal match. These results suggest that, in audiovisual integration, the perceptual salience of emotional and linguistic speech information varies among perceivers.

Meeting abstract presented at VSS 2012

