Vision Sciences Society Annual Meeting Abstract
July 2013, Volume 13, Issue 9
Visual and Cognitive Predictors of Speech Intelligibility in Noisy Listening Conditions
Author Affiliations
  • Samantha Jansen
    Wichita State University
  • Evan Palmer
    Wichita State University
  • Alex Chaparro
    Wichita State University
Journal of Vision July 2013, Vol. 13, 1075. doi: https://doi.org/10.1167/13.9.1075
Research has demonstrated that visual and auditory cues interact to improve speech intelligibility under noisy listening conditions. For instance, recent findings from our lab demonstrated that simulated cataracts hinder listeners' ability to use visual cues to disambiguate speech. The purpose of this study was to determine which measures of visual, auditory, and cognitive performance predict participants' ability to disambiguate spoken messages in the presence of spoken background noise. We tested 30 young adults with normal visual acuity and hearing sensitivity. Participants completed a battery of visual tests (monocular/binocular acuity and contrast sensitivity), auditory tests (left/right ear pure-tone thresholds; directed/divided versions of the Dichotic Sentence Identification Test), and cognitive tests (Digit Symbol Substitution Test [DSST]; Trail Making Test versions A and B [TMT-A and TMT-B]). Speech intelligibility was tested under two conditions: auditory only, with no visual input, and auditory-visual, with normal viewing. Video recordings of Speech in Noise sentences spoken by a talker were presented against background babble at a signal-to-noise ratio of -13 dB. Participants wrote down what they heard the talker say, and responses were scored as the percentage of key words correctly reported. Regression analyses showed that the best predictors of speech intelligibility were contrast sensitivity and measures of executive functioning, including the DSST and TMT-B. The contrast sensitivity result is consistent with earlier findings that contrast sensitivity, but not acuity, is important for speech intelligibility. The poor predictive power of the auditory measures is likely due to the restricted range of scores on these measures, since only young adults with normal hearing were tested.
These results suggest that audiovisual speech integration depends on both low-level sensory information and high-level cognitive processes, particularly those associated with executive functioning.
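
For context on the listening conditions (a note not in the original abstract): a signal-to-noise ratio of -13 dB means the background babble carried roughly twenty times the power of the target speech, since

\[
\mathrm{SNR}_{\mathrm{dB}} = 10\log_{10}\frac{P_{\text{speech}}}{P_{\text{babble}}} = -13
\;\Rightarrow\;
\frac{P_{\text{speech}}}{P_{\text{babble}}} = 10^{-1.3} \approx 0.05,
\]

i.e., a substantially adverse listening condition in which visual cues can contribute meaningfully to intelligibility.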

Meeting abstract presented at VSS 2013

