Vision Sciences Society Annual Meeting Abstract | August 2012
An invisible face facilitates speech perception
Author Affiliations
  • Emmanuel Guzman-Martinez
    Department of Psychology and Interdepartmental Neuroscience Program, Northwestern University, Evanston, IL, U.S.A.
  • Laura Ortega
    Department of Psychology and Interdepartmental Neuroscience Program, Northwestern University, Evanston, IL, U.S.A.
  • Marcia Grabowecky
    Department of Psychology and Interdepartmental Neuroscience Program, Northwestern University, Evanston, IL, U.S.A.
  • Satoru Suzuki
    Department of Psychology and Interdepartmental Neuroscience Program, Northwestern University, Evanston, IL, U.S.A.
Journal of Vision August 2012, Vol. 12, 613. https://doi.org/10.1167/12.9.613
Abstract

It has been shown that looking at moving lips facilitates speech perception. This facilitation, however, occurs only when the mouth is attended. Recent evidence suggests that visual attention and awareness are mediated by separate mechanisms. We therefore investigated whether visual facilitation of speech perception required awareness of lip movements. We used a word-categorization task in which participants listened to spoken words and determined as quickly and accurately as possible whether each word named a tool. While listening to the words, participants saw the speaker's face articulating either the spoken word (the synchronous condition) or a different word of the same length (the asynchronous condition). Critically, the speaker's face was either fully visible (the aware trials) or suppressed from awareness using Continuous Flash Suppression, in which the face was presented to one eye and a strong dynamic mask was presented to the other eye (the suppressed trials). The aware and suppressed trials were randomly intermixed. A dot-detection task ensured that participants attended to the mouth region whether the face was visible or suppressed. The small fraction of suppressed trials on which the face broke through into awareness was removed from the analysis. On the aware trials, responses to the tool targets were no faster with synchronous than with asynchronous lip movements, probably because participants discarded the visual information, which was inconsistent with the auditory information on 50% of the trials. On the suppressed trials, however, responses to the tool targets were significantly faster with synchronous than with asynchronous lip movements. These results demonstrate that even when a random dynamic mask renders a face invisible, lip movements are still unconsciously processed by the visual system with sufficiently high temporal accuracy to facilitate speech perception based on crossmodal synchrony.

Meeting abstract presented at VSS 2012
