Vision Sciences Society Annual Meeting Abstract | September 2016
Silent lip reading generates speech signals in auditory cortex
Author Affiliations
  • L. Jacob Zweig
    Department of Psychology, Northwestern University
  • Marcia Grabowecky
    Department of Psychology, Northwestern University
  • Satoru Suzuki
    Department of Psychology, Northwestern University
  • Vernon Towle
    Department of Neurology, University of Chicago
  • James Tao
    Department of Neurology, University of Chicago
  • Shasha Wu
    Department of Neurology, University of Chicago
  • David Brang
    Department of Psychology, Northwestern University
Journal of Vision September 2016, Vol. 16, 463. https://doi.org/10.1167/16.12.463
Abstract

Visual lip movements improve auditory speech perception in noisy environments (e.g., McGettigan et al., 2012) and crossmodally activate auditory cortex (e.g., Pekkola et al., 2005). What specific information about visual lip movements is relayed to auditory cortex? We investigated this question by recording electrocorticographic (ECoG) activity from electrodes implanted within primary/secondary auditory cortex in epilepsy patients. We presented four representative auditory phonemes (/ba/, /da/, /ta/, and /tha/) or the corresponding lip movements (visemes) articulating these phonemes. We constructed an ensemble of deep convolutional neural networks to determine whether the identity of the four phonemes (from auditory trials) and visemes (from visual trials) could be decoded from auditory cortical activity. Reliable decoding of viseme identity would provide evidence that visual lip-movement information is coded in auditory cortex. We first verified that auditory phoneme identity was reliably decoded with high accuracy from auditory-evoked activity in auditory cortex. Critically, viseme identity was also reliably decoded from visual-evoked activity in both the left and right auditory cortices, indicating that visemes generate phoneme-specific activity in auditory cortex in the absence of any sound. Furthermore, the classifier trained to identify visemes decoded phonemes with comparable accuracy, indicating that the patterns of activity in auditory cortex evoked by visemes (from visual trials) were similar to those evoked by phonemes (from auditory trials). These results suggest that visual lip movements crossmodally activate auditory speech processing in a content-specific manner.

Meeting abstract presented at VSS 2016
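
The decoding logic described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' analysis pipeline: the electrode count, epoch length, network architecture, training settings, and placeholder random data are all illustrative assumptions. It shows the general idea of training an ensemble of small convolutional networks on trials from one modality (visemes) and then testing the ensemble both within modality and across modality (phonemes), mirroring the viseme-to-phoneme transfer analysis.

# Hedged sketch: ensemble of small 1D CNNs decoding phoneme/viseme identity
# from epoched ECoG trials. Shapes, hyperparameters, and the placeholder data
# are assumptions for illustration, not the authors' implementation.
import numpy as np
import torch
import torch.nn as nn

N_CLASSES = 4          # /ba/, /da/, /ta/, /tha/ (and the matching visemes)
N_CHANNELS = 16        # assumed number of auditory-cortex electrodes
N_TIMEPOINTS = 500     # assumed samples per trial epoch


class SmallECoGCNN(nn.Module):
    """One ensemble member: two temporal convolutions plus a linear readout."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, N_CLASSES)

    def forward(self, x):                 # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))


def train_model(model, X, y, epochs=30, lr=1e-3):
    """Fit one ensemble member with full-batch cross-entropy training."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return model


def ensemble_predict(models, X):
    """Average softmax outputs across ensemble members, then take the argmax."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(X), dim=1) for m in models]).mean(0)
    return probs.argmax(dim=1)


def decode_accuracy(models, X, y):
    return (ensemble_predict(models, X) == y).float().mean().item()


if __name__ == "__main__":
    # Random placeholder trials stand in for epoched auditory-cortex ECoG data.
    rng = np.random.default_rng(0)

    def fake_trials(n):
        X = torch.tensor(rng.standard_normal((n, N_CHANNELS, N_TIMEPOINTS)),
                         dtype=torch.float32)
        y = torch.tensor(rng.integers(0, N_CLASSES, n), dtype=torch.long)
        return X, y

    X_aud, y_aud = fake_trials(160)   # auditory-phoneme trials
    X_vis, y_vis = fake_trials(160)   # silent-viseme trials

    # Train an ensemble (differing only in random initialization) on visemes,
    # then test within modality and across modality, as in the transfer analysis.
    ensemble = [train_model(SmallECoGCNN(), X_vis, y_vis) for _ in range(5)]
    print("viseme decoding accuracy:", decode_accuracy(ensemble, X_vis, y_vis))
    print("viseme-to-phoneme transfer accuracy:", decode_accuracy(ensemble, X_aud, y_aud))

In a real analysis the ensemble would be evaluated with cross-validation on held-out trials rather than on the training data; the sketch omits this to keep the transfer logic in focus.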
