August 2012
Volume 12, Issue 9
Vision Sciences Society Annual Meeting Abstract
Visual influences on selective adaptation in speech perception
Author Affiliations
  • James W. Dias
    University of California, Riverside
  • Theresa C. Cook
    University of California, Riverside
  • Lawrence D. Rosenblum
    University of California, Riverside
Journal of Vision August 2012, Vol. 12, 707.

Previous research testing audiovisual speech stimuli (videos of faces uttering syllables) suggests that selective adaptation in speech perception is purely auditory in nature (Roberts & Summerfield, 1981; Saldaña & Rosenblum, 1994; but see Bertelson, Vroomen, & de Gelder, 2003). However, this research tested whether audio, visual, or audiovisual adapters influence perception only of auditory phonemes that vary along a continuum. In the current experiment, we tested adapter influences on audiovisual stimuli that vary along a visual continuum. This test continuum consisted of nine audio-/ba/-visual-/va/ McGurk-effect stimuli (McGurk & MacDonald, 1976) ranging in clarity of visual information. The continuum was created by placing a Gaussian blur with a radius of 6, 9, 12, 15, 18, 21, 24, 27, or 30 pixels over the mouth of a speaker articulating /va/. Prior to any adapter exposure, participants consistently perceived tokens from this visual continuum, dubbed with audio /ba/, as /va/ at radii of 6 to 9 pixels and as /ba/ at radii of 27 to 30 pixels, with middle-clarity stimuli perceived sometimes as /va/ and sometimes as /ba/. Participants first exposed to a clear audio-/ba/-visual-/va/ adapter perceived more continuum stimuli as /ba/, F(1,10)=8.219, p<.05, η²=.451. Participants also perceived more continuum stimuli as /ba/ following exposure to an audio-/va/-visual-/va/ adapter, F(1,8)=18.232, p<.01, η²=.695, and to a visual-alone /va/ adapter, F(1,9)=11.704, p<.01, η²=.565. These results suggest that visual information can be selectively adapted in audiovisual speech perception. However, participants exposed to an audio-alone /va/ adapter did not exhibit this influence, F(1,9)=.484, p=.504, η²=.051, and there was no difference in strength between the audio-/va/-visual-/va/ and visual-alone /va/ adapters, MD=.019, p=.800.
Overall, the results suggest that, at least for blurred mouth stimuli, visual speech perception is susceptible to selective adaptation to salient visual information, but not to auditory or cross-modal information.

Meeting abstract presented at VSS 2012

