Vision Sciences Society Annual Meeting Abstract  |   August 2014
McGurk effect appears after learning syllables with non-facial motions.
Author Affiliations
  • Miyuki G. Kamachi
    Faculty of Informatics, Kogakuin University
  • Kazuki Ohkubo
    Faculty of Informatics, Kogakuin University
Journal of Vision August 2014, Vol.14, 1256. doi:https://doi.org/10.1167/14.10.1256
Abstract

The audiovisual (AV) combination of a 'bilabial' sound (e.g. the phoneme /ba/) and a 'palatal' mouth movement (e.g. the visual articulation of /ga/) typically yields a 'fusion response', in which a new phoneme different from either original is perceived (e.g. /da/). When the AV combination is reversed, participants typically perceive either the original auditory or the original visual phoneme, and sometimes report a mixed phoneme, the 'combination response' (e.g. /bga/ or /gba/) (MacDonald and McGurk, 1978; Omata and Mogi, 2008). Moreover, the fusion response occurred at a higher proportion than the combination response (MacDonald and McGurk, 1978). Both types of response have been thought to result from innate or experience-based learning of facial speech motions. Here we report an experimental study examining whether McGurk effects arise only with facial movements. In a learning session of 1600 trials, participants viewed an object (a gray cube) rotating in a specific direction (leftward) paired with auditory /pa/ sounds (from 5 speakers), and the same object rotating in the other direction (rightward) paired with auditory /ka/ sounds. The rotations also randomly included weak depth rotation, but the object always rotated clearly leftward or rightward in the frontoparallel plane. Participants were asked to identify each auditory stimulus by selecting one of the speech sounds /pa/, /ka/, and /ta/. In the following test session, the same visual stimuli were presented in random combination with the same or different types of sound, pronounced by the same speakers as in the learning session. The response proportions of the learning group were compared with those of a no-learning group. The fusion-type effect was found only in the learning group, showing that non-facial motions can be learned and integrated with auditory speech independently of face-specific motion.

Meeting abstract presented at VSS 2014
