James W. Dias, Theresa C. Cook, Josh J. Dorsi, Dominique C. Simmons, Lawrence D. Rosenblum; Influences of response delay on unconscious imitation of visual speech. Journal of Vision 2014;14(10):441. doi: https://doi.org/10.1167/14.10.441.
Human perceivers unconsciously imitate the subtle articulatory characteristics of perceived speech. This phonetic convergence has been found to manifest during live conversational interactions (e.g., Pardo, 2006) and when shadowing pre-recorded speech (e.g., Goldinger, 1998). In the shadowing paradigm, participants say aloud the speech they perceive spoken by a pre-recorded talker. Perceivers converge along acoustic speech characteristics when shadowing both auditory (heard) and visual (lipread) speech (e.g., Miller, Sanchez, & Rosenblum, 2010), suggesting that the information to which perceivers converge may take a common form across sensory modalities. Phonetic convergence to auditory speech is known to decrease when perceivers are required to delay their shadowed responses, suggesting that this information is subject to the influence of stored memory representations encroaching upon an exemplar maintained within working memory (Goldinger, 1998). If the information to which perceivers converge when shadowing visual speech is processed within working memory similarly to the information to which they converge when shadowing auditory speech, then delaying shadowed responses to visual speech should decrease phonetic convergence relative to immediate shadowing.
In the current investigation, 31 undergraduates (16 male) each shadowed 1 of 4 pre-recorded talkers (2 male). Each participant shadowed a block of 80 auditory utterances and a block of 80 visual utterances, with each shadowed response randomly delayed by 0, 1, 2, 3, or 4 seconds. Phonetic convergence to shadowed auditory speech decreased as shadowing delay increased, F(1, 123) = 7.816, p < .01, replicating previous findings (Goldinger, 1998). However, phonetic convergence to visual speech increased as shadowing delay increased, F(1, 123) = 4.823, p < .05.
The results suggest that, in contrast to auditory speech, the longer visual speech information is maintained within working memory, the more similar the subsequent speech production will be to the perceived speech utterance.
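The opposing linear effects of delay reported above can be illustrated with a small sketch. This is not the authors' analysis code: the convergence scores below are invented, and a simple ordinary-least-squares slope stands in for the reported F-tests, purely to show what a negative (auditory) versus positive (visual) delay trend looks like.

```python
# Hypothetical illustration of the reported delay effects.
# All numbers are synthetic; only the direction of each trend
# reflects the abstract's findings.

def ls_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

delays = [0, 1, 2, 3, 4]  # shadowing delay in seconds

# Invented mean convergence scores (e.g., raters' AXB similarity judgments):
auditory = [0.68, 0.64, 0.61, 0.57, 0.55]  # convergence falls with delay
visual = [0.52, 0.55, 0.57, 0.60, 0.62]    # convergence rises with delay

print(ls_slope(delays, auditory))  # negative slope: less convergence at longer delays
print(ls_slope(delays, visual))    # positive slope: more convergence at longer delays
```

A negative slope for the auditory condition and a positive slope for the visual condition mirror the dissociation the abstract reports.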
Meeting abstract presented at VSS 2014