Abstract
Previous research has shown that infants viewing speaking faces shift their visual fixation from the speaker’s eyes to the speaker’s mouth between 4 and 8 months of age. This shift is theorized to facilitate language learning by exploiting the audiovisual redundancy in the speech signal. In the current study, we asked whether a similar gaze pattern would be present in adults encoding speech in an unfamiliar language. We presented English-speaking, monolingual adults with videos of a female speaker reciting short sentences in English and in a non-native language (either Spanish or Icelandic), in separate blocks. To ensure that participants were encoding the sentences, we had them perform a simple task: on each trial, they viewed video clips of two different sentences in the same language, shown in sequence, followed by an audio-only recording of one of those sentences, and had to judge whether the audio-only sentence matched the first or the second video. We found that participants gazed significantly longer at the speaker’s mouth during the unfamiliar-language blocks than during the native-language blocks. These findings demonstrate that adults encoding speech in an unfamiliar language exhibit gaze patterns similar, in some respects, to those of infants first learning their native language: namely, enhanced allocation of attention to the mouth region. This suggests that people rely on multisensory redundancy when encoding unfamiliar speech signals.
Meeting abstract presented at VSS 2013