July 2013, Volume 13, Issue 9
Vision Sciences Society Annual Meeting Abstract
Eye fixations during encoding of familiar and unfamiliar language
Authors and Affiliations
  • Lauren Mavica
    Florida Atlantic University
  • Elan Barenholtz
    Florida Atlantic University
  • David Lewkowicz
    Florida Atlantic University
Journal of Vision July 2013, Vol.13, 1081. doi:10.1167/13.9.1081

      © ARVO (1962-2015); The Authors (2016-present)


Previous research has shown that infants viewing speaking faces shift their visual fixation from the speaker’s eyes to the speaker’s mouth between 4 and 8 months of age. This shift is theorized to facilitate language learning by exploiting the audiovisual redundancy in the speech signal. In the current study, we asked whether adults show a similar pattern when encoding speech in an unfamiliar language. We presented English-speaking, monolingual adults with videos of a female speaker reciting short sentences in English and in a non-native language (either Spanish or Icelandic), in separate blocks. To ensure that participants were encoding the sentences, we had them perform a simple task: on each trial, they viewed video clips of two different sentences in the same language, shown in sequence, followed by an audio-only recording of one of those sentences, and had to choose whether the audio-only sentence matched the first or the second video. Participants gazed significantly longer at the speaker’s mouth during the unfamiliar-language blocks than during the native-language blocks. These findings demonstrate that adults encoding speech in an unfamiliar language exhibit gaze patterns that are similar, in some respects, to those of infants first learning their native language; that is, they allocate enhanced attention to the mouth region. This suggests that people rely on multisensory redundancy when encoding unfamiliar speech signals.

Meeting abstract presented at VSS 2013
