Nicole Perez, Michael Kleiman, Elan Barenholtz; Comprehension of an audio versus an audiovisual lecture at 50% time-compression. Journal of Vision 2018;18(10):1140. doi: 10.1167/18.10.1140.
© ARVO (1962-2015); The Authors (2016-present)
Time-compression (speeding up an audio-visual presentation without an accompanying change in pitch) is a heavily used technique when viewing video lectures because it allows the same content to be viewed in a shorter duration. Previous studies have demonstrated that comprehension suffers when a lecture is compressed by 50% (i.e., played at twice normal speed) or more. However, these findings have only considered multimedia recordings with text and figures. The visual properties of a speaker (present in an audiovisual, rather than audio-only, stimulus) have been shown to enhance speech comprehension under other suboptimal conditions, such as noise, low volume, and unfamiliar language. Here, we investigated whether the presentation of a speaker's face benefits comprehension of a time-compressed video. Participants listened to both original and 50% compressed lectures in both audio-only and audiovisual conditions (with a different video used for each of the four combinations of conditions). Eye movements were tracked during the audiovisual lectures. Afterwards, participants were tested on their knowledge of the content of the lectures using a questionnaire. Results showed a main effect of speed, with higher comprehension scores in the uncompressed conditions. In addition, under compression, comprehension scores were significantly better in the audiovisual condition than in the audio-only condition. Eye-fixation analyses revealed that participants in the compressed condition looked less at the eyes and more at the nose, consistent with other studies finding more centralized fixations under suboptimal auditory conditions. Overall, these results suggest that audiovisual redundancy provides a benefit in the encoding of time-compressed speech.
Meeting abstract presented at VSS 2018