Bruce C Hansen, Spencer D Kelly, Pearce Decker, Rachel Weinstein, Stewart Lanphier; Visual motion energy signal usage in gesture and speech integration: The role of semantic categorization and task demands. Journal of Vision 2014;14(10):443. doi: https://doi.org/10.1167/14.10.443.
Hand gestures pervasively accompany speech and greatly influence the speed and memory of speech comprehension, leading some to claim that gesture and speech form an "integrated system" in language comprehension (e.g., Kelly, Özyürek, & Maris, 2010). Furthermore, gestures' facilitation of speech comprehension depends on which early visual spatial frequency (SF) channels (e.g., lower vs. higher SFs) carry the strongest motion energy signal (Kelly, Hansen, & Clark, 2012), and is therefore not exclusively tied to any particular SF band. Motivated by the latter finding, the present study draws on the flexible scale usage theory (e.g., Morrison & Schyns, 2001) to determine whether task-related semantic constraints can 'push' observers to differentially rely on motion energy in either low or high SFs. Stimuli consisted of 1-sec co-speech gesture video clips. The spoken component conveyed either an action concept or an object concept that was either congruent or incongruent with the accompanying gesture. Participants engaged in one of two reaction time (RT) tasks that required either: 1) explicitly judging the congruency of the co-speech gestures, or 2) categorizing the spoken component of the co-speech gestures as "action" or "object". The results showed that when the task required attending to both speech and gesture (i.e., the congruency judgment task), performance efficiency was regulated by high SF motion energy for object-oriented gestures and by low SF motion energy for action-oriented gestures. No such relationship was observed when participants were not required to attend to gesture (i.e., the categorization task), although the RTs suggest that gestures did interfere with processing time on incongruent trials. In summary, co-speech gesture semantics can dictate SF motion energy utility, but only when gesture usage is obligatory.
Curiously, when attention is directed away from vision, gestures still influence speech comprehension speed, suggesting that a visual signal other than SF motion energy is contributing to gesture-speech integration.
Meeting abstract presented at VSS 2014