Vision Sciences Society Annual Meeting Abstract | August 2014
Visual motion energy signal usage in gesture and speech integration: The role of semantic categorization and task demands
Author Affiliations
  • Bruce C Hansen
    Department of Psychology and Neuroscience Program, Colgate University, Hamilton, NY, USA
  • Spencer D Kelly
    Department of Psychology and Neuroscience Program, Colgate University, Hamilton, NY, USA
  • Pearce Decker
    Department of Psychology and Neuroscience Program, Colgate University, Hamilton, NY, USA
  • Rachel Weinstein
    Department of Psychology and Neuroscience Program, Colgate University, Hamilton, NY, USA
  • Stewart Lanphier
    Department of Psychology and Neuroscience Program, Colgate University, Hamilton, NY, USA
Journal of Vision August 2014, Vol.14, 443. doi:10.1167/14.10.443
Abstract

Hand gestures pervasively accompany speech and greatly influence the speed and memory of speech comprehension, leading some to claim that gesture and speech form an "integrated system" in language comprehension (e.g., Kelly, Özyürek, & Maris, 2010). Furthermore, gestures' facilitation of speech comprehension depends on which early visual spatial frequency (SF) channels (e.g., lower vs. higher SFs) carry the strongest motion energy signal (Kelly, Hansen, & Clark, 2012), and is therefore not exclusively tied to any particular SF band. Motivated by the latter finding, the present study draws on flexible scale usage theory (e.g., Morrison & Schyns, 2001) to determine whether task-related semantic constraints can 'push' observers to rely differentially on motion energy in either low or high SFs. Stimuli consisted of 1-s co-speech gesture video clips. The spoken component conveyed either an action concept or an object concept that was either congruent or incongruent with the accompanying gesture. Participants engaged in one of two reaction time (RT) tasks that required either: 1) an explicit judgment of the congruency of the co-speech gestures, or 2) a categorization of the spoken component of the co-speech gestures as "action" or "object". The results showed that when the task required attending to both speech and gesture (i.e., the congruency judgment task), performance efficiency was regulated by high SF motion energy for object-oriented gestures and by low SF motion energy for action-oriented gestures. No such relationship was observed when participants were not required to attend to gesture (i.e., the categorization task), although the RTs suggest that gestures did interfere with processing time on incongruent trials. In summary, co-speech gesture semantics can dictate SF motion energy utility, but only when gesture usage is obligatory. Curiously, when attention is directed away from vision, gestures still influence speech comprehension speed, suggesting that a visual signal other than SF motion energy contributes to gesture-speech integration.

Meeting abstract presented at VSS 2014
