Vision Sciences Society Annual Meeting Abstract  |   May 2008
Neural model for the visual recognition of hand actions
Author Affiliations
  • Martin Giese
    Dept. of Cognitive Neurology, Hertie Inst. f. Clinical Brain Research, Tuebingen, Germany, and School of Psychology, Univ. of Bangor, UK
  • Falk Fleischer
    Dept. of Cognitive Neurology, Hertie Inst. f. Clinical Brain Research, Tuebingen, Germany
  • Antonino Casile
    Dept. of Cognitive Neurology, Hertie Inst. f. Clinical Brain Research, Tuebingen, Germany
Journal of Vision May 2008, Vol.8, 53. doi:10.1167/8.6.53
Abstract

The visual recognition of goal-directed movements is crucial for the learning of actions, and possibly for understanding the intentions and goals of others. The discovery of mirror neurons has stimulated a vast amount of research into possible links between action perception and action execution (Rizzolatti & Craighero, 2004). However, it remains largely unknown to what extent this putative visuo-motor interaction operates during the visual perception of actions, and which relevant computational functions are accomplished by purely visual processing.

Here, we present a neurophysiologically inspired model for the recognition of hand movements, demonstrating that a substantial degree of performance can be achieved by the analysis of spatio-temporal visual features. The model integrates a hierarchical neural architecture for extracting relevant form and motion features with simple recurrent neural circuits that realize temporal sequence selectivity. Optimized features are learned using a trace learning rule that eliminates features which do not contribute to correct classification (Serre et al., 2007). As a novel computational function, the model implements a plausible mechanism that combines spatial information about the goal object and its affordance with the specific posture, position, and orientation of the effector. The model is evaluated on video sequences of monkey and human grasping actions.
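The trace learning rule mentioned above can be illustrated with a minimal sketch: a Hebbian weight update gated by a temporally low-pass-filtered activity trace, so that features responding consistently across successive frames of an action sequence are strengthened. This is a generic Földiák-style trace rule, not the published implementation; all names and parameter values below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of a trace learning rule (Foldiak-style).
# Parameter values and dimensions are assumptions, not those of the model.
rng = np.random.default_rng(0)

n_inputs, n_features = 50, 10
W = rng.normal(scale=0.1, size=(n_features, n_inputs))

eta = 0.05   # learning rate (assumed)
lam = 0.8    # trace decay: how much past activity persists (assumed)
trace = np.zeros(n_features)

def step(x, W, trace):
    """One learning step: update the activity trace, then the weights."""
    y = W @ x                             # linear feature responses
    trace = lam * trace + (1 - lam) * y   # low-pass-filtered activity trace
    # Hebbian update gated by the trace: features that respond
    # consistently across successive frames are strengthened.
    W = W + eta * np.outer(trace, x)
    # Normalize rows to keep the weights bounded.
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    return W, trace

# Present a short "sequence" of temporally correlated input frames.
base = rng.normal(size=n_inputs)
for t in range(20):
    x = base + 0.1 * rng.normal(size=n_inputs)
    W, trace = step(x, W, trace)
```

In this toy setting, features whose responses persist across neighboring frames accumulate the largest trace-gated updates, which is the property the abstract attributes to the feature-selection step.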

We demonstrate that well-established, physiologically plausible mechanisms account for important aspects of visual action recognition. Specifically, instead of relying on explicit 3D representations of objects and actions, the proposed model realizes predictions over time based on learned pattern sequences arising in the visual input. Our results complement those of existing models (Oztop et al., 2006) and motivate a more detailed analysis of the complementary contributions of visual pattern analysis and motor representations to the visual recognition of imitable actions.
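One common way to obtain temporal sequence selectivity and prediction of the kind described above is through asymmetric lateral connections between snapshot-selective units: each unit pre-activates its successor, so activity builds up along the chain only when frames arrive in the learned order. The sketch below illustrates this generic mechanism; the network size, decay, and connection strength are assumed values, not those of the published model.

```python
import numpy as np

# Illustrative sketch of sequence selectivity via asymmetric lateral
# connections between snapshot-selective units. All parameters are
# assumptions chosen for clarity.
n = 5                       # number of snapshot units in the sequence
W = np.zeros((n, n))
for i in range(n - 1):
    W[i + 1, i] = 0.5       # asymmetric link: unit i excites unit i+1

def run(frame_order, decay=0.5):
    """Drive the network with frames in a given order; return total activity."""
    u = np.zeros(n)
    total = 0.0
    for f in frame_order:
        inp = np.zeros(n)
        inp[f] = 1.0                  # feedforward drive from frame f
        u = decay * u + inp + W @ u   # leaky integration + lateral input
        total += u.sum()
    return total

forward = run(list(range(n)))            # frames in the learned order
backward = run(list(range(n - 1, -1, -1)))  # frames in reversed order
```

Because the lateral connections only propagate activity "forward" along the chain, the summed response is larger for the learned frame order than for the reversed one, giving the network its sequence selectivity.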

Giese, M., Fleischer, F., & Casile, A. (2008). Neural model for the visual recognition of hand actions [Abstract]. Journal of Vision, 8(6):53, 53a, http://journalofvision.org/8/6/53/, doi:10.1167/8.6.53. [CrossRef]
Footnotes
Supported by the DFG, the Volkswagenstiftung, and the Hermann und Lilly Schilling Foundation.