July 2013
Volume 13, Issue 9
Vision Sciences Society Annual Meeting Abstract  |   July 2013
Neural model for the encoding of dynamic faces in primate cortex
Author Affiliations
  • Martin A. Giese
    Cognit. Neurology, CIN, HIH, University Clinic Tuebingen; Section Computational Sensomotorics
  • Girija Ravishankar
    Cognit. Neurology, CIN, HIH, University Clinic Tuebingen; Section Computational Sensomotorics
  • Gregor Schulz
    Cognit. Neurology, CIN, HIH, University Clinic Tuebingen
  • Uwe J. Ilg
    Cognit. Neurology, CIN, HIH, University Clinic Tuebingen
Journal of Vision July 2013, Vol.13, 181. doi:10.1167/13.9.181
Abstract

Natural facial expressions are essentially dynamic. However, little research has investigated the neural mechanisms underlying the processing of dynamic faces. Exploiting well-established as well as novel physiologically plausible neural mechanisms, we try to reproduce data on the neural processing of dynamic faces, exploring different neural encoding principles that have been shown to be relevant for the neural encoding of shapes and faces.

METHODS: We devised alternative, physiologically plausible hierarchical neural models for the recognition of dynamic faces that simulate the properties of neurons in face-selective regions, such as the STS or area IT. The models are based on a previous physiologically inspired model of the processing of static faces [Giese & Leopold, 2005, Neurocomputing 65-66, 93-101]. These models exploit learned hierarchies of feature detectors, where the mid-level features are optimized by unsupervised learning. On the highest hierarchy level we implemented different possibilities for the encoding of dynamic faces, exploiting norm-referenced as well as example-based encoding, together with dynamic neural mechanisms that integrate information over time. The models were tested on video databases of monkey as well as human facial expressions.

RESULTS: Models based on both coding principles, example-based as well as norm-referenced encoding, work successfully and correctly classify monkey and human expressions from real videos. Each makes specific, distinct predictions at the level of single-cell activity in dynamic face-selective regions. The norm-referenced mechanism was more robust in some cases.

CONCLUSIONS: Simple physiologically plausible neural circuits can account for the recognition of dynamic faces. Data from single-cell recordings will make it possible to decide between the different models.
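The two encoding principles compared in the abstract can be illustrated with a minimal sketch. The feature vectors, exemplar expressions, tuning widths, and the simple summation over frames below are all hypothetical stand-ins, not the model's actual hierarchy or dynamics: a norm-referenced unit responds in proportion to a face's deviation from the norm (average) face along a preferred direction, while an example-based unit is a radial-basis-function detector centred on a stored exemplar; a dynamic face is then a trajectory of feature vectors whose unit responses are integrated over time.

```python
import numpy as np

D = 10                  # dimensionality of a hypothetical face-feature space
NORM = np.zeros(D)      # the norm ("average") face, placed at the origin

# Hypothetical exemplar expressions as feature vectors; in the actual model
# these would be outputs of the learned mid-level feature hierarchy.
EXEMPLARS = {
    "expression_A": np.array([1., 2., 0., -1., 1., 0., 2., -1., 1., 0.]),
    "expression_B": np.array([0., -1., 2., 1., 0., 2., -1., 1., 0., 2.]),
}

def norm_referenced(face, exemplar):
    """Rectified linear tuning to direction and eccentricity of the face
    relative to the norm face (norm-referenced encoding)."""
    direction = (exemplar - NORM) / np.linalg.norm(exemplar - NORM)
    return max(0.0, (face - NORM) @ direction)

def example_based(face, exemplar, sigma=1.5):
    """Radial-basis-function unit centred on a stored exemplar
    (example-based encoding); sigma is an assumed tuning width."""
    return np.exp(-np.sum((face - exemplar) ** 2) / (2.0 * sigma**2))

def classify(trajectory, unit):
    """Crude temporal integration: sum unit responses over the frames of a
    dynamic face and report the exemplar with the most accumulated activity."""
    summed = {
        label: sum(unit(frame, ex) for frame in trajectory)
        for label, ex in EXEMPLARS.items()
    }
    return max(summed, key=summed.get)

# A "dynamic face": a morph from the neutral norm face toward expression A.
trajectory = [t * EXEMPLARS["expression_A"] for t in np.linspace(0.0, 1.0, 8)]

print(classify(trajectory, norm_referenced))   # expression_A
print(classify(trajectory, example_based))     # expression_A
```

Both units classify the morph correctly, consistent with the abstract's finding that either principle can work; they diverge in single-cell predictions, e.g. the norm-referenced unit's response grows monotonically with distance from the norm, whereas the example-based unit's response peaks at its stored exemplar and falls off beyond it.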

Meeting abstract presented at VSS 2013
