Martin A. Giese, Girija Ravishankar, Gregor Schulz, Uwe J. Ilg; Neural model for the encoding of dynamic faces in primate cortex. Journal of Vision 2013;13(9):181. doi: 10.1167/13.9.181.
Natural facial expressions are essentially dynamic. However, little research has investigated the neural mechanisms underlying the processing of dynamic faces. Exploiting well-established as well as novel physiologically plausible neural mechanisms, we try to reproduce data on the neural processing of dynamic faces, exploring different neural encoding principles that have been shown to be relevant for the neural encoding of shapes and faces.

METHODS: We devised alternative, physiologically plausible hierarchical neural models for the recognition of dynamic faces that simulate the properties of neurons in face-selective regions, such as the STS or area IT. The models are based on a previous physiologically inspired model of the processing of static faces [Giese & Leopold, 2005, Neurocomputing 65-66, 93-101]. These models exploit learned hierarchies of feature detectors, where mid-level features are optimized by unsupervised learning. On the highest hierarchy level we implemented different possibilities for the encoding of dynamic faces, exploiting norm-referenced as well as example-based encoding, together with dynamic neural mechanisms that integrate information over time. The models were tested on video databases with monkey as well as human facial expressions.

RESULTS: Models based on both coding principles, example-based as well as norm-referenced encoding, work successfully and correctly classify monkey and human expressions from real videos. Each makes specific, unique predictions at the level of single-cell activity in dynamic face-selective regions. The norm-referenced mechanism was more robust in some cases.

CONCLUSIONS: Simple physiologically plausible neural circuits can account for the recognition of dynamic faces. Data from single-cell recordings will make it possible to decide between the different models.
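The two coding principles contrasted in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy illustration, not the authors' implementation: faces are represented as abstract feature vectors, and the "norm face", preferred direction, and exemplar are made-up stand-ins. Norm-referenced neurons respond monotonically to the deviation of a face from an average (norm) face along a preferred direction; example-based neurons show radial-basis-function tuning that peaks at a stored exemplar.

```python
import numpy as np

def norm_referenced_response(face, norm, preferred_direction):
    """Norm-referenced encoding (sketch): the response is the projection
    of the face's deviation from the norm face onto the neuron's
    preferred direction, so it grows monotonically with distinctiveness."""
    u = preferred_direction / np.linalg.norm(preferred_direction)
    return float(np.dot(face - norm, u))

def example_based_response(face, exemplar, sigma=1.0):
    """Example-based encoding (sketch): Gaussian radial-basis-function
    tuning that peaks when the face matches a stored exemplar."""
    return float(np.exp(-np.linalg.norm(face - exemplar) ** 2 / (2 * sigma ** 2)))
```

Under this sketch, the two schemes differ qualitatively: the norm-referenced response keeps increasing as an expression becomes more extreme, while the example-based response falls off on either side of the stored exemplar, which is the kind of single-cell signature that could distinguish the models.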
Meeting abstract presented at VSS 2013