August 2014
Volume 14, Issue 10
Vision Sciences Society Annual Meeting Abstract | August 2014
Perceptual integration of kinematic components for the recognition of emotional facial expressions
Author Affiliations
  • Enrico Chiovetto
    Section for Computational Sensomotorics, Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, Centre for Integrative Neuroscience, University Clinic Tübingen, Tübingen, Germany.
  • Cristóbal Curio
    Max Planck Institute for Biological Cybernetics, Dept. Human Perception, Cognition and Action, Tübingen, Germany
  • Dominik Endres
    Section for Computational Sensomotorics, Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, Centre for Integrative Neuroscience, University Clinic Tübingen, Tübingen, Germany.
  • Martin Giese
    Section for Computational Sensomotorics, Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, Centre for Integrative Neuroscience, University Clinic Tübingen, Tübingen, Germany.
Journal of Vision August 2014, Vol.14, 205. doi:10.1167/14.10.205
      Enrico Chiovetto, Cristóbal Curio, Dominik Endres, Martin Giese; Perceptual integration of kinematic components for the recognition of emotional facial expressions. Journal of Vision 2014;14(10):205. doi: 10.1167/14.10.205.

Abstract
 

There is evidence both in motor control (Flash & Hochner, 2005; Chiovetto & Giese, 2013) and in the study of the perception of facial expressions (Ekman & Friesen, 1978) that complex movements can be decomposed into simpler basic components (usually referred to as 'movement primitives' or 'action units'). However, such components have rarely been investigated in the context of dynamic facial movements (as opposed to static pictures of faces).

METHODS: By applying dimensionality reduction methods (NMF and anechoic demixing), we identified spatio-temporal components that capture the major part of the variance of dynamic facial expressions, where the motion was parameterized using a 3D facial animation system (Curio et al., 2006). We generated stimuli with varying information content of the identified components and investigated how many components are minimally required to attain a natural appearance (Turing test). In addition, we investigated how perception integrates these components, using expression classification and expressiveness rating tasks. The best trade-off between model complexity and approximation quality was determined by Bayesian inference and compared to the human data. In addition, we developed a Bayesian cue fusion model that accounts for the data.

RESULTS: For anechoic mixing models, only two components were sufficient to reconstruct three facial expressions with high accuracy, yielding reconstructions that were perceptually indistinguishable from the original expressions. A simple Bayesian cue fusion model provides a good fit of the data on the integration of information conveyed by the different movement components.

References:
Chiovetto E, Giese MA. PLoS One 2013;8(11):e79555. doi: 10.1371/journal.pone.0079555.
Curio C, Breidt M, Kleiner M, Vuong QC, Giese MA, Bülthoff HH. Applied Perception in Graphics and Visualization 2006: 77-84.
Ekman P, Friesen W. Facial Action Coding System. Consulting Psychologists Press, Palo Alto, 1978.
Flash T, Hochner B. Curr Opin Neurobiol 2005; 15(6):660-6.
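To illustrate the kind of decomposition the abstract refers to, the following is a minimal sketch of non-negative matrix factorization (NMF) by multiplicative updates. The toy data matrix (time frames x animation parameters), the number of components, and the iteration budget are illustrative assumptions, not the authors' actual pipeline or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "facial motion" data: 30 time frames x 12 animation parameters,
# generated from 2 ground-truth non-negative components plus noise.
# (Hypothetical stand-in for parameterized facial-expression trajectories.)
T, P, K = 30, 12, 2
W_true = np.abs(rng.normal(size=(T, K)))      # temporal activations
H_true = np.abs(rng.normal(size=(K, P)))      # spatial components
V = W_true @ H_true + 0.01 * np.abs(rng.normal(size=(T, P)))

# Random non-negative initialization of the factors.
W = np.abs(rng.normal(size=(T, K)))
H = np.abs(rng.normal(size=(K, P)))

eps = 1e-9
for _ in range(500):
    # Multiplicative updates (Lee & Seung style) decrease the Frobenius
    # reconstruction error while keeping W and H non-negative.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

residual = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error with {K} components: {residual:.3f}")
```

With the component count matching the true rank of the data, the relative reconstruction error drops to roughly the noise level, which mirrors the abstract's finding that very few components suffice to reconstruct the expressions.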
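The cue fusion model mentioned in the abstract can be sketched, under a simple Gaussian assumption, as reliability-weighted averaging: each movement component provides a noisy cue about expression strength, and the fused percept weights each cue by its precision. The function name and the numeric values below are illustrative, not fitted values from the study.

```python
import numpy as np

def fuse_gaussian_cues(means, variances):
    """Precision-weighted fusion of independent Gaussian cues.

    Each cue i has mean m_i and variance s_i^2; the Bayesian fused
    estimate is sum(m_i / s_i^2) / sum(1 / s_i^2), with fused variance
    1 / sum(1 / s_i^2).
    """
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    fused_mean = np.sum(precisions * means) / np.sum(precisions)
    fused_var = 1.0 / np.sum(precisions)
    return fused_mean, fused_var

# Two components signalling expression strength (arbitrary units):
# the first cue is more reliable (smaller variance), so it dominates.
m, v = fuse_gaussian_cues([0.8, 0.4], [0.05, 0.20])
print(f"fused estimate: {m:.3f}, fused variance: {v:.3f}")
```

Note that the fused variance is smaller than either individual cue's variance, i.e. combining components yields a more reliable percept than any single component alone.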

 

Meeting abstract presented at VSS 2014

 