September 2024, Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Idiosyncratic facial motions: Uncovering identity information in facial movements through a landmark-based analysis
Author Affiliations & Notes
  • Hilal Nizamoğlu
    Justus Liebig University Giessen, Germany
    Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen
  • Katharina Dobs
    Justus Liebig University Giessen, Germany
    Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen
  • Footnotes
    Acknowledgements  This work was supported by the DFG, Germany, SFB/TRR 135 (grant number 222641018), TP C9 and S, and by the Research Cluster "The Adaptive Mind", funded by the Hessian Ministry for Higher Education, Research, Science and the Arts. We also thank Prof. Dr. Benjamin Straube for the video dataset.
Journal of Vision September 2024, Vol. 24, 545. https://doi.org/10.1167/jov.24.10.545

Citation: Hilal Nizamoğlu, Katharina Dobs; Idiosyncratic facial motions: Uncovering identity information in facial movements through a landmark-based analysis. Journal of Vision 2024;24(10):545. https://doi.org/10.1167/jov.24.10.545.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Previous studies on dynamic faces have shown that both rigid and non-rigid facial movements contribute to identity recognition, and that the identity-specific information inherent in these motions varies with the type of facial expression. These findings indicate that individuals exhibit distinct idiosyncratic patterns in their facial movements, which can serve as cues to their identity. However, the specific features of facial movements that contribute to this uniqueness remain unclear. Here, we employed machine learning techniques to measure and quantify motion information in facial expressions, using a dataset of six basic emotional facial expressions (anger, disgust, fear, joy, sadness, surprise) performed by 12 German and 12 Turkish lay actors. An automated facial landmark detection tool was applied to measure the positional changes of landmarks at the peak of each expression relative to a neutral baseline. We then trained a Linear Discriminant Analysis (LDA) model on these landmark shifts to classify the emotional expressions. This first LDA model classified the type of emotional expression (accuracy: 44%, p<0.001) independent of the actor's identity. More strikingly, another LDA model, trained to classify the identities of the 24 actors across different expressions, successfully predicted their identity (accuracy: 45%, p<0.001). Furthermore, the landmark positional changes provided useful information for classifying the actors' gender (accuracy: 59%, p<0.01) and country of origin (accuracy: 71%, p<0.001), supporting previous findings on cultural and gender-based variation in facial expressions. In conclusion, our study demonstrates the richness of information embedded in facial motion features, which extends beyond emotional expression to include aspects of the actor's identity, gender, and cultural background. This landmark-based approach emerges as a promising tool for unraveling idiosyncrasies in facial movements, offering insight into the intricate interplay of expression, identity, and cultural factors.
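
The analysis pipeline can be sketched in a few lines of Python. The sketch below is illustrative only: the abstract does not name the landmark detector, the feature dimensionality, or the cross-validation scheme, so the 68-point landmark layout, the random placeholder data, and the 5-fold setup are all assumptions. Only the core idea, training an LDA classifier on peak-minus-neutral landmark displacements, comes from the abstract.

# Minimal sketch of the landmark-shift LDA analysis (not the authors' code).
# Landmark coordinates are assumed to come from any automated detector
# (e.g., dlib or MediaPipe); random placeholders stand in for real data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

n_actors, n_expressions, n_landmarks = 24, 6, 68  # 68 points: a common dlib layout (assumption)

rng = np.random.default_rng(0)
# (samples, landmarks, xy) for the neutral baseline and the expression peak.
neutral = rng.normal(size=(n_actors * n_expressions, n_landmarks, 2))
peak = neutral + rng.normal(scale=0.1, size=neutral.shape)

# Feature vector per video: landmark displacement at the expression peak
# relative to the neutral baseline, flattened to (samples, landmarks * 2).
X = (peak - neutral).reshape(len(peak), -1)

# Labels per sample: expression type (6 classes) and actor identity (24 classes).
y_expression = np.tile(np.arange(n_expressions), n_actors)
y_identity = np.repeat(np.arange(n_actors), n_expressions)

# Chance level is 1/6 for expression and 1/24 for identity; the abstract
# reports 44% and 45% accuracy, respectively (both p < 0.001).
lda = LinearDiscriminantAnalysis()
for label_name, y in [("expression", y_expression), ("identity", y_identity)]:
    scores = cross_val_score(lda, X, y, cv=5)
    print(f"{label_name}: mean cross-validated accuracy = {scores.mean():.2f}")

With real landmark displacements in place of the placeholders, the same pipeline extends directly to the gender and country-of-origin labels reported in the abstract.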
