Vision Sciences Society Annual Meeting Abstract  |  September 2024
Volume 24, Issue 10
Open Access
Idiosyncratic Fixation Patterns generalize across Dynamic and Static Facial Expression Recognition
Author Affiliations & Notes
  • Anita Paparelli
    Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
  • Nayla Sokhn
    Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
  • Lisa Stacchi
    Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
  • Antoine Coutrot
    Laboratoire d’Informatique en Image et Systèmes d’information, French Centre National de la Recherche Scientifique, University of Lyon, Lyon, France
  • Anne-Raphaëlle Richoz
    Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
  • Roberto Caldara
    Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
  • Footnotes
Acknowledgements  This work was supported by a Swiss National Science Foundation grant awarded to RC (10001C_201145).
Journal of Vision September 2024, Vol.24, 849. doi:https://doi.org/10.1167/jov.24.10.849
      Anita Paparelli, Nayla Sokhn, Lisa Stacchi, Antoine Coutrot, Anne-Raphaëlle Richoz, Roberto Caldara; Idiosyncratic Fixation Patterns generalize across Dynamic and Static Facial Expression Recognition. Journal of Vision 2024;24(10):849. https://doi.org/10.1167/jov.24.10.849.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Facial expression recognition (FER) is crucial for understanding the emotional state of others during social interactions. It has been assumed that all humans share universal visual sampling strategies to achieve this feat. While several recent studies have revealed striking idiosyncratic fixation patterns during face identification, very little is yet known about whether such idiosyncrasies extend to the recognition of static and of more ecologically valid dynamic facial expressions of emotion (FEEs). To address this question, we tracked observers’ eye movements while they categorized static and dynamic faces displaying the six basic FEEs, all normalized for presentation time (1 s), contrast, luminance, and overall sampled energy. We used robust data-driven analyses combining statistical fixation maps (iMap) with hidden Markov models (EMHMM). We then divided observers’ fixations into 12 conditions (2 visual modalities × 6 basic expressions) and assessed the generalizability of their grouping with EMHMM. Incorporating both the spatial and temporal dimensions of eye movements provides powerful, well-suited measures for assessing reliable individual differences in face-scanning strategies. With these comprehensive statistical and computational tools, our data revealed marked idiosyncratic fixation patterns. Interestingly, these individual visual sampling strategies generalized across the decoding of both static and dynamic FEEs. Moreover, the fixation patterns varied with the expression at hand. Importantly, our data altogether show that spatiotemporal idiosyncratic gaze strategies also occur for the biologically relevant recognition of emotions, further questioning the universality of this process.
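The generalization test rests on a simple logic: a hidden Markov model fitted to an observer's fixation sequences in one condition should, if scanning strategies are idiosyncratic yet stable, also account well for that observer's fixations in another condition. The sketch below illustrates this cross-condition comparison on synthetic data; it uses Python's hmmlearn rather than the MATLAB EMHMM toolbox, and the simulated fixations and helper names are assumptions for illustration only, not the authors' pipeline.

# Illustrative sketch only: the published EMHMM approach is a MATLAB toolbox; this
# approximation uses Python's hmmlearn and synthetic fixation data (both assumptions).
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def simulate_fixations(n_trials=30, n_fix=5, centre=(0.5, 0.5)):
    # Synthetic (x, y) fixation sequences for one observer in one condition.
    return [rng.normal(loc=centre, scale=0.1, size=(n_fix, 2)) for _ in range(n_trials)]

def fit_hmm(sequences, n_rois=3):
    # Hidden states act as data-driven regions of interest; the transition matrix
    # captures the temporal order in which they are visited.
    X = np.concatenate(sequences)
    lengths = [len(s) for s in sequences]
    model = GaussianHMM(n_components=n_rois, covariance_type="full",
                        n_iter=100, random_state=0)
    model.fit(X, lengths)
    return model

def mean_log_likelihood(model, sequences):
    # Average per-trial log-likelihood of fixation sequences under a fitted HMM.
    return np.mean([model.score(s) for s in sequences])

# One observer, two presentation modalities of the same expression.
static_seqs = simulate_fixations()
dynamic_seqs = simulate_fixations()

static_model = fit_hmm(static_seqs)

# If this observer's scanning strategy generalizes across modalities, the HMM fitted
# on static trials should also explain the dynamic trials reasonably well.
print("static model -> static trials :", mean_log_likelihood(static_model, static_seqs))
print("static model -> dynamic trials:", mean_log_likelihood(static_model, dynamic_seqs))

In the full EMHMM framework, individual models are additionally clustered into representative group patterns; the sketch above only shows the per-observer, cross-condition comparison.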
