August 2023, Volume 23, Issue 9 (Open Access)
Vision Sciences Society Annual Meeting Abstract
Face information used to classify identity depends on emotional expression and vice-versa
Author Affiliations & Notes
  • Emily Martin
    Florida International University
  • Jason Hays
    Florida International University
  • Fabian Soto
    Florida International University
  • Footnotes
    Acknowledgements: This work was supported by the National Science Foundation.
Journal of Vision August 2023, Vol.23, 4748. doi:https://doi.org/10.1167/jov.23.9.4748
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Every day we categorize new faces along dimensions such as identity and emotional expression, using specific face information that can be summarized in what is known as a template. Empirically recovering these templates grants us a richer understanding of the perceptual representation of visual stimuli. Using reverse correlation, a psychophysical technique that estimates these templates from participants’ decisions when presented with noisy stimuli, we identified the face features that are significant for the perception of identity and expression. More importantly, we also assessed invariance at the level of these templates (i.e., template separability): whether the face information used to identify levels of one dimension (e.g., identity) remains constant across changes in the other dimension (e.g., expression). Previous studies have superimposed noise on pixel luminance, which constrains interpretation to the pixel space rather than the face space. Instead, we used a three-dimensional face modeling toolbox (FaReT) that allows for manipulation and recovery of significant face shape features rather than image pixels. This new approach allows us to directly visualize interactions between identity and expression by rendering face models that highlight how face features are sampled differently with changes in an irrelevant dimension. Permutation tests found significant violations of template separability for identity and expression across all groups, suggesting a strong interaction between dimensions at the level of the face information sampled for recognition.
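The core of the reverse-correlation method described above can be illustrated with a minimal simulation. This is not the authors' pipeline (their stimuli are 3D face-shape models manipulated with FaReT, and their analysis includes permutation tests of separability); it is a generic sketch of the classic classification-image logic in an assumed 64-dimensional feature space, with a simulated observer standing in for a participant:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a 64-feature "face" template. (In the study itself,
# features are 3D face-shape parameters rather than abstract dimensions.)
n_features, n_trials = 64, 20000
true_template = rng.normal(size=n_features)
true_template /= np.linalg.norm(true_template)

# Noisy stimuli: random feature perturbations shown to a simulated observer,
# who responds "yes" when a stimulus matches the internal template well
# enough, subject to some internal decision noise.
noise = rng.normal(size=(n_trials, n_features))
responses = noise @ true_template + rng.normal(scale=0.5, size=n_trials) > 0

# Reverse correlation: the estimated template (classification image) is the
# mean noise on "yes" trials minus the mean noise on "no" trials.
classification_image = (noise[responses].mean(axis=0)
                        - noise[~responses].mean(axis=0))

# With enough trials, the estimate correlates strongly with the template.
r = np.corrcoef(classification_image, true_template)[0, 1]
print(f"template recovery r = {r:.2f}")
```

Template separability would then be probed by estimating such a classification image for one dimension (e.g., identity) separately at each level of the other dimension (e.g., expression) and testing, via permutation, whether the estimates differ more than chance allows.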
