Vision Sciences Society Annual Meeting Abstract | December 2022
Journal of Vision, Volume 22, Issue 14 | Open Access
Modelling 3D self-portrait from the individual’s face memory
Author Affiliations
  • Jiayu Zhan
    Institute of Neuroscience and Psychology, University of Glasgow
  • Meng Liu
    Institute of Neuroscience and Psychology, University of Glasgow
  • Oliver Garrod
    Institute of Neuroscience and Psychology, University of Glasgow
  • Rachael Jack
    Institute of Neuroscience and Psychology, University of Glasgow
  • Philippe Schyns
    Institute of Neuroscience and Psychology, University of Glasgow
Journal of Vision December 2022, Vol.22(14):3515. https://doi.org/10.1167/jov.22.14.3515
Abstract

Perception of the self is one of the most important concepts of the human mind: it influences one’s self-esteem and impacts everyday social interactions. It derives both from observing real-world representations of oneself (e.g., in mirrors, photographs) and from subjective perceptions. Though self-representation applies to the full body, the face is the most salient cue to individuality, as reflected by its use as a reliable form of identification (e.g., passports, driver’s licenses, social media selfies). However, due to the intrinsic subjectivity of self-knowledge, little is known about self-face perception. Here, we address this knowledge gap by modelling the 3D subjective self-face representation from the memory of 10 individual participants. Specifically, we used a generative model of face identity to synthesize, for each participant, 12,000 random 3D faces of the same age, gender, and ethnicity as the participant. On each trial, participants viewed 6 of these generated faces, selected (from memory) the face most similar to their own, and rated its similarity on a 6-point scale (from ‘very different’ to ‘very similar’). Across the resulting 2,000 trials, we used the per-trial pairings of <face identity variations; similarity rating> to compute a 3D self-face model for each participant using linear regression. We then compared each participant’s 3D self-face model to their ground-truth face (3D captured) by analyzing the specific fit of 3D shape and complexion features. Our analyses thus identified the specific facial features that participants faithfully represented vs. those they distorted (e.g., exaggerated, attenuated). Finally, we characterized the subjective 3D self-face distortions using a broad range of 3D models of social signals (e.g., attractiveness, social traits, social class, emotion). Our new methodology and results therefore provide a unique medium for understanding the perception of the self and its specific biases.
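The regression step described above lends itself to a compact illustration. The sketch below is not the authors’ code: it assumes, hypothetically, that the generative face model encodes each synthetic face as a vector of identity-component coefficients (the dimensionality and variable names here are illustrative), and shows how per-trial <identity coefficients; similarity rating> pairs could be regressed to weight each component by its contribution to perceived self-similarity.

```python
import numpy as np

# Minimal sketch of the reverse-correlation regression, assuming the generative
# model represents each synthetic face as a coefficient vector (illustrative).

rng = np.random.default_rng(0)

n_trials = 2_000      # one face chosen per trial (12,000 faces / 6 per trial)
n_components = 50     # hypothetical dimensionality of the identity space

# X: identity coefficients of the face chosen on each trial.
# y: that trial's rating on the 6-point scale (1 = 'very different',
#    6 = 'very similar'). Both would come from the experiment; random here.
X = rng.standard_normal((n_trials, n_components))
y = rng.integers(1, 7, size=n_trials).astype(float)

# Linear regression of ratings onto identity coefficients: the fitted weights
# indicate how strongly each component drives perceived self-similarity.
X_design = np.column_stack([np.ones(n_trials), X])   # add an intercept column
beta, *_ = np.linalg.lstsq(X_design, y, rcond=None)  # ordinary least squares
self_face_weights = beta[1:]                         # drop the intercept

# Rendering self_face_weights back through the generative model (not shown)
# would produce the participant's reconstructed 3D self-face.
print(self_face_weights[:5])
```

Comparing the reconstruction to the participant’s 3D-captured ground-truth face, feature by feature, would then localize which shape and complexion components are faithfully represented and which are distorted.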
