Abstract
Perception of the self is one of the most important concepts of the human mind—it influences one’s self-esteem and impacts everyday social interactions—and is derived from both observing real-world representations (e.g., in mirrors, photographs) and subjective perceptions. Though self-representation applies to the full body, the face is the most salient cue to individual identity, as reflected by its use as a reliable form of identification (e.g., passports, driver’s licenses, social media selfies). However, due to the intrinsic subjectivity of self-knowledge, little is known about self-face perception. Here, we address this knowledge gap by modeling the 3D subjective self-face representation from the memory of 10 individual participants. Specifically, we used a generative model of face identity to synthesize, for each participant, 12,000 random 3D faces of the same age, gender, and ethnicity as that participant. On each trial, participants viewed 6 of these generated faces, selected (from memory) the face most similar to their own, and rated its similarity on a 6-point scale (from ‘very different’ to ‘very similar’). Across the resulting 2,000 trials, we used the pairings of <face identity variation; similarity rating> to compute a 3D self-face model for each participant using linear regression. We then compared each participant’s 3D self-face model to their ground-truth face (3D captured) by analyzing the specific fit of 3D shape and complexion features. Our analyses thus identified the specific facial features that participants faithfully represented vs. those they distorted (e.g., exaggerated, attenuated). Finally, we characterized the subjective 3D self-face distortions using a broad range of 3D models of social signals (e.g., attractiveness, social traits, social class, emotion). Our new methodology and results therefore provide a unique medium for understanding the perception of the self and its specific biases.
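The regression step described above—relating per-trial face choices to similarity ratings—can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the component dimensionality (`n_components = 50`), the simulated data, and the per-component regression of component value on rating are all hypothetical stand-ins for details the abstract does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (only n_trials is stated in the abstract):
n_trials = 2000          # 12,000 faces viewed 6 per trial
n_components = 50        # assumed dimensionality of the generative face model

# Simulated stand-in data for one participant: each trial's chosen face,
# expressed as generative-model component values, plus the similarity rating
# (1 = 'very different' .. 6 = 'very similar').
chosen_faces = rng.standard_normal((n_trials, n_components))
ratings = rng.integers(1, 7, size=n_trials).astype(float)

# One possible reading of the method: regress each face component on the
# rating; the slope indicates how strongly that component must be expressed
# for a chosen face to be judged similar to the participant's own face.
X = np.column_stack([np.ones(n_trials), ratings])   # intercept + rating
betas, *_ = np.linalg.lstsq(X, chosen_faces, rcond=None)
self_face_model = betas[1]                          # slope per component

print(self_face_model.shape)  # one weight per face component
```

The resulting weight vector can then be passed back through the generative model to render the participant's subjective self-face for comparison with their 3D-captured ground truth.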