Jordan Suchow, Joshua Peterson, Thomas Griffiths; A learned generative model of faces for experiments on human identity. Journal of Vision 2018;18(10):352. doi: https://doi.org/10.1167/18.10.352.
Generative models of human appearance and identity have broad applicability to the study of face perception, but the exquisite sensitivity of human face perception means that their utility hinges on the alignment of the latent representation with human psychological representations and on the photorealism of the generated images. Meeting these requirements is an exacting task, and existing models of human identity and appearance are often unworkably abstract, artificial, uncanny, or heavily biased. Here, we use a variational autoencoder with an autoregressive decoder to learn a latent face space from a uniquely diverse dataset of portraits that controls much of the variation irrelevant to human identity and appearance. Our method generates photorealistic portraits of fictive identities with a smooth, navigable latent space. We validate our model's alignment with human sensitivities by introducing a psychophysical Turing test for images, which humans mostly fail, a rare outcome for any interesting generative image model. We describe several applications of the learned face space to experiments on face perception, memory, and learning.
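The abstract's claim of a "smooth, navigable latent space" can be illustrated with a minimal sketch of latent-space navigation: linearly interpolating between two latent vectors and decoding each intermediate point to morph between fictive identities. The authors' trained VAE is not reproduced here, so the `decode` function below, the latent dimensionality, and all names are hypothetical stand-ins chosen only to show the interpolation pattern.

```python
import numpy as np

LATENT_DIM = 64  # assumed latent dimensionality (not from the abstract)

def decode(z):
    """Hypothetical stand-in for the VAE's autoregressive decoder:
    maps a latent vector to a flat 'image' vector via fixed weights."""
    rng = np.random.default_rng(42)          # fixed weights for repeatability
    W = rng.standard_normal((LATENT_DIM, 8)) # placeholder decoder weights
    return np.tanh(z @ W)                    # bounded pixel-like outputs

def interpolate(z_a, z_b, steps=5):
    """Linearly interpolate between two identities in latent space."""
    ts = np.linspace(0.0, 1.0, steps)
    return np.stack([(1.0 - t) * z_a + t * z_b for t in ts])

rng = np.random.default_rng(0)
z_a = rng.standard_normal(LATENT_DIM)  # fictive identity A
z_b = rng.standard_normal(LATENT_DIM)  # fictive identity B

path = interpolate(z_a, z_b, steps=5)          # (5, LATENT_DIM) latent path
frames = np.stack([decode(z) for z in path])   # decoded morph sequence

# The path's endpoints reproduce the two source identities exactly.
assert np.allclose(path[0], z_a) and np.allclose(path[-1], z_b)
```

In a real VAE the same pattern applies, except that `decode` is the trained decoder network and smoothness of the morph depends on how well the learned latent space is organized, which is precisely what the psychophysical validation in the abstract probes.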
Meeting abstract presented at VSS 2018