Vision Sciences Society Annual Meeting Abstract  |  September 2019
Open Access
A free and open-source toolkit of three-dimensional models and software to study face perception
Author Affiliations & Notes
  • Jason S Hays
    Florida International University
  • Claudia Wong
    Florida International University
  • Fabian Soto
    Florida International University
Journal of Vision September 2019, Vol.19, 227a. doi:https://doi.org/10.1167/19.10.227a

Jason S Hays, Claudia Wong, Fabian Soto; A free and open-source toolkit of three-dimensional models and software to study face perception. Journal of Vision 2019;19(10):227a. https://doi.org/10.1167/19.10.227a.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

A problem in the study of face perception is that results can be confounded by poor stimulus control. Ideally, experiments should precisely manipulate the facial features under study and tightly control irrelevant features. Software for 3D face modeling provides such control, but there is a lack of free and open-source alternatives created specifically for face perception research. Here, we provide such tools by expanding the open-source software MakeHuman. We present a database of 27 identity models and 6 expression pose models (sadness, anger, happiness, disgust, fear, and surprise), together with software to manipulate the models in ways that are common in the face perception literature, allowing researchers to: (1) create a sequence of renders from interpolations between two or more 3D models (differing in identity, expression, and/or pose), resulting in a “morphing” sequence; (2) create renders by extrapolation in a direction of face space, obtaining 3D “anti-faces” and similar stimuli; (3) obtain videos of dynamic faces from rendered images; (4) obtain average face models; (5) standardize a set of models so that they differ only in facial shape features; and (6) communicate with experiment software (e.g., PsychoPy) to render faces dynamically online. These tools vastly improve both the speed at which face stimuli can be produced and the level of control that researchers have over face stimuli. We show examples of the multiple ways in which these tools can be used in face perception research, and describe human ratings of stimuli produced with the toolkit. Furthermore, by using Markov Chain Monte Carlo (MCMC) with participants, we can sample from the distribution of realistic faces to define completely novel identities and expressions that still fit what people consider realistic.
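
The toolkit performs these operations on full MakeHuman models; as a rough illustration of the face-space arithmetic underlying items (1), (2), and (4), the Python sketch below linearly interpolates between two shape-parameter vectors (a “morphing” sequence) and extrapolates through the average face (an “anti-face”). The function names (morph_sequence, anti_face) and the random 100-dimensional vectors are placeholders for illustration only, not the toolkit's actual API.

    # Illustrative sketch: linear interpolation ("morphing") and extrapolation
    # ("anti-face") over hypothetical MakeHuman-style shape-parameter vectors.
    # All names and dimensions here are placeholders, not the toolkit's API.
    import numpy as np

    def morph_sequence(face_a, face_b, n_steps=10):
        """Interpolate between two parameter vectors to build a morph sequence."""
        weights = np.linspace(0.0, 1.0, n_steps)
        return [(1.0 - w) * face_a + w * face_b for w in weights]

    def anti_face(face, average_face, strength=1.0):
        """Extrapolate through the average face, away from the original face."""
        return average_face - strength * (face - average_face)

    rng = np.random.default_rng(seed=1)
    face_a = rng.normal(size=100)      # placeholder 100-dimensional parameter vectors
    face_b = rng.normal(size=100)
    average_face = np.zeros(100)       # placeholder average (origin of face space)

    frames = morph_sequence(face_a, face_b, n_steps=30)  # 30-step morph sequence
    opposite = anti_face(face_a, average_face)           # corresponding anti-face

In the actual toolkit, each resulting parameter set corresponds to a 3D model that can then be rendered (or streamed to experiment software such as PsychoPy); the rendering step is omitted in this sketch.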
