Vision Sciences Society Annual Meeting Abstract | December 2022
FaReT 2.0: An updated free and open-source toolkit of three-dimensional models and software to study face perception
Author Affiliations
  • Jason Hays
    Florida International University
  • Fabian Soto
    Florida International University
Journal of Vision December 2022, Vol.22, 4074. doi:https://doi.org/10.1167/jov.22.14.4074
Abstract

Previous research on face perception has mostly used photographs of faces as stimuli, which do not give researchers full control over the presented stimuli. This severely limits psychophysical research, which requires precise control over every aspect of the stimulus. We originally developed the Face Research Toolkit (FaReT) to alleviate this problem by using the MakeHuman framework to generate stimuli with complete control over meaningful features. We have since added features to FaReT that expand its usefulness for face researchers. Previously, videos of faces changing from one expression to another were created by having all facial features progress at the same rate. We now add a motion model in which each facial feature can change more quickly or more slowly in different sections of the animation. For example, FaReT can be set up so that the eyebrows rise quickly while the mouth opens slowly. All of this is done without manipulating vertices: instead, the user simply specifies the groups of features (e.g., eyebrows, mouth, eyes) and two motion parameters. Furthermore, FaReT can now be used to generate stimuli for classification image experiments, which are used to determine which features people rely on when making perceptual judgments. Random noise patterns that observers classify as showing a target face attribute (e.g., a happy expression) are generated in the meaningful parameter space of MakeHuman, which makes the obtained classification images lower-dimensional and more interpretable than published alternatives. FaReT can now also be used to visualize classification images and other face parameter vectors by rendering a heat map directly onto a face. Finally, we improve FaReT's database with a new algorithm and plugin that randomly generate a virtually infinite number of completely novel and realistic face models.
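
To make the motion model concrete, the sketch below illustrates one way per-group timing could be specified. This is a minimal illustration only, not FaReT's actual API: it assumes each feature group follows a logistic timing curve whose two parameters (a midpoint and a steepness, standing in for the two motion parameters mentioned above) control when and how abruptly that group moves, and it treats a face as a dictionary of MakeHuman-style parameter values.

import numpy as np

def logistic_timing(t, midpoint, steepness):
    """Map linear time t in [0, 1] to a warped progress value in [0, 1]."""
    raw = 1.0 / (1.0 + np.exp(-steepness * (t - midpoint)))
    start = 1.0 / (1.0 + np.exp(-steepness * (0.0 - midpoint)))
    end = 1.0 / (1.0 + np.exp(-steepness * (1.0 - midpoint)))
    return (raw - start) / (end - start)  # rescale so progress runs from 0 to 1

def animate(neutral, target, groups, motion_params, n_frames=30):
    """Interpolate each feature group's parameters with its own timing curve.

    neutral, target: dicts mapping parameter names to values.
    groups: dict mapping group names (e.g., 'eyebrows') to lists of parameter names.
    motion_params: dict mapping group names to (midpoint, steepness).
    """
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        frame = dict(neutral)
        for group, params in groups.items():
            midpoint, steepness = motion_params[group]
            progress = logistic_timing(t, midpoint, steepness)
            for name in params:
                frame[name] = neutral[name] + progress * (target[name] - neutral[name])
        frames.append(frame)
    return frames

# Example: the eyebrows rise early and fast, the mouth opens late and slowly.
neutral = {"brow_raise": 0.0, "mouth_open": 0.0}
target = {"brow_raise": 1.0, "mouth_open": 1.0}
groups = {"eyebrows": ["brow_raise"], "mouth": ["mouth_open"]}
motion_params = {"eyebrows": (0.2, 12.0), "mouth": (0.7, 4.0)}
frames = animate(neutral, target, groups, motion_params)

The classification image procedure can likewise be sketched as reverse correlation in a parameter space. This is a generic illustration under assumed names and dimensions, not FaReT's implementation: random noise vectors are added to a base face's parameter vector, each noisy face is classified as target or non-target, and the classification image is the difference between the mean noise on target trials and the mean noise on the remaining trials.

import numpy as np

rng = np.random.default_rng(0)
n_trials, n_params = 1000, 20           # hypothetical number of shape/expression parameters
base_face = np.zeros(n_params)          # neutral face in parameter space
noise = rng.normal(0.0, 0.15, size=(n_trials, n_params))
stimuli = base_face + noise             # noisy faces shown to the observer

# responses[i] is True if the observer classified trial i as the target (e.g., "happy");
# here we simulate an observer who weights only the first few parameters.
true_template = np.zeros(n_params)
true_template[:3] = 1.0
responses = (stimuli @ true_template + rng.normal(0.0, 0.1, n_trials)) > 0

# Classification image: mean noise on "target" trials minus mean noise on the rest.
classification_image = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)

In FaReT the same logic operates on meaningful MakeHuman shape and expression parameters rather than on the arbitrary vector used here, which is what keeps the resulting classification images low-dimensional and interpretable.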
