Abstract
Previous research on face perception has mostly used photographs of faces as stimuli, which do not give researchers full control over the presented stimuli. This is a severe limitation for psychophysical research, which requires precise control over every aspect of a stimulus. We originally developed the Face Research Toolkit (FaReT) to alleviate this problem, using the MakeHuman framework to generate stimuli with complete control over meaningful features. We have since added features to FaReT that expand its usefulness for face researchers. Previously, videos of faces changing from one expression to another were created by having all facial features progress at the same rate. Here we add a motion model in which each facial feature can change more quickly or more slowly during different sections of the animation. For example, FaReT can be configured so that the eyebrows rise very quickly while the mouth opens very slowly. All of this is done without manipulating vertices: the user simply specifies groups of features (e.g., eyebrows, mouth, eyes) and two motion parameters. Furthermore, FaReT can now be used to generate stimuli for classification image experiments, which are used to determine which features people rely on when making perceptual judgments. Random noise patterns that participants classify as showing a target facial attribute (e.g., happy) are generated in the meaningful parameter space of MakeHuman, which makes the obtained classification images lower-dimensional and more interpretable than published alternatives. FaReT can now also be used to visualize classification images and other face parameter vectors by rendering a heat map directly onto a face. Finally, we improve FaReT’s database with a new algorithm and plugin that randomly generate a virtually unlimited number of novel, realistic face models.
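To make the two-parameter motion model concrete, the sketch below shows one way per-feature-group time warping could work: each group follows its own timing curve between the start and end expressions. The logistic form, the parameter names (midpoint, steepness), and the example values are assumptions for illustration only, not FaReT’s actual implementation or API.

import numpy as np

def warped_progress(t, midpoint, steepness):
    """Logistic time-warping for one feature group (illustrative only).

    t: linear animation time in [0, 1].
    midpoint: time at which this group changes fastest (assumed parameter).
    steepness: how abrupt the change is around that midpoint (assumed parameter).
    Returns interpolation progress in [0, 1].
    """
    f = lambda x: 1.0 / (1.0 + np.exp(-steepness * (x - midpoint)))
    # Rescale so progress is exactly 0 at t = 0 and 1 at t = 1.
    return (f(t) - f(0.0)) / (f(1.0) - f(0.0))

# Hypothetical per-group parameters: eyebrows move early and abruptly,
# the mouth opens late and gradually (values made up for illustration).
groups = {"eyebrows": (0.2, 20.0), "mouth": (0.7, 5.0)}

def interpolate_frame(neutral, target, t):
    """Blend each feature group's parameters at animation time t."""
    frame = {}
    for group, (midpoint, steepness) in groups.items():
        p = warped_progress(t, midpoint, steepness)
        frame[group] = (1 - p) * neutral[group] + p * target[group]
    return frame

# Example: one scalar parameter per group, blended at mid-animation.
neutral = {"eyebrows": 0.0, "mouth": 0.0}
target = {"eyebrows": 1.0, "mouth": 1.0}
print(interpolate_frame(neutral, target, 0.5))

Under these assumed parameters, the eyebrows have nearly finished their change halfway through the animation while the mouth has barely started, matching the kind of asynchronous motion described above.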