September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2024
FaReT 2.1: Anatomically precise manipulation of race in 3D face models and a pipeline to import real face scans
Author Affiliations & Notes
  • Emily Martin
    Florida International University
  • Shamim Golafshan
    Florida International University
  • Fabian Soto
    Florida International University
  • Footnotes
    Acknowledgements  This work was supported by the National Science Foundation.
Journal of Vision September 2024, Vol. 24, 788. https://doi.org/10.1167/jov.24.10.788

      Emily Martin, Shamim Golafshan, Fabian Soto; FaReT 2.1: Anatomically precise manipulation of race in 3D face models and a pipeline to import real face scans. Journal of Vision 2024;24(10):788. https://doi.org/10.1167/jov.24.10.788.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

FaReT is free and open-source software developed to increase experimental control in face research through anatomy-driven 3D face modeling in MakeHuman. However, the software has been limited by (1) race models that may not reflect true morphological differences across ancestry groups and (2) a relatively small database of validated identity models. MakeHuman ships with default race targets (i.e., Caucasian, African, and Asian), but the origins of these targets are unclear; they were most likely created qualitatively for game development purposes. To develop valid race targets for scientific research, we conducted a literature review of anthropometric studies to identify the face features that differ significantly across major racial groups and built novel race models within FaReT that incorporate these features. We validated the final models by comparing anthropometric measurements of face landmarks against the values reported in the literature for the same landmarks. To expand our database of identity models, rather than hand-crafting additional models, we constructed a pipeline that uses a deep neural network to automatically transform face models described in the widely used FLAME model space into FaReT space. Thousands of real face scans are already publicly available in FLAME space, which opens the possibility of using this large dataset in MakeHuman for anatomically precise measurement and manipulation. In addition, new face scans can easily be fitted to FLAME and then imported into MakeHuman with our pipeline.
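The abstract does not specify the architecture of the FLAME-to-FaReT mapping network, so the sketch below is purely illustrative: it shows the general shape of such a pipeline as a small two-layer regression network mapping FLAME shape coefficients to MakeHuman target weights. The dimensions, layer sizes, and the function name `flame_to_faret` are assumptions, not the authors' actual implementation, and the weights here are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions): FLAME shape coefficients in,
# MakeHuman/FaReT morph-target weights out.
N_FLAME, N_HIDDEN, N_MAKEHUMAN = 100, 64, 50

# Randomly initialized two-layer MLP standing in for the trained mapping network.
W1 = rng.normal(0.0, 0.1, (N_FLAME, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_HIDDEN, N_MAKEHUMAN))
b2 = np.zeros(N_MAKEHUMAN)

def flame_to_faret(flame_betas: np.ndarray) -> np.ndarray:
    """Map a vector of FLAME shape coefficients to MakeHuman target weights."""
    h = np.tanh(flame_betas @ W1 + b1)        # hidden representation
    out = h @ W2 + b2
    return np.clip(out, -1.0, 1.0)            # keep weights in a plausible target range

# e.g., coefficients obtained by fitting FLAME to a real face scan
betas = rng.normal(size=N_FLAME)
targets = flame_to_faret(betas)
print(targets.shape)  # (50,)
```

In practice such a network would be trained on pairs of faces represented in both spaces; once trained, any of the publicly available FLAME-space scans could be pushed through it to obtain MakeHuman-compatible models.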
