Vision Sciences Society Annual Meeting Abstract | September 2011
A dynamic photorealistic average avatar - separating form and motion
Author Affiliations
  • Harry Griffin
    Cognitive, Perceptual and Brain Sciences, University College London, UK
  • Peter McOwan
    School of Electronic Engineering and Computer Science, Queen Mary, University of London, UK
  • Alan Johnston
    Cognitive, Perceptual and Brain Sciences, University College London, UK
    CoMPLEX, University College London, UK
Journal of Vision September 2011, Vol. 11, 675. https://doi.org/10.1167/11.11.675
Harry Griffin, Peter McOwan, Alan Johnston; A dynamic photorealistic average avatar - separating form and motion. Journal of Vision 2011;11(11):675. https://doi.org/10.1167/11.11.675.

Abstract

Moving faces, unlike static face images, contain information about changes in emotional expression, paralinguistic cues and facial speech. Facial motion also provides cues for identification and categorisation. Acquiring facial motion is complex and generally requires either marker-based motion capture, which may sample facial motion only sparsely, or expensive 3D scanning equipment. Presenting realistic facial motion without structural cues to identity, as required to study identification from facial motion alone, is challenging, as accurate animation of an average face is difficult to achieve. When making a static, expressive average face, subjects can simply be asked to hold a constant expression. When attempting to average across video sequences of different people, however, we face an expression correspondence problem: how do we ensure that we are averaging the same expression instantiated on different faces? We present a novel, dynamic facial avatar that overcomes this problem. The avatar is generated from ordinary video sequences of subjects talking to camera. A separate expression space is created for each individual by registering the frames of their sequence using a biologically plausible optical flow algorithm and applying Principal Component Analysis (PCA). Example expressions from a selected individual are then projected into the expression spaces of all models, and the resulting images are averaged to remove static facial form information. These average images are subsequently subjected to the same registration and PCA process to provide a new expression space for the average avatar. This avatar allows the projection of any individual's facial motion, sampled at pixel resolution, onto a photorealistic, identity-free face, enabling motion information to be isolated from structural identity information. This strategy provides a much more precise representation of isolated facial movement than can be achieved using standard techniques.
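
The pipeline can be illustrated with a short sketch. The Python code below is not the authors' implementation: it is a minimal outline assuming pre-aligned, equal-size grayscale frame arrays for every subject, with OpenCV's Farneback dense flow standing in for the biologically plausible optical flow algorithm the abstract refers to, and scikit-learn's PCA for the decomposition. All function names and parameter values are illustrative.

import numpy as np
import cv2
from sklearn.decomposition import PCA

def expression_space(frames, n_components=20):
    """Build a per-subject expression space: register every frame to the
    first frame of the sequence via dense optical flow, then run PCA on
    the flattened flow fields. `frames` is a list of equal-size
    grayscale uint8 arrays."""
    ref = frames[0]
    flows = []
    for frame in frames[1:]:
        # Flow mapping the expressive frame back onto the reference, so
        # warping the reference along it re-creates the expression.
        flow = cv2.calcOpticalFlowFarneback(frame, ref, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow.ravel())
    pca = PCA(n_components=min(n_components, len(flows)))
    pca.fit(np.asarray(flows))
    return ref, pca

def warp(ref, flow_field):
    """Warp the reference frame along a dense (H, W, 2) flow field."""
    h, w = ref.shape
    gy, gx = np.mgrid[0:h, 0:w]
    map_x = (gx + flow_field[..., 0]).astype(np.float32)
    map_y = (gy + flow_field[..., 1]).astype(np.float32)
    return cv2.remap(ref, map_x, map_y, cv2.INTER_LINEAR)

def average_frame(driver_flow, models):
    """Project one driver expression (a flow field) into every model's
    expression space, re-instantiate it on that model's face, and
    average the rendered images to wash out static form cues. Assumes
    all subjects' frames are spatially aligned so that a flow field
    from one face is meaningful on another."""
    rendered = []
    for ref, pca in models:
        coeffs = pca.transform(driver_flow.reshape(1, -1))
        recon = pca.inverse_transform(coeffs).reshape(ref.shape + (2,))
        rendered.append(warp(ref, recon).astype(np.float32))
    return np.mean(rendered, axis=0)

Running expression_space a second time, on a sequence of frames produced by average_frame, yields the avatar's own expression space, completing the final step described above.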

Funding: EPSRC.