Abstract
Moving faces, unlike static face images, contain information about changes in emotional expression, paralinguistic cues and facial speech. Facial motion also provides cues for identification and categorisation. Acquiring facial motion is complex: it generally requires either marker-based motion capture, which samples facial motion only sparsely, or expensive 3D scanning equipment. Presenting realistic facial motion without structural cues to identity, as required to study identification from facial motion alone, is challenging because an average face is difficult to animate accurately. To make a static, expressive average face, subjects can simply be asked to hold a constant expression. When averaging across video sequences of different people, however, we face an expression correspondence problem: how do we ensure that we are averaging the same expression instantiated on different faces? We present a novel dynamic facial avatar that overcomes the expression correspondence problem. The avatar is generated from ordinary video sequences of subjects talking to camera. A separate expression space is created for each individual by registering the frames of their sequence with a biologically plausible optical flow algorithm and applying Principal Component Analysis (PCA). Example expressions from a selected individual are then projected into the expression spaces of all models, and the resulting images are averaged to remove static facial form information. These average images are subjected to the same registration and PCA process, yielding a new expression space for the average avatar. The avatar allows any individual's facial motion, sampled at pixel resolution, to be projected onto a photorealistic, identity-free face, so that motion information can be isolated from structural identity information. This strategy provides a far more precise representation of isolated facial movement than standard techniques achieve.
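
The sketch below is a rough, assumption-laden illustration of the pipeline the abstract describes, not the authors' implementation: it assumes frames are pre-cropped 8-bit greyscale arrays of a common size, uses OpenCV's Farnebäck dense flow as a stand-in for the biologically plausible optical flow algorithm, and uses a thin SVD in place of the PCA step. All function names and parameters are hypothetical.

```python
# Minimal sketch: per-individual expression spaces via registration + PCA,
# and projection of an example expression into a model's space.
import numpy as np
import cv2


def register_to_reference(frames):
    """Warp every frame onto the first (reference) frame using dense optical flow.

    `frames` is assumed to be a list of uint8 greyscale images of equal size.
    """
    ref = frames[0]
    h, w = ref.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    registered = [ref.astype(np.float32)]
    for frame in frames[1:]:
        # Dense flow from the reference to the current frame; sampling the
        # frame along this flow pulls it back into the reference's geometry.
        flow = cv2.calcOpticalFlowFarneback(ref, frame, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        map_x, map_y = grid_x + flow[..., 0], grid_y + flow[..., 1]
        registered.append(cv2.remap(frame.astype(np.float32), map_x, map_y,
                                    cv2.INTER_LINEAR))
    return np.stack(registered)


def build_expression_space(frames, n_components=20):
    """PCA over registered frames: returns the mean image and a component basis."""
    X = register_to_reference(frames).reshape(len(frames), -1)
    mean = X.mean(axis=0)
    # A thin SVD of the centred data matrix gives the principal components.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]


def project_expression(image, mean, basis):
    """Express an example frame in a model's expression space and reconstruct it."""
    coeffs = basis @ (image.astype(np.float32).ravel() - mean)
    return (mean + coeffs @ basis).reshape(image.shape)


# Averaging the reconstructions across every individual's space removes
# static facial form while keeping the shared expression signal, e.g.:
#   spaces = [build_expression_space(f) for f in all_subjects_frames]
#   avg = np.mean([project_expression(example, m, B) for m, B in spaces], axis=0)
```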