Abstract
Strong high-level after-effects have been reported for the recognition of static faces (Webster et al. 1999; Leopold et al. 2001). Presentation of static ‘anti-faces’ temporarily biases the perception of neutral test faces towards specific identities or facial expressions. Recent experiments have demonstrated high-level after-effects also for point-light walkers, resulting in shifts of perceived gender. Here we present first results on after-effects for dynamic facial expressions. In particular, we investigated how such after-effects depend on facial identity and on dynamic vs. static adapting stimuli.
STIMULI: Stimuli were generated using a 3D morphable model for facial expressions based on laser scans. The 3D model is driven by facial motion capture data recorded with a VICON system. We recorded two facial expressions (Disgust and Happy) from an amateur actor. To create ‘dynamic anti-expressions’, the motion data were projected onto a basis of 17 facial action units. These units were parameterized by motion data obtained from specially trained actors who are capable of executing individual action units according to FACS (Ekman & Friesen 1978). Anti-expressions were obtained by inverting the vectors in this linear projection space.
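The construction of anti-expressions can be sketched as follows. This is an illustrative assumption of the procedure, not the authors' code: marker dimensions, the basis, and all variable names are hypothetical, and the action-unit basis is simply orthonormalized random vectors here.

```python
import numpy as np

rng = np.random.default_rng(0)

n_markers = 3 * 40   # hypothetical: 40 motion-capture markers, x/y/z each
n_units = 17         # basis of 17 facial action units, as in the text

# B: columns are action-unit displacement patterns
# (random and orthonormalized here purely for illustration)
B, _ = np.linalg.qr(rng.standard_normal((n_markers, n_units)))

def to_au_space(motion, basis):
    """Project a motion frame onto the action-unit basis (least squares)."""
    coeffs, *_ = np.linalg.lstsq(basis, motion, rcond=None)
    return coeffs

def anti_expression(motion, basis):
    """Invert the expression vector in the linear projection space."""
    coeffs = to_au_space(motion, basis)
    return basis @ (-coeffs)   # reconstruct with sign-flipped coefficients

# Toy 'Disgust' frame lying in the span of the basis
disgust = B @ rng.standard_normal(n_units)
anti = anti_expression(disgust, B)
# When the motion lies in the basis span, the anti-expression is the exact negation
assert np.allclose(anti, -disgust)
```

The key point is only the sign flip of the action-unit coefficients; in the actual stimuli this inversion is applied to recorded expression trajectories rather than toy vectors.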
METHOD: After determining a baseline performance for expression recognition, participants were adapted with dynamic anti-expressions or with static adapting stimuli (extreme keyframes shown for the same duration), followed by an expression recognition test. Test stimuli were Disgust and Happy with strongly reduced expression strength (corresponding to vectors of reduced length in the linear projection space). Adaptation and test stimuli were derived from faces with the same or different identities.
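The reduced expression strength of the test stimuli corresponds to shortening the expression vector in the same linear space. A minimal sketch, assuming a scalar strength parameter (the function and values are hypothetical, not taken from the study):

```python
import numpy as np

def reduce_strength(coeffs, strength):
    """Scale an action-unit coefficient vector to a fraction of its length."""
    return strength * np.asarray(coeffs, dtype=float)

full = np.array([0.8, -0.3, 0.5])     # toy action-unit coefficients
weak = reduce_strength(full, 0.25)    # strongly reduced test stimulus
# The vector norm scales by exactly the strength factor
assert np.allclose(np.linalg.norm(weak), 0.25 * np.linalg.norm(full))
```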
RESULTS: Adaptation with dynamic anti-expressions resulted in selective after-effects: increased recognition for matching test stimuli (p < 0.05, N=13). Adaptation effects were significantly reduced for static adapting stimuli, and when adapting and test faces had different identities. This suggests identity-specific neural representations of dynamic facial expressions.
Supported by EU Projects BACS and COBOL, Perceptual Graphics DFG, HFSP, Volkswagenstiftung.