Abstract
When we speak, laugh, or cry, our faces move in complex, non-rigid ways. Can such motion patterns influence our perception of facial identity? To explore this issue, we took 3D laser-scanned heads from the MPI database and animated them using motion sequences captured from different human actors. During an incidental learning phase, observers were exposed to FACE A moving with MOTION A and FACE B moving with MOTION B. Test stimuli consisted of two sets of morphed heads (shaded, no texture) ranging in 10% steps from FACE A to FACE B. One set of morphs was animated using MOTION A, the other using MOTION B. Observers were instructed to indicate whether each test face was structurally more similar to FACE A or to FACE B. Across all levels of the morph sequence, motion biased the perception of identity. This bias was particularly strong at the 50% morph level, where structural information was completely ambiguous: “FACE A” responses occurred on 80% of trials in which the morph was animated with MOTION A, but on only 40% of trials in which the same morph was animated with MOTION B. We believe these results are the strongest evidence to date that observers can use facial motion to determine facial identity. The use of computer animation techniques in conjunction with motion capture technology appears to be a fruitful direction for future research on dynamic aspects of face processing.
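
The morphed test stimuli can be understood as linear blends of corresponding mesh vertices. The sketch below is a minimal Python illustration under that assumption; it presumes the two scans share a common vertex topology (one-to-one vertex correspondence), and the names morph_heads, face_a, and face_b are illustrative, not the authors' actual software.

    # Illustrative sketch only: linear 3D morphing between two scanned heads,
    # assuming both meshes share the same vertex topology. Names are hypothetical.
    import numpy as np

    def morph_heads(verts_a: np.ndarray, verts_b: np.ndarray, w: float) -> np.ndarray:
        """Blend corresponding vertices: w = 0.0 yields FACE A, w = 1.0 yields FACE B."""
        return (1.0 - w) * verts_a + w * verts_b

    # Eleven morph levels in 10% steps, as in the test stimuli (0%, 10%, ..., 100%).
    morph_levels = np.linspace(0.0, 1.0, 11)

    # Example with dummy geometry: 5,000 vertices, each an (x, y, z) coordinate.
    face_a = np.random.rand(5000, 3)
    face_b = np.random.rand(5000, 3)
    test_heads = [morph_heads(face_a, face_b, w) for w in morph_levels]

At w = 0.5 the blended geometry carries no structural information favoring either identity, which is why the 50% morph level provides the cleanest test of a motion-driven bias.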