Effective facial caricatures, similar to those produced by hand, can be generated automatically by exaggerating the differences between individual faces and an average face (Brennan,
1985; Rhodes, Brennan, & Carey,
1987). These exaggerations can be better recognized than the originals (Rhodes et al.,
1987), suggesting that faces are encoded as deviations from a stored prototype. Automatic exaggeration with respect to the dimensions of emotional expression (Calder, Young, Rowland, & Perrett,
1997), sex, attractiveness (Perrett et al.,
1998; Perrett, May, & Yoshikawa,
1994), and age (Burt & Perrett,
1995) has also been successful. Some of these studies exaggerated visual texture as well as shape-based information (Rowland & Perrett,
1995; reviewed in Rhodes,
1996). The underlying principle of exaggerating differences from the average has been effectively applied to motion data with differences parameterized spatially (Pollick, Fidopiastis, & Braden,
2001), temporally (Hill & Pollick,
2000), and spatiotemporally (Giese, Knappmeyer, Thornton, & Bülthoff,
2002). In this paper, we use both exaggeration of spatial range and exaggeration of variations in timing within a particular movement to provide clues to the encoding of facial movement.
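To make the shared principle explicit, these caricature transformations can all be written as scaling an exemplar's deviation from an average; the notation below is an illustrative sketch rather than the formulation of any of the cited papers. For a spatial representation $x$ (e.g., a vector of feature positions) with average $\bar{x}$, the caricature at exaggeration level $k$ is

\[ x_{\text{caric}} = \bar{x} + k\,(x - \bar{x}), \]

where $k > 1$ exaggerates, $k = 1$ reproduces the original, and $0 < k < 1$ yields an anti-caricature closer to the average. The temporal analogue applies the same scaling to an exemplar's time course $t(u)$ relative to the average time course $\bar{t}(u)$:

\[ t_{\text{caric}}(u) = \bar{t}(u) + k\,\bigl(t(u) - \bar{t}(u)\bigr), \]

so that events occurring earlier or later than average are shifted earlier or later still, while the spatial form is left unchanged.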