Abstract
Human faces are dynamic objects. Recently, we have been using a number of novel tasks to explore the visual system's sensitivity to these complex, moving stimuli. For example, Thornton & Kourtzi (2002) used an immediate matching task to show that moving primes (video clips) were better cues to identity than static primes (still images); Knappmeyer, Thornton & Bülthoff (2002) used motion capture and computer animation techniques to demonstrate that incidentally learned patterns of characteristic motion could bias the perception of identity when spatial morphs were used to reduce the saliency of form cues. Here, we present two sets of experiments that exploit a new database of high-quality digital video sequences captured from five temporally synchronized cameras (Kleiner, Wallraven & Bülthoff, 2002). In the first series of experiments, we examined whether the dynamic matching advantage for identity decisions would generalize across viewpoint. We found that a) dynamic primes led to a small but reliable (24 ms) overall matching advantage compared to static primes; b) matching speed with dynamic primes was unaffected by view direction (left or right) or viewing angle (0, 22, or 45 degrees); and c) static primes were not only slower, but also more dependent on view direction and viewing angle. These results suggest that the additional information provided by the dynamic primes can compensate, to some extent, for viewpoint mismatches. In the second series of experiments, we examined visual search for expression singletons using arrays of moving faces. Our initial results indicate that search for faces can be much more efficient (15 ms/item) than previous studies using static images would suggest. Furthermore, as expression search using the same dynamic arrays turned upside down proved to be much harder (50 ms/item), it would appear that the observed upright performance is face-related, rather than relying on low-level static or dynamic cues.