Abstract
There has recently been growing interest in the role that motion might play in the perception and representation of facial identity. Most studies have used old/new recognition as a task, but, especially for non-rigid motion, they have often produced contradictory results. Here, we used a delayed visual search paradigm to explore how identity learning is affected by non-rigid facial motion. In an incidental learning phase, two faces were shown sequentially for an extended period of time: one face was presented moving non-rigidly and the other as a static picture. After a delay of several minutes, observers (N=18) were asked to indicate the presence or absence of the target faces among unfamiliar distractor faces, using identical static search arrays. Although undegraded facial stimuli were used at both study and test, and the search arrays were identical across conditions, faces that had been learned in motion were identified almost 300 ms faster than faces learned as static snapshots. In a second experiment we examined a familiar kind of rigid motion. Stimuli consisted of 3D heads from the MPI database placed on an avatar body, and the figures were animated so as to approach the observer in depth. In this experiment we explicitly compared performance on visual search and old/new recognition tasks (N=22). Again, in visual search, observers were significantly faster at detecting the face of the individual learned in motion. Across several variants of the old/new recognition task, however, we were unable to detect any difference between the moving and static conditions. Taken together, the visual search results of both experiments provide clear evidence that motion can affect identity decisions across extended periods of time, and that such effects may be difficult to observe with more traditional old/new recognition tasks. Possibly the list-learning aspects of these methods encourage coding strategies that are simply not appropriate for dynamic stimuli.