Abstract
We report three experiments investigating the effect of facial motion on face processing. Specifically, we used the face composite effect to examine whether and how rigid facial motion influences primarily featural or holistic processing of faces. In Experiments 1 and 2, after familiarization with dynamic displays in which a model’s face turned from one side to the other, participants judged whether the top half of a composite face belonged to the same model. By comparing performance with various static control conditions, which differed from each other in stimulus display and inter-stimulus interval (ISI), we found that the face composite effect was much smaller in the dynamic condition than in the static conditions. In other words, the dynamic face display appeared to promote mainly featural processing, allowing participants to extract the upper portion of the composite face more readily. To investigate whether this influence is specific to rigid facial motion or extends to other types of facial motion, in Experiment 3 we replaced the rigid facial motion with a non-rigid one (i.e., a chewing animation). The results again showed a smaller composite effect for dynamic than for static faces, supporting the conclusion that facial motion in general influences featural processing. The findings from the present experiments provide the strongest evidence to date that facial motion mainly influences featural, rather than holistic, face processing.
Meeting abstract presented at VSS 2012