Abstract
Much of our understanding of face processing has been derived from studies using static face pictures as stimuli. It is unclear to what extent our current knowledge about face processing generalizes to real-world situations in which faces are moving. Recent studies have shown that facial movements facilitate part-based, rather than holistic, face processing. The present study, using high-frequency eye tracking and the composite face paradigm, examined the overt visual attention mechanisms underlying the effect of facial movements on part-based processing. In the moving face condition, participants first remembered a face from a 2-second silent video depicting a face chewing and blinking. They were then tested with a static composite face. The upper and lower halves of the composite face came from different models and were displayed either aligned or misaligned. Participants judged whether the upper half of the composite face belonged to the same person as the face they had just seen. The static face condition was identical to the moving face condition except that the to-be-learned faces were static pictures. Participants' eye movements during learning and testing were recorded. Consistent with previous findings, learning moving faces led to a smaller composite effect than learning static faces, suggesting that facial movements facilitated part-based face processing. In addition, participants exhibited longer fixation durations (i.e., deeper processing) while learning the moving relative to the static faces. Further, each participant's upper-face looking-time advantage while learning moving relative to static faces positively predicted the increase in part-based face processing engendered by facial movements. This association was observed only in the aligned, not the misaligned, condition, indicating that fixating the upper half of the moving face specifically reduced interference from the aligned lower face half. These results indicate that facial movement optimizes part-based face processing by influencing eye movements.
Meeting abstract presented at VSS 2014