Vision Sciences Society Annual Meeting Abstract | August 2014
Facial movement optimizes part-based face processing by influencing eye movements
Author Affiliations
  • Naiqi Xiao
    Department of Applied Psychology and Human Development, University of Toronto
  • Paul Quinn
    Department of Psychology, University of Delaware
  • Qiandong Wang
    Department of Psychology, Zhejiang Normal University
  • Genyue Fu
    Department of Psychology, Zhejiang Normal University
  • Kang Lee
    Department of Applied Psychology and Human Development, University of Toronto
Journal of Vision August 2014, Vol. 14, 565. https://doi.org/10.1167/14.10.565
Abstract

Much of our understanding of face processing has been derived from studies using static face pictures as stimuli. It is unclear to what extent our current knowledge about face processing can be generalized to real-world situations where faces are moving. Recent studies have shown that facial movements facilitate part-based, not holistic, face processing. The present study, using high-frequency eye tracking and the composite face effect paradigm, examined the overt visual attention mechanisms underlying the effect of facial movements on part-based processing. In the moving face condition, participants first remembered a face from a 2-second silent video depicting a face chewing and blinking. They were then tested with a static composite face. The upper and lower halves of the composite face were from different models and were displayed either aligned or misaligned. Participants judged whether the upper half of the composite face belonged to the same person as the face they had just seen. The static face condition was identical to the moving face condition except that the to-be-learned faces were static pictures. Participants' eye movements during learning and testing were recorded. Consistent with previous findings, learning moving faces led to a smaller composite effect than learning static faces, suggesting that facial movements facilitated part-based face processing. In addition, participants exhibited longer looking time for each fixation (i.e., deeper processing) while learning the moving relative to the static faces. Further, each participant's upper-face looking-time advantage while learning moving relative to static faces positively predicted the increase in part-based face processing engendered by facial movements. This association was observed in the aligned but not the misaligned condition, indicating that fixating the moving upper face half was specific to reducing interference from the aligned lower face half. These results indicate that facial movement optimizes part-based face processing by influencing eye movements.

Meeting abstract presented at VSS 2014
