Vision Sciences Society Annual Meeting Abstract | August 2012
Movement helps famous and unfamiliar face matching: Evidence from a sorting task
Author Affiliations
  • Rachel Bennetts
    MARCS, University of Western Sydney
  • Darren Burke
    School of Psychology, University of Newcastle
  • Kevin Brooks
    Department of Psychology, Macquarie University
  • Jeesun Kim
    MARCS, University of Western Sydney
  • Simon Lucey
    Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia
  • Jason Saragih
    Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia
  • Rachel Robbins
    MARCS, University of Western Sydney
    School of Psychology, University of Western Sydney
Journal of Vision August 2012, Vol. 12, 981. https://doi.org/10.1167/12.9.981
Abstract

We can use the characteristic way a person moves their face and head ("dynamic facial signatures") as a cue to identity. Theoretically, we should have pre-existing representations of the way a familiar face moves, making it easier to match the movement of familiar faces than of unfamiliar faces. However, few studies have directly compared the benefits of movement for familiar and unfamiliar faces. It is also unclear whether the use of dynamic facial signatures depends on the type of movement or on a particular face area. In this study, we investigated the movement advantage for famous and unfamiliar faces using a sorting task. Participants sorted groups of moving or static shape-normalized point-light displays (PLDs), using either rigid head movement (e.g., nodding, tilting), non-rigid face movement (e.g., smiling, talking), or combined rigid and non-rigid movement. In Experiment 1, standard PLDs were used. In Experiment 2, the PLDs included the eyes, while in Experiment 3, they included the teeth and tongue. Accuracy scores were divided by the average number of times clips were viewed. Famous and unfamiliar faces were sorted equally well overall. Famous faces showed a movement advantage for combined and non-rigid clips, but not for rigid clips. The results suggest that participants were using mouth information: famous face PLDs with mouths were sorted better than standard PLDs or PLDs with eyes. Like famous faces, unfamiliar faces also showed a movement advantage for combined motion. Unlike famous faces, unfamiliar faces were sorted equally well from standard PLDs and from those with mouths or eyes. Overall, these results show that both famous and unfamiliar faces can be sorted based on dynamic facial signatures. Sorting famous faces may rely more on non-rigid movements of the mouth region than on the eyes or rigid motion, whereas sorting unfamiliar faces is best when both rigid and non-rigid movement are present.

Meeting abstract presented at VSS 2012
