Vision Sciences Society Annual Meeting Abstract  |   August 2010
Recognition of static versus dynamic faces in prosopagnosia
Author Affiliations
  • David Raboy
    The University of Texas at Dallas
  • Alla Sekunova
    Department of Ophthalmology and Visual Science, University of British Columbia
    Department of Medicine (Neurology), University of British Columbia
  • Michael Scheel
    Department of Ophthalmology and Visual Science, University of British Columbia
    Department of Medicine (Neurology), University of British Columbia
  • Vaidehi Natu
    The University of Texas at Dallas
  • Samuel Weimer
    The University of Texas at Dallas
  • Brad Duchaine
    Institute of Cognitive Neuroscience, University College London
  • Jason Barton
    Department of Ophthalmology and Visual Science, University of British Columbia
    Department of Medicine (Neurology), University of British Columbia
  • Alice O'Toole
    The University of Texas at Dallas
Journal of Vision, August 2010, Vol. 10(7), 594. https://doi.org/10.1167/10.7.594
Abstract

A striking finding in the face recognition literature is that motion improves recognition accuracy only when viewing conditions are poor. This may be due to parallel (separate) neural processing of invariant (identity) information in the fusiform gyrus and changeable (social communication) information in the posterior superior temporal sulcus (pSTS) (Haxby et al., 2000). The pSTS may serve as a secondary “back-up” route for recognizing faces from identity-specific facial dynamics (O'Toole et al., 2002). This predicts that prosopagnosics with an intact pSTS may be able to recognize faces when the faces are presented in motion. We compared face recognition in prosopagnosics with an intact pSTS (n=2) and neurologically intact controls (n=19), using static and dynamic (speaking/expressing) faces tested in identical and “changed” stimulus conditions (e.g., a different video with a change of hairstyle). Participants learned 40 faces: half from dynamic videos and half from multiple static images extracted from the videos. At test, participants made “old/new” judgments to identical and changed stimuli from the learning session and to novel faces. As expected, controls showed equivalent accuracy for static and dynamic conditions, with better performance for identical than for changed stimuli. Using the same procedure, we tested two prosopagnosic patients: MR, whose lesion destroyed the right occipital face area (OFA) and fusiform face area (FFA), and BP, who has a right anterior temporal lesion that spares these areas. For identical stimuli, MR and BP performed marginally better on static faces than on dynamic faces. For the more challenging problem of recognizing people from changed stimuli, both MR and BP performed substantially better on dynamic faces. The motion advantage seen for MR and BP in the changed stimulus condition is consistent with the hypothesis that patients with a preserved pSTS may show better face recognition for moving faces.
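
The abstract reports recognition accuracy for the old/new judgments without specifying how performance was scored. As a purely illustrative sketch (not the authors' method), old/new recognition is often quantified with signal-detection sensitivity, d' = z(hit rate) - z(false-alarm rate); the Python snippet below shows one such computation, assuming per-condition counts of hits, misses, false alarms, and correct rejections, with a log-linear correction so extreme rates stay finite. The example counts are made up.

    # Illustrative only: scoring an old/new recognition condition with d'.
    # Assumes hit/miss and false-alarm/correct-rejection counts are available
    # for each stimulus condition (e.g., dynamic faces, changed stimuli).
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """d' for an old/new test, with a log-linear correction so
        rates of exactly 0 or 1 do not produce infinite z-scores."""
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Hypothetical participant, one condition: 20 old and 20 new faces
    print(d_prime(hits=15, misses=5, false_alarms=4, correct_rejections=16))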

Raboy, D., Sekunova, A., Scheel, M., Natu, V., Weimer, S., Duchaine, B., Barton, J., & O'Toole, A. (2010). Recognition of static versus dynamic faces in prosopagnosia [Abstract]. Journal of Vision, 10(7):594, 594a, http://www.journalofvision.org/content/10/7/594, doi:10.1167/10.7.594.
Footnotes
 CIHR MOP-77615, Canada Research Chair program, Michael Smith Foundation for Health Research (JB).