Vision Sciences Society Annual Meeting Abstract  |   August 2009
Use of a correlative training method in the rehabilitation of acquired prosopagnosia
Author Affiliations
  • Ann Grbavec, Christopher Fox, Jason Barton
    Human Vision and Eye Movement Laboratory, Departments of Ophthalmology and Visual Sciences, Medicine (Neurology), and Psychology, University of British Columbia, Vancouver, BC, Canada
Journal of Vision August 2009, Vol.9, 487. doi:10.1167/9.8.487
Ann Grbavec, Christopher Fox, Jason Barton; Use of a correlative training method in the rehabilitation of acquired prosopagnosia. Journal of Vision 2009;9(8):487. doi: 10.1167/9.8.487.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

No effective treatment is known for acquired prosopagnosia. We investigated a novel rehabilitative training strategy, based on work with neural network models showing that correlating a weak cue with a strong cue during training can help a network learn tasks it could not otherwise learn. Many prosopagnosic subjects can recognize facial expressions despite their problems with identity. By correlating expression with identity during the early stages of training, we can pair a strong cue (expression) with a weak one (identity). With repeated training, this correlation should increase the perceived difference between these novel faces, eventually allowing recognition of identity even when expression is no longer correlated.
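The correlative idea can be illustrated with a toy simulation. This is a minimal sketch, not the authors' model: the logistic learner, the cue magnitudes, and the noise level are all assumptions chosen only to show the principle of pairing a weak cue (identity) with a strong, temporarily correlated cue (expression) during training, then testing with the strong cue uninformative.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def make_trial(correlated):
    """One trial: a weak, noisy identity cue plus a strong expression cue."""
    y = random.choice([0, 1])                            # identity label
    weak = (0.3 if y else -0.3) + random.gauss(0, 0.5)   # weak identity cue
    if correlated:
        strong = 2.0 if y else -2.0                      # expression tracks identity
    else:
        strong = random.choice([2.0, -2.0])              # expression uninformative
    return [weak, strong], y

def train(epochs=3000, lr=0.1):
    """Logistic learner trained while expression is correlated with identity."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        x, y = make_trial(correlated=True)
        g = y - sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        w[0] += lr * g * x[0]
        w[1] += lr * g * x[1]
        b += lr * g
    return w, b

def weak_cue_accuracy(w, n=2000):
    """Test identity recognition once expression no longer predicts identity."""
    hits = 0
    for _ in range(n):
        x, y = make_trial(correlated=False)
        # expression weight and bias dropped: decision rests on the identity cue alone
        hits += int((w[0] * x[0] > 0) == bool(y))
    return hits / n

w, b = train()
acc = weak_cue_accuracy(w)  # above chance: the weak identity cue was learned
```

The point of the sketch is only that a weight on the weak cue accrues during the correlated phase, so classification survives removal of the correlation; it does not capture the representational differentiation the abstract hypothesizes.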

We trained two prosopagnosic subjects (R-AT1 and B-AT1), both with anterior temporal lesions and intact recognition of facial expression. During the correlative method, subjects learned five frontal-view faces, each initially shown with a unique expression. Once they achieved a criterion success rate, a modest degree of variability in expression was introduced, with further variability added each time criterion was again achieved, until expression was eventually uncorrelated with identity after several weeks of thrice-weekly training. Additional training runs were performed with hair removed and with external contour removed. As control experiments, we had subjects learn five other faces over a similar period, but without any correlation between identity and expression.
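The criterion-gated schedule above can be sketched as a simple state machine. The level names, the criterion value, and the per-session success rates below are illustrative assumptions; the abstract does not report the actual criterion or number of variability steps.

```python
CRITERION = 0.9  # assumed criterion success rate (not stated in the abstract)
LEVELS = ["unique expressions", "low variability", "medium variability",
          "expression uncorrelated with identity"]

def next_level(level_idx, accuracy):
    """Advance one variability step only after criterion is met at the current level."""
    if accuracy >= CRITERION and level_idx < len(LEVELS) - 1:
        return level_idx + 1
    return level_idx

# Simulated per-session success rates over several weeks of training:
sessions = [0.7, 0.92, 0.95, 0.8, 0.93]
level = 0
for rate in sessions:
    level = next_level(level, rate)
print(LEVELS[level])  # → "expression uncorrelated with identity"
```

A sub-criterion session (0.8 above) simply holds the subject at the current variability level until criterion is reached again.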

Subjects learned to recognize these small sets of faces, even without hair or external contour, and showed high levels of retention even two months later. However, subjects also learned the faces in the control experiments, suggesting that repeated exposure alone was also effective. fMRI scanning in one subject showed a significant increase in both peak-voxel significance and the number of face-selective voxels in the fusiform face area after training. These results show that prosopagnosics can learn to recognize a small set of faces with at least some invariance for expression.

Grbavec, A., Fox, C., & Barton, J. (2009). Use of a correlative training method in the rehabilitation of acquired prosopagnosia [Abstract]. Journal of Vision, 9(8):487, 487a, http://journalofvision.org/9/8/487/, doi:10.1167/9.8.487. [CrossRef]
Footnotes
 This work was supported by the Margaret L. Adamson Award in Vision Research (AMG), CIHR Operating Grant MOP-77615, CIHR Canada Graduate Scholarship Doctoral Research Award and MSFHR Senior Graduate Studentship (CJF), Canada Research Chair and Michael Smith Foundation for Health Research Senior Scholarship (JJSB).