Abstract
No effective treatment is known for acquired prosopagnosia. We investigated a novel rehabilitative training strategy, based on work with neural network models showing that correlating a weak cue with a strong cue during training can help a network learn tasks that it could not otherwise learn. Many prosopagnosic subjects can recognize facial expressions despite their problems with identity. By correlating expression with identity during the early stages of training, we can pair a strong cue (expression) with a weak one (identity). With repeated training, this correlation should increase the perceived difference between these novel faces, eventually allowing recognition of identity even when expression is no longer correlated with it.
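The network principle invoked here can be illustrated with a minimal sketch: a linear unit is trained on a weak, low-amplitude cue (standing in for identity) alongside a strong cue (standing in for expression) that is initially correlated with the target and then gradually decorrelated. This is not the model from the cited neural-network work; the feature scales, learning rate, and decorrelation schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_batch(n, corr):
    """Targets y in {-1, +1}. The weak cue carries the target at low
    amplitude (0.1*y); the strong cue equals y with probability `corr`
    and is random otherwise (its effective correlation with y is `corr`)."""
    y = rng.choice([-1.0, 1.0], size=n)
    weak = 0.1 * y
    strong = np.where(rng.random(n) < corr, y,
                      rng.choice([-1.0, 1.0], size=n))
    return np.stack([weak, strong], axis=1), y

w = np.zeros(2)
lr = 0.5
# Schedule (assumed): the strong cue starts fully correlated with the
# target, then is decorrelated in stages, mirroring the staged removal
# of the expression-identity correlation during training.
for corr in [1.0, 0.75, 0.5, 0.25, 0.0]:
    for _ in range(200):
        X, y = make_batch(64, corr)
        w += lr * X.T @ (y - X @ w) / len(y)   # LMS gradient step

# After full decorrelation, the weak cue alone must carry the task.
X_test, y_test = make_batch(500, 0.0)
acc = np.mean(np.sign(X_test @ w) == y_test)
print(f"weights: {w}, accuracy on decorrelated test: {acc:.2f}")
```

By the end of the schedule, the weight on the now-uninformative strong cue has decayed while the weight on the weak cue has grown, so the unit classifies correctly from the weak cue alone, which is the behavior the training strategy aims to induce for identity.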
We trained two prosopagnosic subjects (R-AT1 and B-AT1) with anterior temporal lesions and intact recognition of facial expression. During the correlative method, subjects learned five frontal-view faces, each initially shown with a unique expression. Once they reached a criterion success rate, a modest degree of variability in expression was introduced, with a further increment each time criterion was reached again, until expression was eventually uncorrelated with identity after several weeks of thrice-weekly training. Additional training runs were performed with hair removed and with external contour removed. As control experiments, we had subjects learn five other faces over a similar period, but without any correlation between identity and expression.
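The criterion-driven staircase described above can be sketched as a small rule: variability advances one step whenever the subject's session performance meets criterion, and holds otherwise. The criterion value and the variability steps below are illustrative assumptions, not the study's actual parameters.

```python
CRITERION = 0.9  # assumed success rate required to advance a step
# 0.0 = one unique expression per face; 1.0 = expression fully
# uncorrelated with identity (illustrative step values).
VARIABILITY_STEPS = [0.0, 0.25, 0.5, 0.75, 1.0]

def next_step(current_step, session_accuracy):
    """Return the variability step for the next session: advance one
    step when criterion is met, otherwise repeat the current step."""
    at_top = current_step >= len(VARIABILITY_STEPS) - 1
    if session_accuracy >= CRITERION and not at_top:
        return current_step + 1
    return current_step

# Example: a subject at step 0 who scores 95% advances to step 1;
# a subject who scores 50% repeats step 0.
print(next_step(0, 0.95), next_step(0, 0.50))
```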
Subjects learned to recognize these small sets of faces, even without hair or external contour, and showed high retention even two months later. However, subjects also learned the faces in the control experiments, suggesting that repeated exposure alone was also effective. fMRI scanning in one subject showed a significant increase in both peak-voxel significance and the number of face-selective voxels in the fusiform face area after training. These results show that prosopagnosics can learn to recognize a small set of faces with at least some invariance for expression.
This work was supported by the Margaret L. Adamson Award in Vision Research (AMG), CIHR Operating Grant MOP-77615, CIHR Canada Graduate Scholarship Doctoral Research Award and MSFHR Senior Graduate Studentship (CJF), Canada Research Chair and Michael Smith Foundation for Health Research Senior Scholarship (JJSB).