Abstract
Is the fusiform face area (FFA) a module specialized for processing faces, or does it simply support generic visual expertise? Researchers have investigated this question using Multi-Voxel Pattern Analysis (MVPA) applied to fMRI data. Haxby et al. (2001) showed that patterns of neural activation in object-selective visual cortex can be used to discriminate object categories, even when the voxels most selective for those categories are removed. This provided evidence for a distributed neural code, in which information about faces exists outside the FFA. In contrast, Spiridon and Kanwisher (2002) showed that activation patterns in face-selective cortex were more effective for face vs. non-face discriminations than for non-face vs. non-face discriminations, whereas no analogous advantage held for other object categories. This implied that FFA neurons carry information specialized for faces, but that no comparable specialized module exists for other object categories.
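As a rough illustration of the logic behind such MVPA discriminations, the following sketch classifies category pairs with a split-half correlation rule. The voxel patterns, category count, and noise level are hypothetical placeholders, not the data or analysis code of either study.

```python
# Minimal sketch of correlation-based MVPA discrimination in the spirit of
# Haxby et al. (2001). All arrays below are synthetic stand-ins for fMRI data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical voxel patterns: two independent halves of the data, with one
# mean pattern per category (rows) over n_voxels voxels (columns).
n_categories, n_voxels = 4, 200
half1 = rng.normal(size=(n_categories, n_voxels))
half2 = half1 + rng.normal(scale=0.5, size=(n_categories, n_voxels))

def discriminates(half_a, half_b, i, j):
    """A category pair counts as discriminated if each category's pattern in
    one half correlates more with itself in the other half than with the
    other category (split-half correlation rule)."""
    within_i = np.corrcoef(half_a[i], half_b[i])[0, 1]
    within_j = np.corrcoef(half_a[j], half_b[j])[0, 1]
    between = np.corrcoef(half_a[i], half_b[j])[0, 1]
    return within_i > between and within_j > between

for i in range(n_categories):
    for j in range(i + 1, n_categories):
        print(i, j, discriminates(half1, half2, i, j))
```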
We applied MVPA to our neurocomputational model of visual processing. Images are subjected to Gabor filtering, then to PCA, and are then input to a Kohonen network, a self-organizing neural network that groups similar inputs together to form a two-dimensional “semantic map” of stimulus space. We trained the model on images of cups, cans, books, and faces. As in Haxby et al. (2001), the activity of units in areas dedicated to one category can be used to discriminate among the other categories. However, in line with Spiridon and Kanwisher (2002), the face area is better at distinguishing faces from non-faces than at distinguishing non-face categories from each other, while non-face areas are on average equally effective at both tasks. In the model, this is explained by the lower within-category variability of face representations compared to those of other categories such as cups. Hence, in a model of visual cortex possessing no special mechanism for face processing, we reproduce Spiridon and Kanwisher's results, casting doubt on their interpretation in favor of a specialized face module.
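The sketch below outlines the pipeline described above: a Gabor filter bank, PCA, and a small Kohonen self-organizing map. All image sizes, filter parameters, map dimensions, and the randomly generated “stimuli” are illustrative assumptions; the actual model was trained on real images of cups, cans, books, and faces with parameters not given in this abstract.

```python
# Minimal sketch of the Gabor -> PCA -> Kohonen pipeline, under assumed
# parameters. The "images" are random prototypes plus noise, not real stimuli.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)

def gabor_kernel(size=9, theta=0.0, freq=0.25, sigma=3.0):
    """Real part of a Gabor filter at one orientation and spatial frequency."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def gabor_features(image, n_orient=4):
    """Filter the image with a small Gabor bank and concatenate the responses."""
    responses = [convolve2d(image, gabor_kernel(theta=np.pi * k / n_orient),
                            mode="same") for k in range(n_orient)]
    return np.concatenate([r.ravel() for r in responses])

def pca(X, n_components):
    """Project mean-centred feature vectors onto their leading principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

class SOM:
    """A small Kohonen map: units on a 2-D grid compete for each input; the
    winning unit and its grid neighbours move toward that input."""
    def __init__(self, rows, cols, dim, seed=0):
        self.w = np.random.default_rng(seed).normal(size=(rows, cols, dim))
        self.grid = np.stack(
            np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

    def winner(self, x):
        d = np.linalg.norm(self.w - x, axis=-1)
        return np.unravel_index(np.argmin(d), d.shape)

    def train(self, data, epochs=20, lr=0.5, sigma=2.0):
        for t in range(epochs):
            decay = np.exp(-t / epochs)  # shrink learning rate and neighbourhood
            for x in data:
                rc = np.array(self.winner(x))
                dist2 = np.sum((self.grid - rc) ** 2, axis=-1)
                h = np.exp(-dist2 / (2 * (sigma * decay) ** 2))
                self.w += lr * decay * h[..., None] * (x - self.w)

# Hypothetical stimuli: a noisy 24x24 prototype per category (stand-ins for
# the cup, can, book, and face images used to train the actual model).
categories = ["cup", "can", "book", "face"]
prototypes = {c: rng.normal(size=(24, 24)) for c in categories}
images, labels = [], []
for c in categories:
    for _ in range(10):
        images.append(prototypes[c] + 0.3 * rng.normal(size=(24, 24)))
        labels.append(c)

features = pca(np.array([gabor_features(im) for im in images]), n_components=10)

som = SOM(rows=8, cols=8, dim=features.shape[1])
som.train(features)

# Category-dedicated regions emerge as clusters of winning units on the map;
# MVPA-style analyses can then be run on the unit activations in each region.
for c in categories:
    winners = [som.winner(f) for f, l in zip(features, labels) if l == c]
    print(c, winners)
```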