Valentinos Zachariou, Zaid Safiullah, Leslie Ungerleider; The FFA can process a non-face category of objects as robustly as faces. Journal of Vision 2015;15(12):424. doi: 10.1167/15.12.424.
© ARVO (1962-2015); The Authors (2016-present)
The FFA is a functionally defined, domain-specific brain region serving face perception, although it may also subserve the perception of other object categories, namely objects of expertise. Here, we present evidence that the FFA, defined using a standard face localizer task (faces > houses), can respond to a non-face category as robustly as to faces even when participants are not experts in the non-face category. We demonstrated this in human adults who performed same-different judgments on two object categories (faces and chairs) while undergoing fMRI. Within each category, two exemplars differed in the shape or the spatial configuration of their features (featural/configural differences), and task difficulty was matched a priori, separately for each participant, in terms of reaction time and accuracy. The functional contrast of faces vs. chairs yielded no significant activation within the right FFA at either the group or the individual level. A closer investigation within each individually defined FFA revealed two overlapping activation peaks of equivalent magnitude: one for faces and another for chairs (mod/mean Euclidean distance: 2.4/3.6 mm; EPI: 3.2 mm isotropic). To assess the specificity of the face peak, the ten most significantly active voxels comprising that peak were used to train pattern classifiers to differentiate between chair configural and featural trials. The classifiers performed well above chance (60% accuracy), and their performance was comparable to that of classifiers differentiating face configural from featural trials using the same peak voxels (63% accuracy).
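The decoding analysis above can be illustrated with a minimal sketch: a leave-one-out cross-validated classifier applied to trial-wise patterns from a small set of voxels. The abstract does not specify the classifier or preprocessing, so the data here are simulated and a simple nearest-mean (prototype) classifier stands in for whatever method was actually used; all names and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the abstract's analysis: decode "configural"
# vs. "featural" trials from patterns over the 10 peak voxels.
n_trials_per_cond = 20
n_voxels = 10  # ten most significantly active voxels, as in the abstract

# Simulated voxel patterns: each condition has a distinct mean pattern
# plus trial-by-trial Gaussian noise (purely synthetic data).
mu_config = rng.normal(0.0, 1.0, n_voxels)
mu_feat = mu_config + rng.normal(0.0, 1.5, n_voxels)  # condition offset
X = np.vstack([
    mu_config + rng.normal(0.0, 1.0, (n_trials_per_cond, n_voxels)),
    mu_feat + rng.normal(0.0, 1.0, (n_trials_per_cond, n_voxels)),
])
y = np.array([0] * n_trials_per_cond + [1] * n_trials_per_cond)

def loo_nearest_mean_accuracy(X, y):
    """Leave-one-out cross-validation with a nearest-mean classifier:
    each held-out trial is assigned to the condition whose mean pattern
    (computed from the remaining trials) is closest in Euclidean distance."""
    correct = 0
    for i in range(len(y)):
        mask = np.ones(len(y), dtype=bool)
        mask[i] = False
        protos = [X[mask & (y == c)].mean(axis=0) for c in (0, 1)]
        pred = int(np.argmin([np.linalg.norm(X[i] - p) for p in protos]))
        correct += pred == y[i]
    return correct / len(y)

acc = loo_nearest_mean_accuracy(X, y)
print(f"decoding accuracy: {acc:.2f}")  # above the 0.5 chance level for separable data
```

With well-separated condition means, cross-validated accuracy exceeds the 50% chance level, mirroring the logic (though not the specific numbers) of the 60%/63% results reported above.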
We conclude that, under certain conditions, the FFA can respond as strongly to a non-face category as it does to faces, and that the pattern of activity associated with the non-face category contains sufficient information, even within the most face-selective voxels, to differentiate between different attributes of that category.
Meeting abstract presented at VSS 2015