Fernando Ramirez, Radoslaw M. Cichy, Carsten Allefeld, John-Dylan Haynes; Translation tolerant and category-selective encoding of orientation in the fusiform face area. Journal of Vision 2012;12(9):1180. doi: 10.1167/12.9.1180.
The fusiform face area (FFA) is a region of the human ventral visual pathway that responds more strongly to faces than to objects. The precise role of this region in face perception is not well understood, and its face selectivity has been debated. Furthermore, it is unclear which properties of visual stimuli are systematically reflected in its patterns of activation. Prior research suggests that FFA might encode face orientation. Here we directly explore the encoding of orientation using a combination of functional magnetic resonance imaging (fMRI), multivoxel pattern analysis (MVPA), and computational modeling. We presented subjects with synthetic images of faces and cars that were rotated in depth and displayed either above or below fixation. We then explored orientation-related information available in fine-grained activity patterns in FFA, lateral occipital cortex (LO), and early visual cortex (EVC). Distributed signals from FFA allowed above-chance within-category classification of orientation only for faces. This finding generalized to faces presented at different retinotopic positions. In contrast, classification in EVC and LO yielded comparable, above-chance classification of face and car orientation, but only when classifiers were trained and tested on corresponding retinotopic positions. Cross-position classification accuracies were substantially decreased for both categories in LO and did not differ from chance in EVC. Finally, we compared a computational model of population coding in FFA with the data using representational dissimilarity analysis.
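The cross-position generalization test described above (training a classifier on patterns evoked at one retinotopic position and testing it at the other) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: the voxel counts, noise level, and the nearest-class-mean classifier are all assumptions chosen for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 100
labels = rng.integers(0, 2, n_trials)          # two face orientations

# Synthetic voxel patterns: an orientation signal shared across positions,
# standing in for a translation-tolerant representation.
signal = rng.normal(size=(2, n_voxels))

def simulate(labels, noise=1.0):
    return signal[labels] + noise * rng.normal(size=(len(labels), n_voxels))

X_above = simulate(labels)   # trials with the stimulus above fixation
X_below = simulate(labels)   # trials with the stimulus below fixation

# Nearest-class-mean classifier: fit class means on one position,
# then decode orientation from patterns measured at the other position.
means = np.stack([X_above[labels == k].mean(axis=0) for k in (0, 1)])
dists = np.linalg.norm(X_below[:, None, :] - means[None, :, :], axis=2)
pred = dists.argmin(axis=1)
acc = (pred == labels).mean()   # above-chance accuracy indicates generalization
```

If the orientation signal were position-specific rather than shared, cross-position accuracy would fall to chance (0.5 here), which is the pattern the abstract reports for EVC.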
We conclude that: (i) category-selective effects of stimulus orientation are reflected in the fine-grained patterns of activation in FFA, (ii) the structure of these patterns is tolerant to translation, (iii) frontal views of faces are most robustly represented, and (iv) our decoding results presumably reflect an inhomogeneous distribution across voxels of spatially clustered, angle-tuned neural populations.
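The representational dissimilarity analysis used to compare the population-coding model with the fMRI data can be sketched in its generic form: build a representational dissimilarity matrix (RDM) from each source and rank-correlate the two. The condition count, noise level, and patterns below are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cond, n_voxels = 6, 50    # e.g., six viewing angles as conditions

# Model-predicted condition patterns, and noisy "measured" counterparts
model_patterns = rng.normal(size=(n_cond, n_voxels))
data_patterns = model_patterns + 0.5 * rng.normal(size=(n_cond, n_voxels))

def rdm(patterns):
    # RDM entries: 1 - Pearson correlation for each condition pair
    c = np.corrcoef(patterns)
    iu = np.triu_indices(len(patterns), k=1)
    return 1.0 - c[iu]

def spearman(a, b):
    # Rank correlation between two RDM vectors (no ties expected here)
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

# A high rank correlation means the model captures the relational
# structure of the measured activity patterns.
rho = spearman(rdm(model_patterns), rdm(data_patterns))
```

Comparing RDMs rather than raw patterns is what makes this analysis suitable for model-data comparison: it abstracts away from the arbitrary correspondence between model units and voxels.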
Meeting abstract presented at VSS 2012