Fraser Smith, Lucy Petro, Philippe Schyns, Lars Muckli; Complex Contextual Processing in V1 during Face Categorizations. Journal of Vision 2010;10(7):657. doi: 10.1167/10.7.657.
Primary visual cortex (area V1) and higher visual areas are reciprocally connected. To understand the nature of this reciprocal processing in more detail, we investigated the importance of area V1 (and its subregions) during complex face categorization tasks. It is generally assumed that gender or expression classification of faces is a complex cognitive task that relies on processing in higher visual areas. Here we tested the hypothesis that primary visual cortex (V1) is involved in the processing of facial expressions. In an fMRI experiment we delineated the borders of area V1 and subsequently mapped the cortical representation of the eye and mouth regions during a face categorization task. We then trained a multivariate pattern classifier (a linear SVM) to classify happy and fearful faces on the basis of V1 data from these “eye” and “mouth” regions, and from the remaining V1 area. We found that activity patterns in all three regions supported successful classification, depending on the task. In a second step we investigated the spatial distribution of the most informative vertices throughout V1 in more detail. Again we saw the importance of the cortical representation of the eyes and mouth, but also a strong contribution from outside these regions, i.e. from “non-diagnostic” V1. Our findings are compatible with the idea that contextual information modulates area V1 not only in the restricted regions representing the most diagnostic information but also in a more distributed way.
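The classification step described above can be sketched as follows. This is a minimal illustration using synthetic data in place of real V1 fMRI responses; the data dimensions, signal structure, SVM settings, and cross-validation scheme are all assumptions for illustration, not the authors' actual pipeline.

```python
# Sketch of MVPA decoding with a linear SVM, as described in the abstract.
# Synthetic "vertex" patterns stand in for real V1 data (assumed setup).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials_per_class = 40   # trials per expression condition (assumed)
n_vertices = 200          # cortical vertices in one V1 subregion (assumed)

# Simulate response patterns: a weak, spatially distributed signal
# separates "happy" (label 0) from "fearful" (label 1) trials.
signal = rng.normal(0.0, 0.5, n_vertices)
happy = rng.normal(0.0, 1.0, (n_trials_per_class, n_vertices)) + signal
fearful = rng.normal(0.0, 1.0, (n_trials_per_class, n_vertices)) - signal

X = np.vstack([happy, fearful])
y = np.array([0] * n_trials_per_class + [1] * n_trials_per_class)

# Cross-validated linear SVM; above-chance accuracy indicates that the
# region's activity patterns carry decodable expression information.
clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, X, y, cv=5)
accuracy = scores.mean()
print(f"cross-validated accuracy: {accuracy:.2f}")

# The fitted weight map (one weight per vertex) can then be ranked to
# find the most informative vertices, analogous to the second step
# described in the abstract.
clf.fit(X, y)
informative = np.argsort(-np.abs(clf.coef_[0]))[:10]
```

In practice this analysis would be run separately on the "eye", "mouth", and remaining-V1 vertex sets, comparing each region's cross-validated accuracy against chance.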