Abstract
The features composing a face are not processed in isolation by the visual system. Rather, they are encoded interactively, as commonly illustrated by the composite illusion. Interactions between face features are observer-dependent, as they are sensitive to face orientation in the picture plane. Interactive face processing (IFP), as indexed by the composite illusion, presumably arises in face-preferring cortical regions located in the right middle fusiform gyrus, termed the right fusiform face area (rFFA). Yet the composite illusion is a limited marker of IFP, because it restricts the study of IFP to a single response modality (“same” responses) and because the sharp edges introduced in composite stimuli are known to impair the processing of face information. The present experiment readdresses IFP in the human brain using the congruency paradigm, which bypasses both limitations: (1) IFP is measured in all response modalities, and (2) face stimuli are not distorted by artificial edges. In a slow event-related design, subjects were presented with face pairs and decided whether a target face region (i.e., the eyes and eyebrows) was the same or different while ignoring the distracter features (i.e., the nose and mouth). In congruent conditions, target and distracter features called for the same decision; in incongruent conditions, they called for opposite decisions. Faces were presented at upright and inverted orientations. Our results reveal that performance was better when the target region was embedded in a congruent rather than an incongruent face context, indicating that distracter and target features were processed interactively. In the rFFA, the neural response in incongruent conditions was as strong as when all features differed in a pair, suggesting that feature incongruency was treated as a full identity change in this region. Inversion eliminated these differences in rFFA activity. This pattern was not found in other face-selective regions.
Our results thus strengthen the view that the rFFA is the main site of interactive face processing.