Abstract
Leading models of face recognition assume that identity and emotional expression are recognised in distinct cognitive and neural systems. Although there is strong evidence for the separation of these abilities, it is currently unclear whether this division occurs during visual-perceptual processing or at later, modality-unspecific stages of face recognition. To track the time course of face identity and expression matching, and to determine when these processes separate during visual-perceptual stages of face recognition, we employed the N250r component as an event-related brain potential (ERP) marker of the match between visual working memory and perceptual representations. Two different face images were presented sequentially on each trial. In different tasks, participants had to decide whether either the identity or the expression of the two faces was the same or different. Changes or repetitions of the other, task-irrelevant dimension (expression or identity) were varied orthogonally. In both the identity and expression matching tasks, the matching process, as reflected by N250r onset, started earlier when the irrelevant dimension was also repeated. N250r components to a matching task-relevant feature were delayed and attenuated when the task-irrelevant feature changed. Behavioural matching performance was also superior on trials where the currently irrelevant dimension was repeated. These findings demonstrate that face identity and expression are not processed independently. Even when only one of these dimensions is task-relevant, both are initially encoded in an integrated representation, and this can result in interference from the irrelevant dimension on visual-perceptual identity or expression matching processes.
Meeting abstract presented at VSS 2015