Margaret Moulson, Benjamin Balas, Charles Nelson, Pawan Sinha; EEG correlates of categorical and graded face perception. Journal of Vision 2008;8(6):533. doi: 10.1167/8.6.533.
Face perception is a critical social ability that is subserved by distinct neural systems. Previous research has shown that faces elicit a distinctive electrophysiological signature, the N170, which has a larger amplitude and shorter latency in response to faces than to other objects. However, determining the face specificity of any neural marker of face perception hinges on finding an appropriate control stimulus. Our goal was to use a state-of-the-art computational model of face detection to create a novel stimulus set of 300 images spanning a continuum from no similarity to faces to genuine faces, and to use it to explore the neural correlates of face perception in a principled way. Behaviorally, human observers accurately categorized these images as faces or non-faces, but their pairwise ratings confirmed that the non-face images spanned a continuum of image-level similarity to faces, from none to high. High-density (128-channel) event-related potentials (ERPs) were recorded while 9 adult subjects viewed all 300 images in random order and judged whether each image was a face or a non-face. The goal of our analyses was to determine whether the ERP signal reflects strict face/non-face categorization or, rather, the continuum of “face-ness” built into these images. Interestingly, two different kinds of analyses yielded evidence for both categorical and graded responses. Traditional waveform analyses revealed that the N170 component over occipitotemporal electrodes was larger in amplitude for faces than for all non-faces, even those with high image similarity to faces, suggesting a categorical distinction between faces and non-faces. By contrast, single-trial classification across the entire waveform, using machine learning techniques, revealed that high-similarity non-face images were harder to classify as non-faces than low-similarity non-faces.
These results suggest that both categorical and graded information are available but ‘multiplexed’ in a subset of the ERP signals.
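The single-trial decoding logic can be illustrated with a minimal sketch. This is not the authors' actual pipeline (their classifier, features, and preprocessing are not described in the abstract); it uses simulated waveform data, a plain logistic-regression classifier from scikit-learn, and made-up noise levels and similarity values, purely to show how misclassification rates of non-face trials can be compared across levels of image similarity to faces:

```python
# Hypothetical sketch (assumed data and parameters, not the study's pipeline):
# simulate single-trial ERP-like vectors, decode face vs. non-face with
# cross-validation, and compare error rates for high- vs. low-similarity
# non-face trials.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_trials = 200
n_features = 128 * 50  # e.g. 128 channels x 50 time samples (assumed)

# Simulated data: face trials carry a fixed "face signal"; non-face trials
# carry a graded fraction of it, indexed by an image-similarity value in [0, 1).
signal = rng.normal(size=n_features)
similarity = np.concatenate([np.ones(100),             # 100 face trials
                             rng.uniform(0, 1, 100)])  # 100 graded non-faces
y = (similarity == 1).astype(int)                      # 1 = face, 0 = non-face
X = similarity[:, None] * signal + rng.normal(scale=8.0,
                                              size=(n_trials, n_features))

# Cross-validated single-trial predictions over the whole waveform
pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=5)

# Fraction of non-face trials misclassified as faces, split by similarity
nonface = y == 0
high = nonface & (similarity > 0.5)
low = nonface & (similarity <= 0.5)
err_high = (pred[high] != 0).mean()
err_low = (pred[low] != 0).mean()
print(f"error on high-similarity non-faces: {err_high:.2f}")
print(f"error on low-similarity non-faces:  {err_low:.2f}")
```

In a simulation like this, trials whose waveforms carry more of the face signal land closer to the classifier's decision boundary, so high-similarity non-faces are misclassified more often, mirroring the graded effect reported for the real single-trial analysis.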