Abstract
How fast does the human visual system extract information about individual faces? Despite numerous experiments using high-temporal-resolution electrophysiological methods, this question remains controversial. Event-related potential (ERP) studies have disclosed a visual occipito-temporal component peaking at around 170 ms following the onset of a face, the N170, a response that is larger for faces than for other object categories. However, because EEG responses lack specificity to individual faces, it is unclear whether recognition of faces at the individual level is also reflected at this stage. To address this question, we used a continuous stimulation paradigm in which the ERP response to a face is recorded relative to another face identity rather than to a preceding blank-screen baseline epoch. Twenty subjects were presented with blocks of alternating facial identities (A-B-A-B-…, 600 ms/stimulation) during EEG recording (64 channels). Following each change of facial identity, we recorded a first negative ERP deflection of about 3 μV, starting at 130 ms and peaking with the same latency and scalp topography as the classical N170. To ascertain that this response reflected the processing of facial identity, we applied the same paradigm to the phenomenon of categorical face perception. Three morphed faces were extracted from a continuum between two faces in order to build two conditions: one in which the faces belonged to the same perceived identity (face A: 95% and face B: 65%) and one in which the faces belonged to different perceived identities (face B: 65% and face C: 35%). The recorded component was larger when face B was preceded by face C than when it was preceded by face A, even though the two face pairs were equally distant in terms of low-level image properties. This continuous stimulation paradigm in electrophysiology provides direct evidence that facial identity is coded as early as 130 ms following stimulus onset, and it could be applied more generally to clarify the speed of human object-processing stages.