Abstract
For the first time, we reveal the time course of integration of Spatial Frequency (SF) facial features from the brain activity of observers who categorized Ekman's six basic expressions of emotion (i.e. happy, surprised, fearful, angry, disgusted, and sad). In the experiment, three observers saw 21,000 sparse versions of expressive faces. Their task was to categorize them while we recorded their EEG. The original stimuli were 70 FACS-coded images of 5 males and 5 females, each displaying the 6 basic expressions plus neutral. We used Bubbles to synthesize each sparse face by randomly sampling facial information from 5 one-octave, non-overlapping SF bands (Gosselin & Schyns, 2001). Online calibration of sampling density maintained 75% correct categorization per expression. Using classification image techniques, we reveal the combination of SF features that each observer's brain requires to produce correct categorization behavior (e.g. the mouth for happy, the two eyes for fear). Applying the same techniques to the EEG (measured on face-sensitive occipito-temporal electrodes P7 and P8), we reveal the SF features that the brain processes over the time course of the N170. We then relate the SF features required for behavior to those integrated over the N170 time course. We show, in 42 independent instances (3 observers × 7 expressions × 2 electrodes), that the slopes of the N170 (reflecting phase onset and amplitude) fit the slopes of a function that integrates SF featural information over time. In all instances, the maximum of the N170 coincides, to within 4 ms, with the plateau of the information integration function. Thus, the N170 marks the end point of a process that integrates SF features over the 50 ms preceding its peak. The characteristics of the N170 curves (latency, amplitude, and width) depend on the nature of the SF features integrated.
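To make the analysis concrete, the sketch below illustrates, under simplifying assumptions, how Bubbles-style sampling masks can be related to behavior and to single-trial EEG amplitudes with classification image logic. It is not the authors' code: the array shapes, the toy data, and the overlap-based integration measure are all illustrative assumptions.

```python
# Minimal sketch of a classification-image-style analysis of Bubbles data.
# All data here are synthetic; shapes (5 SF bands, 64x64 masks) are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bands, h, w = 1000, 5, 64, 64

# Bubbles sampling masks: which locations of each SF band were revealed per trial
masks = rng.random((n_trials, n_bands, h, w)) < 0.1

# Toy behavioral outcome (1 = correct categorization) and a toy single-trial
# EEG amplitude at one occipito-temporal electrode and one time point
correct = rng.random(n_trials) < 0.75
eeg_amp = rng.standard_normal(n_trials)

# Behavioral classification image: sampled information on correct trials
# minus sampled information on incorrect trials, per SF band
ci_behavior = masks[correct].mean(0) - masks[~correct].mean(0)  # (n_bands, h, w)

# EEG classification image at this time point: correlate mask presence
# with the z-scored single-trial amplitude
z_amp = (eeg_amp - eeg_amp.mean()) / eeg_amp.std()
ci_eeg = np.tensordot(z_amp, masks.astype(float), axes=(0, 0)) / n_trials

def integration_value(ci_t, ci_beh):
    """Overlap between the EEG classification image at one time point and the
    behavioral classification image; computed across time points, such values
    would trace an information integration curve."""
    return float((np.maximum(ci_t, 0) * np.maximum(ci_beh, 0)).sum())

print(integration_value(ci_eeg, ci_behavior))
```

In this hypothetical setup, repeating the EEG classification image at successive time points and accumulating the overlap values would yield the kind of integration function whose plateau is compared with the N170 maximum in the abstract.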