Abstract
The categorization of facial expressions has been shown to be associated with an occipito-temporal negativity in the ERP record at around 170 ms following stimulus onset (the N170). It has been suggested that this negativity reflects processing in a region of ventral occipito-temporal cortex known as the Fusiform Face Area (FFA). Using source reconstruction and classification image techniques, in combination with an information-theoretic framework, we propose a new method to relate the cortical electrical activity over the FFA region to the stimulus information used to solve the classification problem, where the stimulus on each trial is sampled through randomly located Gaussian apertures, or “Bubbles”. For each voxel in our source-reconstructed cortical region of interest we compute a classification image, correlating the trial-to-trial variation in the sampled stimulus with the variation in the cortical signal. For each stimulus pixel we then obtain, at each time point following stimulus onset, a distribution of correlations over voxels. The entropy of this distribution tells us, for that pixel and that time, how “spatially informative” the cortical signal is. The mutual information between these distributions at different times tells us the extent to which they correspond to temporally stable “classification maps”. We first found that the pixels with the highest spatial information at the time of the N170 correspond to the regions of the stimulus involved in correct classification performance (e.g. the mouth for “happy”, the eyes for “fear”). Second, we found that the classification maps associated with these pixels formed clear clusters of high and low correlation whose mutual information was stable over time. We therefore propose that the spatio-temporal activity pattern over the FFA reflects a task-oriented classification process that can only be uncovered by examining the information-theoretic properties of its spatio-temporal distribution.
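A minimal formal sketch of the quantities described above (the notation $r_v(x,t)$, $M_i$, $S_{v,i}$, $R_{x,t}$ is introduced here for illustration only and is not taken from the study): for voxel $v$, stimulus pixel $x$ and post-stimulus time $t$, the per-voxel classification image is the correlation, across trials $i = 1,\dots,N$, between the Bubbles mask and the source-reconstructed signal,
$$ r_v(x,t) \;=\; \operatorname{corr}_{i}\bigl( M_i(x),\; S_{v,i}(t) \bigr). $$
Writing $R_{x,t}$ for the distribution of $r_v(x,t)$ over the voxels of the region of interest, the “spatial informativeness” of pixel $x$ at time $t$ is summarized by the entropy
$$ H(R_{x,t}) \;=\; -\sum_{r} p_{x,t}(r)\,\log p_{x,t}(r), $$
and the temporal stability of the classification maps by the mutual information between two time points,
$$ I(R_{x,t_1};\,R_{x,t_2}) \;=\; \sum_{r_1,r_2} p_{x,t_1,t_2}(r_1,r_2)\,\log\frac{p_{x,t_1,t_2}(r_1,r_2)}{p_{x,t_1}(r_1)\,p_{x,t_2}(r_2)}, $$
with the marginal and joint probabilities estimated by binning the correlation values over voxels.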