Abstract
Face processing is mediated by a network of multiple distributed areas in the brain, with the occipital face area (OFA), fusiform face area (FFA), and posterior superior temporal sulcus (pSTS) considered the core nodes of the network. Previous results suggest that the OFA is primarily involved in early perception of facial features, the FFA in processing the static aspects of faces, and the pSTS in processing the dynamic aspects of faces. Based on these results, the first models of the neural basis of face processing posited that the pSTS codes for expression and the FFA codes for identity. More recently, several neuroimaging studies have suggested that the FFA is also involved in the processing of facial expressions, and recent models have posited that the FFA contributes to the structural encoding of facial expression. To adjudicate between these hypotheses, we recorded intracranial electroencephalography (iEEG) data from 19 patients with electrodes in the OFA, FFA, and/or pSTS during facial expression perception. Using pattern classification techniques, we confirmed that activity in the fusiform area encodes facial expression. At an early stage of visual processing (100-250 ms after stimulus onset), neural activity in the posterior fusiform contains facial expression information; at a later stage (250-450 ms after stimulus onset), this information is present in the anterior fusiform. Facial expression information is also seen in the OFA and pSTS during the early stage of processing. Notably, the effect size for fusiform encoding of facial expression is much smaller than that for facial identity. Taken together, these results suggest that fusiform activity may contribute to representing the structural differences between facial expressions, and that the posterior and anterior fusiform are dynamically involved in distinct stages of facial information processing.
Meeting abstract presented at VSS 2017
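
The abstract does not specify the classification pipeline. As a rough, hypothetical illustration of what time-resolved decoding of expression category from iEEG epochs could look like, the sketch below uses simulated data, an assumed three-category expression labeling, and a regularized logistic-regression classifier applied to the two reported time windows (100-250 ms and 250-450 ms). None of these choices are taken from the study itself.

    # Hypothetical sketch: time-windowed decoding of facial expression from iEEG.
    # The epochs here are simulated; in the study, they would be single-trial
    # recordings from fusiform electrodes while patients viewed expressive faces.
    import numpy as np
    from sklearn.model_selection import cross_val_score, StratifiedKFold
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Simulated epochs: trials x channels x time samples (1 kHz, 0-500 ms).
    n_trials, n_channels, sfreq = 200, 8, 1000
    times = np.arange(0, 0.5, 1 / sfreq)
    X_epochs = rng.standard_normal((n_trials, n_channels, times.size))
    y = rng.integers(0, 3, size=n_trials)  # 3 expression categories (assumed)

    # Inject a weak expression-dependent signal in the early window so the
    # decoder has something to find in this toy example.
    early = (times >= 0.10) & (times < 0.25)
    X_epochs[:, :, early] += 0.1 * y[:, None, None]

    def decode_window(X, y, tmin, tmax):
        """Cross-validated decoding accuracy using the mean voltage per
        channel within [tmin, tmax) as features."""
        mask = (times >= tmin) & (times < tmax)
        feats = X[:, :, mask].mean(axis=2)  # trials x channels
        clf = make_pipeline(StandardScaler(),
                            LogisticRegression(max_iter=1000))
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        return cross_val_score(clf, feats, y, cv=cv).mean()

    print("early window (100-250 ms):", decode_window(X_epochs, y, 0.10, 0.25))
    print("late window  (250-450 ms):", decode_window(X_epochs, y, 0.25, 0.45))

In practice, such an analysis would be run separately per electrode or region (posterior vs. anterior fusiform, OFA, pSTS) and compared against chance with appropriate permutation statistics; those details are beyond what the abstract reports.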