Abstract
The neural substrate of face processing is thought to comprise two distinct cortical networks: one for static elements, such as the facial structure and features used for identification, and one for dynamic elements, such as gaze, speech, and emotional expression. The amygdala also contains face-selective neurons, but how it interacts with these two cortical networks during face processing remains poorly understood. To investigate this question, we exploited a difference in the spatial selectivity of face responses between the amygdala and cortical face areas: emotionally expressive faces are detectable in the periphery and drive amygdala activity, whereas occipitotemporal face areas respond most strongly to faces at the fovea. While recording MEG from human subjects, we presented three-item stimulus arrays consisting of a central stimulus at the fovea and two flanker stimuli 8 degrees to the left and right of fixation. On each trial, the array contained either a face, an object, and a scene, or an image from one of these three categories along with two noise images; the horizontal screen locations of the image categories were counterbalanced across trials. Decoding the identity of the centrally presented stimulus across time revealed peak decoding accuracy at 170 ms after the onset of the array. We then used the phase-slope index method with a searchlight analysis to identify cortical locations that exhibited functional connectivity with the amygdala at 170 ms. For centrally presented faces, we observed gamma-band functional connectivity from the amygdala to the anterior temporal lobe. Because gamma-band activity is implicated in bottom-up processing, our findings support the hypothesis that the amygdala projects information about foveated faces to the anterior temporal lobe.
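To make the time-resolved decoding step concrete, the sketch below shows how peak decoding latency could be estimated from MEG epochs with MNE-Python and scikit-learn. This is a minimal illustration under assumed inputs, not the study's actual pipeline: the epochs file name, event codes, classifier, and cross-validation settings are hypothetical placeholders.

```python
# Minimal sketch: time-resolved decoding of the central stimulus from MEG epochs.
# Assumes preprocessed epochs exist on disk; all file names and labels are hypothetical.
import numpy as np
import mne
from mne.decoding import SlidingEstimator, cross_val_multiscore
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

epochs = mne.read_epochs("sub-01_task-array_epo.fif")  # hypothetical file
X = epochs.get_data()    # (n_trials, n_channels, n_times)
y = epochs.events[:, 2]  # label of the centrally presented stimulus

# Fit an independent classifier at every time point (sliding estimator).
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
time_decoder = SlidingEstimator(clf, scoring="accuracy", n_jobs=1)
scores = cross_val_multiscore(time_decoder, X, y, cv=5, n_jobs=1)

# Average over folds and locate the peak decoding latency.
mean_scores = scores.mean(axis=0)
peak_time = epochs.times[np.argmax(mean_scores)]
print(f"Peak accuracy {mean_scores.max():.2f} at {peak_time * 1000:.0f} ms")
```

A comparable sketch of the connectivity step would apply a directed measure such as the phase-slope index (e.g., mne_connectivity.phase_slope_index) between an amygdala source time course and candidate cortical locations in a searchlight, restricted to the gamma band.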