Abstract
In natural viewing, the left and right halves of the face are often divided across separate visual hemifields, each of which initially projects to the contralateral hemisphere. Accordingly, creating unified representations of faces involves the integration of information from both hemispheres. Here, we explored how information from the left and right halves of a face is combined into a single representation. First, participants were asked to make familiarity judgements on composite faces, which combined the left and right halves of a famous face and an unfamiliar face. Consistent with the traditional composite face effect (in which the top and bottom halves of faces are combined), we found that accuracy was lower and response times were longer when the composites were aligned compared to when they were misaligned. This showed that the two halves of the face were automatically combined into a holistic representation. Next, we measured the neural correlates of this hemispheric integration in natural viewing using fMRI. We found consistently higher interhemispheric (e.g. rFFA – lFFA) compared to intrahemispheric (e.g. rOFA – rFFA) connectivity between regions of the face network. However, this interhemispheric bias was absent in early visual regions (V1-V3), suggesting an important role for interhemispheric communication in higher-level perceptual processing. Finally, we compared the similarity of left and right face halves in a deep convolutional neural network (DCNN) trained to recognize faces. We found that representations of left and right face halves were independent in the convolutional layers of the DCNN. However, there were similar representations of the left and right halves of faces with the same identity in the fully-connected layers. Together, these findings reveal how information from the left and right halves of faces is combined holistically in human and artificial neural networks.