Abstract
A central aim of visual neuroscience is to uncover the function of individual visually responsive brain regions. A hallmark of occipitotemporal cortex is its functional organization into category-selective regions, and among these, the fusiform face area (FFA) is well established to respond highly selectively to the visual presentation of faces. At the same time, previous research has shown that FFA activity overlaps with several feature maps that are not face-specific, such as animacy, size, or curvature (Long et al., 2017), and FFA has been shown to carry above-chance information about non-face objects (Duchaine & Yovel, 2015). Thus, it remains an open question which other object dimensions may be represented in patterns of FFA responses. Here, we explored this question with a recent high-throughput neural-network model of FFA activity that has been shown to yield excellent predictive accuracy (Ratan Murty et al., 2021). We first predicted the responses of the model’s FFA voxels to >26,000 naturalistic object images from the THINGS database (Hebart et al., 2019). Next, we used a sparse positive similarity embedding technique to identify interpretable dimensions underlying these response patterns. As expected, the embedding yielded a number of dimensions related to human faces and body parts encoded in FFA activity. In addition, it revealed latent dimensions encoding animal faces as well as non-face properties reflecting mid-level shapes, textures, and scene-related features. Together, these results capture a broad space of object features embedded in synthetic FFA activity while confirming its clear selectivity for face images. Our approach may open the door to exploring the rich space of object features encoded in the complex activity patterns of visual brain regions.