Abstract
Abundant evidence supports a role for the parahippocampal place area (PPA) in visual scene perception, but fundamental questions remain. Here we ask whether the PPA contains distinct sub-regions that encode different aspects of scenes. To address this question, we used data-driven clustering to identify groups of PPA voxels with similar responses to a large set of images in extensively scanned individual brains (185 images, 20 repetitions per image, N = 4). We found that >95% of the variance of PPA voxel responses was explained by just two clusters, mapped approximately along the anterior-posterior axis, consistent with previous findings (Baldassano et al., 2013; Nasr et al., 2013; Cukur et al., 2016; Steel et al., 2021). But what distinct scene features do these sub-regions encode? Response profiles of the two sub-regions were strongly correlated, and visual inspection of stimuli eliciting high and low responses in each sub-region did not reveal any obvious functional differences between them. We therefore built artificial neural network-based encoding models of each PPA sub-region, which were highly accurate at predicting responses to held-out stimuli (each R > 0.70, P < 0.00001), and harnessed these models to find new images predicted to maximally dissociate responses of the two sub-regions. These predictions were then tested in a new fMRI experiment, which produced a clear double dissociation between the two sub-regions in all four PPAs tested (two participants × two hemispheres): the anterior sub-region responded more to images containing relatively bare spatial layouts than to images containing object arrays and textures, while the more posterior sub-region showed the opposite pattern.
Taken together, this approach revealed distinct sub-regions of the PPA, produced highly accurate computational models of each, and used those models to identify stimuli that differentially activate the two sub-regions, providing an initial hint about the functional differences between them.