Abstract
Studies of visual recognition have traditionally focused on neural responses to single, isolated objects. In the real world, however, objects are almost always surrounded by other objects. Although previous fMRI studies have shown that the category identity of single objects can be extracted from patterns of activity in human object-selective cortex, little is known about how multiple, simultaneous objects are represented. Here we use multi-voxel pattern analysis to examine this issue. Specifically, we tested whether patterns evoked by pairs of objects showed an ordered relationship to patterns evoked by their constituent objects when presented alone. Subjects viewed four categories of objects (brushes, chairs, shoes, and cars), presented either singly or in different-category pairs, while performing a one-back task that required attention to all items on the screen. Response patterns in the lateral occipital complex (LOC) reliably discriminated between all object pairs, suggesting that LOC populations encode information about object pairs at a fine grain. We next performed a voxel-by-voxel analysis of the relationship between responses evoked by each pair and responses evoked by its component objects. Applying a "searchlight" classification approach to identify voxels with the highest signal-to-noise ratios, we found that individual voxels' responses to object pairs were well predicted by the mean of responses to the corresponding component objects. We validated this relationship by successfully classifying patterns evoked by object pairs based on synthetic patterns derived from the averages of patterns evoked by single objects. These results indicate that the representation of multiple objects in LOC is governed by response normalization mechanisms similar to those reported in non-human primates.
They also suggest a coding scheme in which patterns of population activity preserve information about multiple objects under conditions of distributed attention, facilitating fast object and scene recognition during natural vision. Supported by NIH grant EY-016464 to R.E.
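The averaging model described above can be illustrated with a toy simulation. This is a minimal sketch under assumed synthetic data, not the authors' actual analysis pipeline: single-object patterns are drawn at random, measured pair patterns are simulated as the voxel-wise mean of the two component patterns plus noise, and each pair pattern is then classified by correlating it against synthetic templates built by averaging the single-object patterns.

```python
# Toy illustration of the pair-averaging analysis (hypothetical data,
# not the study's real fMRI pipeline).
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200
categories = ["brush", "chair", "shoe", "car"]

# Simulated mean response pattern for each single-object category.
single = {c: rng.normal(size=n_voxels) for c in categories}

# Simulate measured pair patterns under the normalization (averaging)
# model: voxel-wise mean of the components plus measurement noise.
pairs = list(combinations(categories, 2))
measured = {
    p: (single[p[0]] + single[p[1]]) / 2 + 0.3 * rng.normal(size=n_voxels)
    for p in pairs
}

# Synthetic templates: averages of the single-object patterns.
templates = {p: (single[p[0]] + single[p[1]]) / 2 for p in pairs}

def classify(pattern):
    """Return the pair whose synthetic template correlates best."""
    return max(templates, key=lambda p: np.corrcoef(pattern, templates[p])[0, 1])

accuracy = np.mean([classify(measured[p]) == p for p in pairs])
print("classification accuracy:", accuracy)
```

With modest noise, classification of the simulated pair patterns from averaged single-object templates succeeds well above the one-in-six chance level, mirroring the logic of the validation analysis in the abstract.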