Abstract
The medial temporal lobe (MTL) is known to be vital for memory function. However, recent studies have shown that a specific set of brain structures within the MTL is also important for perception. For example, studies of amnesic patients with damage to MTL structures indicated that these patients performed poorly on perceptual tasks, specifically when discriminating between items that shared overlapping features. It has been suggested that one MTL structure in particular, the perirhinal cortex (PRC), should be considered part of the representational hierarchy of the ventral visual stream (VVS), responsible for representing the complex conjunctions of features that comprise objects, perhaps at a view-invariant level. In this study we investigated how the different features comprising complex objects are represented throughout the VVS, up to and including the MTL, and at what stage these representations become view-invariant. To address these questions, we used multi-voxel pattern analysis (MVPA) of fMRI data, a technique that has gained prominence for its ability to probe the underlying neural representations of visual information. Participants completed a one-back task involving novel objects composed of either one feature (e.g., A, B, or C), two features (e.g., AB, AC, or BC), or three features (ABC), presented from one of two possible viewpoints. This allowed us to examine the degree to which the neural representation of a pair of objects depended only on the sum of their parts (i.e., A+BC=AB+C), or whether the specific feature conjunctions within objects were encoded. A searchlight analysis using this method indicated that anterior regions of the VVS, including the PRC, coded the complex conjunctions of features comprising the objects, over and above the individual features themselves. Moreover, we found evidence to suggest that the conjunctive representations in PRC were view-invariant.
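A minimal sketch of the sum-of-parts logic described above, assuming hypothetical voxel patterns for each object condition (the variable names, random data, and distance measure are illustrative assumptions, not the authors' actual analysis pipeline): if coding were purely linear, the summed pattern for A and BC should match the summed pattern for AB and C, so a reliable difference between them would indicate conjunctive coding.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical voxel patterns, one vector per object condition.
# In a real analysis these would be condition-wise response estimates
# within a searchlight sphere.
n_voxels = 100
patterns = {name: rng.standard_normal(n_voxels)
            for name in ["A", "B", "C", "AB", "AC", "BC", "ABC"]}

def conjunction_index(p):
    # Two object pairs containing the same features overall (A, B, C)
    # but grouped into different within-object conjunctions.
    lhs = p["A"] + p["BC"]   # features split as A | BC
    rhs = p["AB"] + p["C"]   # same features split as AB | C
    # Distance ~0 under purely additive (sum-of-parts) coding;
    # reliably positive values suggest conjunctive coding.
    return np.linalg.norm(lhs - rhs)

print(f"conjunction index: {conjunction_index(patterns):.3f}")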
Meeting abstract presented at VSS 2013