Abstract
How is object orientation represented in the brain? Behavioral studies reveal that certain orientations are reliably more confusable than others. Using fMRI, we asked whether object-selective cortex (LO) represents these confusability relations and, if so, how they are realized neurally. Specifically, we assessed whether more confusable orientations are represented more similarly in LO, comparing two metrics of neural similarity: MVPA and Repetition Suppression. Participants (N = 11) viewed a counterbalanced stream of 16 orientations of an object while performing a task that ensured attention. Using a continuous carry-over design, we simultaneously measured the adaptation effect (Repetition Suppression) and the multi-voxel pattern similarity (MVPA) between orientations. To estimate whether behavioral confusability is reflected in neural similarity, we modeled both the Repetition Suppression and MVPA measures as a function of the behavioral confusability between orientations, defined empirically from highly reliable confusion errors observed in a previous experiment (Gregory & McCloskey, 2010). Using regression-based representational similarity analyses, we found that the behavioral confusability of orientations predicted MVPA pattern similarity in LO (β = 3.09, p < .0001, permutation tests), even after accounting for pixel-based image similarities, to which V1 was highly sensitive. By contrast, Repetition Suppression was not sensitive to the confusability of orientations (β = -.32, p = .55). These results indicate that LO represents the confusability of orientations in the similarity of across-voxel patterns but not in the degree of Repetition Suppression, suggesting that the two fMRI measures reflect different aspects of neural similarity.
To account for the differences between Repetition Suppression and MVPA, we propose a novel hypothesis: MVPA similarity reflects the topographical distribution of neuronal populations, whereas Repetition Suppression depends on the repeated activation of the same neuronal populations. We show how these measures provide complementary information about neural representations, together allowing richer conclusions than either method permits alone.
Meeting abstract presented at VSS 2016