Abstract
There is systematic structure in the neural responses to visually presented objects across the ventral and dorsal streams. What are the key properties of objects that drive these responses? To explore a broad space of possibilities, we considered properties that reflect how we interact with objects (action), where they are found (context), what they are for (function), how big they are (real-world size), and what they look like (object gist). Estimates for these feature spaces were obtained for a set of 200 inanimate objects, using either behavioral rating experiments or image-based measures that capture global shape structure (Oliva & Torralba, 2001). Using fMRI, we obtained neural response patterns for 72 of these items in 11 participants. To analyze the structure in the neural responses, we used a feature-modeling approach (Mitchell et al., 2008; Huth et al., 2012), which fits a tuning model for each voxel along a set of feature dimensions (e.g., object gist features, action features). We found that a large proportion of posterior visual cortex was well fit by the object gist model (mean r² = 0.54). In a leave-two-out validation procedure, this object gist encoding model could discriminate between two held-out object patterns with near-perfect accuracy (96%, SEM = 1%). The feature spaces of action, context, function, and real-world size were also able to classify objects, but with lower overall accuracy (61%–68%). These models fit best along more anterior regions of object-responsive cortex, extending along PHC, TOS, and IPS. Thus, while these abstract properties of objects capture some of the structure in neural object responses, the results indicate that most of visually responsive object cortex represents global form properties, i.e., object gist.
Meeting abstract presented at VSS 2014
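The voxel-wise feature-modeling approach and the leave-two-out validation described above can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: it assumes a ridge-regression fit of per-voxel tuning weights and correlation-based matching of predicted to observed patterns (in the spirit of Mitchell et al., 2008). The synthetic feature matrix, voxel counts, noise level, and regularization strength are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 72 objects, a 10-dimensional feature space
# (e.g., object gist dimensions), and 500 voxels. Responses are a
# noisy linear function of the features; all sizes are illustrative.
n_objects, n_features, n_voxels = 72, 10, 500
features = rng.standard_normal((n_objects, n_features))
true_weights = rng.standard_normal((n_features, n_voxels))
responses = features @ true_weights + 0.5 * rng.standard_normal((n_objects, n_voxels))


def fit_encoding_model(X, Y, alpha=1.0):
    """Ridge regression: a tuning weight per voxel along each feature dim."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(k), X.T @ Y)


def leave_two_out_accuracy(X, Y, alpha=1.0):
    """For each pair of objects, fit on the rest and check whether the
    predicted patterns match the correct held-out patterns (by correlation)
    better than the swapped assignment."""
    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]

    pairs = [(i, j) for i in range(len(X)) for j in range(i + 1, len(X))]
    correct = 0
    for i, j in pairs:
        train = [k for k in range(len(X)) if k not in (i, j)]
        W = fit_encoding_model(X[train], Y[train], alpha)
        pred_i, pred_j = X[i] @ W, X[j] @ W
        matched = corr(pred_i, Y[i]) + corr(pred_j, Y[j])
        swapped = corr(pred_i, Y[j]) + corr(pred_j, Y[i])
        correct += matched > swapped
    return correct / len(pairs)
```

With a feature space that genuinely drives the simulated responses, pairwise accuracy approaches ceiling, mirroring how a well-fitting model (here, a stand-in for object gist) separates held-out object patterns; a feature space unrelated to the responses would score near chance (50%).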