August 2014
Volume 14, Issue 10
Vision Sciences Society Annual Meeting Abstract
Object gist features capture the structure of neural responses to objects
Author Affiliations
  • Talia Konkle
    Psychology Department, Harvard University
  • Alfonso Caramazza
    Psychology Department, Harvard University
Journal of Vision August 2014, Vol. 14, 1292.
Talia Konkle, Alfonso Caramazza; Object gist features capture the structure of neural responses to objects. Journal of Vision 2014;14(10):1292.


There is systematic structure in the neural responses to visually presented objects across the ventral and dorsal streams. What are the key properties of objects that drive these responses? To explore a broad space of possibilities, we considered properties that reflect how we interact with objects (action), where they are found (context), what they are for (function), how big they are (real-world size), and what they look like (object gist). Estimates for these feature spaces were obtained for a set of 200 inanimate objects, using either behavioral rating experiments or image-based measures that capture global shape structure (Oliva & Torralba, 2001). Using fMRI, we obtained neural response patterns for 72 of these items in 11 participants. To analyze the structure in the neural responses, we used a feature-modeling approach (Mitchell et al., 2008; Huth et al., 2012), which fits a tuning model for each voxel along a set of feature dimensions (e.g., object gist features, action features). We found that a large proportion of posterior visual cortex was well fit by the object gist model (mean r² = 0.54). In a leave-two-out validation procedure, this object gist encoding model classified between two new object patterns with near-perfect accuracy (96%; SEM = 1%). The feature spaces of action, context, function, and real-world size were also able to classify objects, but with lower overall accuracy (61%–68%). These models fit best along more anterior regions of object-responsive cortex, extending along PHC, TOS, and IPS. Thus, while these abstract properties of objects capture some of the structure in neural object responses, the results indicate that most of visually responsive object cortex represents global form properties, i.e., object gist.
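As a rough sketch, the analysis pipeline described above (fit a per-voxel tuning model over a feature space, then test it with leave-two-out classification by matching predicted to observed response patterns) might look like the following. The dimensions, the ridge-regression fit, the synthetic data, and the correlation-based matching rule are illustrative assumptions, not the authors' exact methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions for illustration: 72 objects, 10 feature
# dimensions (e.g., gist features), 50 voxels in a region of interest.
n_items, n_feats, n_vox = 72, 10, 50
X = rng.standard_normal((n_items, n_feats))    # feature values per object
W_true = rng.standard_normal((n_feats, n_vox)) # synthetic "true" voxel tuning
Y = X @ W_true + 0.5 * rng.standard_normal((n_items, n_vox))  # noisy patterns


def fit_ridge(X_tr, Y_tr, lam=1.0):
    """Closed-form ridge regression: one tuning weight per feature, per voxel."""
    d = X_tr.shape[1]
    return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ Y_tr)


def _corr(a, b):
    """Pearson correlation between two response patterns."""
    return np.corrcoef(a, b)[0, 1]


def leave_two_out_accuracy(X, Y, lam=1.0):
    """For every pair of held-out items, refit the encoding model on the rest,
    predict both patterns, and score a hit when the correct item-to-pattern
    pairing correlates better than the swapped pairing."""
    n = X.shape[0]
    correct = total = 0
    for i in range(n):
        for j in range(i + 1, n):
            train = np.setdiff1d(np.arange(n), [i, j])
            W = fit_ridge(X[train], Y[train], lam)
            pred_i, pred_j = X[i] @ W, X[j] @ W
            matched = _corr(pred_i, Y[i]) + _corr(pred_j, Y[j])
            swapped = _corr(pred_i, Y[j]) + _corr(pred_j, Y[i])
            correct += matched > swapped
            total += 1
    return correct / total
```

With strong feature-to-voxel tuning (as in this synthetic example), pairwise classification accuracy approaches ceiling, mirroring the near-perfect accuracy reported for the gist model; weaker or less relevant feature spaces yield lower accuracy.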

Meeting abstract presented at VSS 2014
