May 2008
Volume 8, Issue 6
Vision Sciences Society Annual Meeting Abstract  |   May 2008
Implicit coding of location, scale and configural information in feedforward hierarchical models of the visual cortex
Author Affiliations
  • Cheston Tan
    Department of Brain and Cognitive Sciences, MIT, and McGovern Institute for Brain Research, MIT
  • Thomas Serre
    Department of Brain and Cognitive Sciences, MIT, and McGovern Institute for Brain Research, MIT
  • Gabriel Kreiman
    Children's Hospital Boston, Harvard Medical School, and Center for Brain Science, Harvard University
  • Tomaso Poggio
    Department of Brain and Cognitive Sciences, MIT, and McGovern Institute for Brain Research, MIT
Journal of Vision May 2008, Vol.8, 43. doi:10.1167/8.6.43
Abstract

Feedforward hierarchical models of the visual cortex constitute a popular class of models of object recognition. In these models, position- and scale-invariant recognition is achieved via selective pooling mechanisms, so that units at the top of the hierarchy have large receptive fields and signal the presence of specific image features irrespective of their scale and location. Hence, it is often assumed that such models are incompatible with data suggesting a representation of the configuration of objects or parts. Here, we consider a specific implementation of this class of models (Serre et al., 2005) and show that location, scale and configural information is implicitly encoded by a small population of IT units.
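The pooling mechanism described above can be sketched in a minimal, hypothetical form (this is not the authors' code; the actual model alternates several template-matching and pooling stages, whereas this one-layer 1-D illustration uses a made-up template):

```python
import numpy as np

# Hypothetical sketch of one S-to-C pooling stage. An S unit responds to
# a template at each position; the C unit takes the max over all of
# them, so its output is unchanged when the feature moves within the
# pooling range.

rng = np.random.default_rng(0)
template = rng.normal(size=4)

def s_responses(image, template):
    """Template match (dot product) at every position of a 1-D 'image'."""
    n = len(template)
    return np.array([image[i:i + n] @ template
                     for i in range(len(image) - n + 1)])

def c_response(image, template):
    """C unit: max over positions -> position-invariant response."""
    return s_responses(image, template).max()

# Embed the same feature at two different locations.
img_a = np.zeros(16); img_a[2:6]  = template
img_b = np.zeros(16); img_b[9:13] = template

print(np.isclose(c_response(img_a, template),
                 c_response(img_b, template)))  # True: invariant to shift
```

The same max operation taken over a range of scales yields scale invariance in the same way.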

First, we show that model IT units agree quantitatively with the coarse location and scale information read out from neurons in macaque IT cortex (Hung et al., 2005). Next, we consider the finding by Biederman et al. (VSS 2007) that changes in configuration are reflected both behaviorally and in the BOLD signal measured in adaptation experiments. Model results are qualitatively similar to theirs: for stimuli consisting of two objects, stimuli that differ in location (objects shifted together) evoke similar responses, while stimuli that differ in configuration (object locations swapped) evoke dissimilar responses. Finally, the model replicates psychophysical findings by Hayworth et al. (VSS 2007), further demonstrating sensitivity to configuration. Line drawings of objects were split into complementary pairs A and B by assigning every other vertex to A and the complementary vertices to B; scrambled versions A' and B' were then generated. Both human subjects and the model rated A as more similar to B than to A'.
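Why a population of pooling units can nonetheless distinguish configurations can be illustrated with a hypothetical toy example (again not the paper's simulation): units tuned to a feature, each pooling over only part of the image, keep the same activity pattern when two objects shift together but change it when the objects swap locations.

```python
import numpy as np

# Two made-up object "shapes" with low cross-similarity.
A, B = np.array([1., 2., 1.]), np.array([2., 0., 2.])

def place(positions_and_objs, size=20):
    img = np.zeros(size)
    for pos, obj in positions_and_objs:
        img[pos:pos + len(obj)] = obj
    return img

def unit(image, feature, region):
    """Max template match over the positions in `region` (pooling)."""
    lo, hi = region
    return max(image[i:i + len(feature)] @ feature for i in range(lo, hi))

def population(image):
    # Four units: (A, left), (A, right), (B, left), (B, right).
    return np.array([unit(image, f, r)
                     for f in (A, B) for r in ((0, 8), (10, 17))])

base    = place([(2, A), (12, B)])
shifted = place([(4, A), (14, B)])   # both objects moved together
swapped = place([(2, B), (12, A)])   # object locations exchanged

d = lambda x, y: np.linalg.norm(population(x) - population(y))
print(d(base, shifted) < d(base, swapped))   # True
```

Because each unit pools within its own region, joint translation leaves the population vector nearly unchanged, while the swap activates a different (feature, region) combination.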

Altogether, our results suggest that implicit location, scale and configural information exists in feedforward hierarchical models based on a large dictionary of shape-components with various levels of invariance.
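The complementary-pair construction used in the psychophysical comparison above can be sketched as follows; the vertex coordinates and scrambling are made up for illustration (the actual stimuli were line drawings of objects):

```python
import random

# Hypothetical sketch: alternate vertices of a drawing go to version A,
# the remaining vertices to version B; a scrambled control A' reorders
# A's vertices, disrupting the configuration.

vertices = [(0, 0), (2, 0), (3, 1), (2, 3), (0, 3), (-1, 1)]  # a hexagon

comp_a = vertices[0::2]   # every other vertex -> A
comp_b = vertices[1::2]   # complementary vertices -> B

rng = random.Random(0)
scrambled_a = comp_a[:]
rng.shuffle(scrambled_a)  # A': same vertices, scrambled configuration

print(comp_a, comp_b, scrambled_a)
```

A and B share no vertices yet depict the same object, while A and A' share all vertices but differ in configuration, so an A-versus-B similarity advantage indicates configural rather than purely feature-based coding.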

Tan, C., Serre, T., Kreiman, G., & Poggio, T. (2008). Implicit coding of location, scale and configural information in feedforward hierarchical models of the visual cortex [Abstract]. Journal of Vision, 8(6):43, 43a, http://journalofvision.org/8/6/43/, doi:10.1167/8.6.43.