Vision Sciences Society Annual Meeting Abstract  |  September 2021
Journal of Vision, Volume 21, Issue 9  |  Open Access
A two-stage model of V2 demonstrates efficient higher-order feature representation
Author Affiliations & Notes
  • Timothy D. Oleskiw
    New York University
    Flatiron Institute
  • Ruben R. Diaz-Pacheco
    Commonwealth Fusion Systems
  • J. Anthony Movshon
    New York University
  • Eero P. Simoncelli
    New York University
    Flatiron Institute
  • Footnotes
    Acknowledgements: NIH EY022428, Simons Foundation
Journal of Vision September 2021, Vol.21, 2654. doi:https://doi.org/10.1167/jov.21.9.2654
Citation: Timothy D. Oleskiw, Ruben R. Diaz-Pacheco, J. Anthony Movshon, Eero P. Simoncelli; A two-stage model of V2 demonstrates efficient higher-order feature representation. Journal of Vision 2021;21(9):2654. https://doi.org/10.1167/jov.21.9.2654.

Abstract

A distinguishing feature of neurons in cortical area V2 is their selectivity for higher-order visual features, beyond the localized orientation energy conveyed by area V1. Recent physiological work has shown that while single units in V1 respond primarily to the spectral content of a stimulus, single units in V2 are selective for the image statistics that distinguish natural images. Despite these observations, a description of how V2 achieves higher-order feature selectivity from V1 outputs remains elusive. To study this, we consider a two-layer linear-nonlinear network mimicking areas V1 and V2. When optimized to detect a subset of higher-order features, the fitted V2-like units perform computations resembling localized differences over the space of V1 afferents, computing relative spectral energy within and across the V1 tuning dimensions of space, orientation, and scale. Interestingly, these model fits bear a strong qualitative resemblance to models trained on data recorded from single units in primate V2, suggesting that some V2 neurons are ideal for encoding higher-order features of natural images. It is known that cortical neurons, such as those of V1, exhibit sparse (heavy-tailed) response distributions to natural images, a property believed to reflect an efficient image code. Indeed, these idealized V2-like units exhibit sparse responses, similar to those of model V1 populations. Here we show that sparseness itself can encode image content: classifiers trained to detect higher-order image features from a population readout of response sparsity are significantly more efficient when using V2-like units than when using comparable V1-like populations, requiring fewer observations to achieve the same classification accuracy. Thus, localized differences over V1 afferent activity yield efficient mechanisms for computing higher-order visual features, providing a justification for the receptive field structures observed in neurons within primate area V2.
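
To make the two-stage computation concrete, the Python/NumPy sketch below implements one hypothetical V2-like unit of the kind described above: a first stage of quadrature-pair (energy-model) V1 filters, a second stage that takes a signed, localized difference of those energies across orientation, and an excess-kurtosis measure as a standard proxy for response sparsity. This is a minimal illustration, not the authors' fitted model; the filter parameters, the choice of weights, the noise-patch inputs, and the kurtosis readout are all illustrative assumptions.

import numpy as np

def gabor(size, sf, theta, phase, sigma):
    # Oriented Gabor filter on a (size x size) grid.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * sf * xr + phase)

def v1_energy(patch, thetas, sf=0.15, sigma=3.0):
    # Stage 1 (V1-like): quadrature-pair "energy model" responses,
    # one localized oriented-energy measurement per orientation channel.
    size = patch.shape[0]
    energies = []
    for theta in thetas:
        even = np.sum(patch * gabor(size, sf, theta, 0.0, sigma))
        odd = np.sum(patch * gabor(size, sf, theta, np.pi / 2.0, sigma))
        energies.append(even**2 + odd**2)
    return np.array(energies)

def v2_response(energies, w):
    # Stage 2 (V2-like): a signed linear combination of V1 afferent
    # energies -- a localized difference across the orientation tuning
    # dimension -- followed by halfwave rectification.
    return max(float(np.dot(w, energies)), 0.0)

def excess_kurtosis(r):
    # Sparsity proxy: heavy-tailed (sparse) response distributions
    # have positive excess kurtosis.
    r = np.asarray(r, dtype=float)
    z = (r - r.mean()) / r.std()
    return float(np.mean(z**4) - 3.0)

rng = np.random.default_rng(0)
thetas = np.linspace(0.0, np.pi, 4, endpoint=False)
w = np.array([1.0, 0.0, -1.0, 0.0])  # hypothetical weights: horizontal minus vertical energy

# Stand-in inputs; the study itself uses natural and synthetic texture images.
patches = rng.standard_normal((500, 15, 15))
v2 = [v2_response(v1_energy(p, thetas), w) for p in patches]
print("V2-like response sparsity (excess kurtosis):", excess_kurtosis(v2))

Because the second-stage weights in this sketch sum to zero, the unit signals relative rather than absolute spectral energy; a population of such units, with differences taken across space and scale as well as orientation, would supply the kind of sparsity readout the classifiers above are trained on.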
