Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2024
Mapping models of V1 and V2 selectivity with local spectral reverse correlation
Author Affiliations & Notes
  • Timothy D. Oleskiw
    New York University
    Flatiron Institute
  • R.T. Raghavan
    New York University
  • Justin D. Lieber
    New York University
  • Eero P. Simoncelli
    New York University
    Flatiron Institute
  • J. Anthony Movshon
    New York University
  • Footnotes
    Acknowledgements: Simons Foundation, NIH EY022428-10
Journal of Vision September 2024, Vol.24, 1126. doi:https://doi.org/10.1167/jov.24.10.1126
Abstract

Neurons in macaque area V2 respond selectively to higher-order visual features, such as the quasi-periodic structure of natural texture, but it is unknown how selectivity for these features is built from V1 inputs tuned more simply for orientation and spatial frequency. We have recently developed an image-computable, two-layer linear-nonlinear network that captures higher-order tuning from a sparse combination of subunits tuned in orientation and scale. This model can be trained independently on single-unit recordings of V1 and V2 neurons from awake macaques, obtained in response to a stimulus set composed of multiple superimposed grating patches that localize oriented contrast energy. These optimized models accurately predict neural responses to other stimuli, including gratings and synthetic textures with higher-order features common to natural images. However, the high-dimensional parameter space of these models makes them difficult to interpret, so a systematic comparison of V1 and V2 selectivity has remained elusive. To address this limitation, we investigate neural tuning across our population of models in silico using local spectral reverse correlation (LSRC; Nishimoto et al., 2006): we present thousands of ternary white-noise stimuli to the fitted models and compute a response-weighted, windowed frequency spectrum across image coordinates. LSRC effectively characterizes the tuning of our model neurons, estimating the linear and nonlinear components of model receptive fields. These estimates are qualitatively similar to direct LSRC measurements made with identical methods from single-unit V1 and V2 recordings collected in separate experiments.
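The LSRC procedure summarized above (present ternary white-noise stimuli, then weight local windowed power spectra by the model's response) can be sketched compactly. The Python snippet below is an illustrative sketch under our own assumptions: the stimulus size, analysis-window size and stride, the normalization, and the toy oriented-filter "model" are placeholders, not the authors' fitted two-layer networks or the exact procedure of Nishimoto et al. (2006).

    # Illustrative LSRC sketch (assumed parameters and toy model; not the
    # authors' implementation or the exact method of Nishimoto et al., 2006).
    import numpy as np

    rng = np.random.default_rng(0)

    IM_SIZE = 32     # stimulus side length, pixels (assumed)
    WIN = 8          # local analysis window size (assumed)
    STEP = 4         # stride between window positions (assumed)
    N_STIM = 5000    # "thousands" of ternary white-noise stimuli

    def toy_model_response(img):
        """Stand-in for a fitted model: one oriented linear filter followed
        by half-wave rectification (illustrative only)."""
        y, x = np.mgrid[:IM_SIZE, :IM_SIZE] - IM_SIZE / 2
        gabor = np.exp(-(x**2 + y**2) / (2 * 4.0**2)) * np.cos(2 * np.pi * x / 8.0)
        return max(float(np.sum(img * gabor)), 0.0)

    positions = [(i, j)
                 for i in range(0, IM_SIZE - WIN + 1, STEP)
                 for j in range(0, IM_SIZE - WIN + 1, STEP)]
    taper = np.outer(np.hanning(WIN), np.hanning(WIN))    # taper each patch before the FFT

    weighted_spec = np.zeros((len(positions), WIN, WIN))  # response-weighted spectra
    mean_spec = np.zeros_like(weighted_spec)              # unweighted baseline spectra
    total_resp = 0.0

    for _ in range(N_STIM):
        # Ternary white noise: each pixel independently takes -1, 0, or +1.
        stim = rng.integers(-1, 2, size=(IM_SIZE, IM_SIZE)).astype(float)
        resp = toy_model_response(stim)
        total_resp += resp
        for k, (i, j) in enumerate(positions):
            patch = stim[i:i + WIN, j:j + WIN] * taper
            power = np.abs(np.fft.fft2(patch)) ** 2       # local power spectrum
            weighted_spec[k] += resp * power
            mean_spec[k] += power

    # Local spectral receptive field: response-weighted average spectrum minus
    # the unweighted stimulus baseline, at each window position.
    lsrc = weighted_spec / max(total_resp, 1e-12) - mean_spec / N_STIM

Subtracting the unweighted baseline from the response-weighted average localizes orientation and spatial-frequency tuning at each window position; in practice, toy_model_response would be replaced with each fitted V1 or V2 model and the analysis repeated across the model population.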
