October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract  |   October 2020
Amplification of feature selectivity by spatial convolution in primary visual cortex
Author Affiliations & Notes
  • Felix Bartsch
    University of Maryland
  • Daniel A. Butts
    University of Maryland
  • Bruce Cumming
    National Eye Institute
  • Footnotes
    Acknowledgements  NEI/NIH EY025403; NSF DGE-1632976; Intramural research program at NEI/NIH
Journal of Vision October 2020, Vol.20, 1344. doi:https://doi.org/10.1167/jov.20.11.1344
Abstract

Primary visual cortex (V1) has long been of interest in the study of binocular integration, as it receives largely separate monocular streams of input, but its outputs are almost entirely binocular. Here we hypothesize that the selectivity of V1 neurons to binocular disparity (i.e., the difference in the location of images between the two eyes) is derived through convolutional processing, which amplifies correlations present in natural binocular inputs. Most previous models of disparity selectivity within V1 are based on comparing similar spatiotemporal input from each eye (often displaced by the neuron’s preferred disparity). Such processing, however, is limited in how much it can be modulated by disparity, because its disparity selectivity is conflated with its sensitivity to particular spatiotemporal patterns. In contrast, performing identical binocular processing across space (i.e., a spatial convolution of binocular subunits) can leverage the correlations present in natural visual input: disparity is typically correlated over large spatial scales (due to the depth structure of the visual scene), while pattern information changes rapidly within those scales. The convolution thus amplifies the correlated disparity signals while averaging out the subunits’ responses to particular patterns. We test such a model using recordings from V1 neurons in awake macaques, presenting random bar stimuli aligned to each neuron’s receptive field with randomly changing disparity. Unlike previous models of V1 disparity tuning, our model can almost completely reproduce the disparity tuning of a wide variety of binocular V1 neurons to complex stimulus patterns, explaining on average >80% of the disparity selectivity of disparity-tuned V1 neurons, with many neurons almost perfectly explained. Such a model also generalizes to non-disparity-tuned V1 cells. We thus suggest this as a general strategy for amplifying feature selectivity within V1.
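
To make the intuition concrete, the following is a minimal sketch, not the authors' fitted model: a disparity-energy-style binocular subunit, evaluated either at a single receptive-field position or pooled across space as a convolution, driven by random-bar stimuli like those described above. The Gabor filter shapes, stimulus statistics, and the variance-fraction metric are illustrative assumptions, not parameters or measures from the study.

```python
import numpy as np

def gabor(x, sigma=4.0, freq=0.15, phase=0.0):
    """1-D Gabor filter (assumed monocular filter shape)."""
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

rng = np.random.default_rng(0)
x = np.arange(-16, 17)
quad_pair = [gabor(x, phase=0.0), gabor(x, phase=np.pi / 2)]  # quadrature pair

def subunit_map(left, right):
    """Binocular subunit output at every spatial position (prefers zero disparity)."""
    out = 0.0
    for f in quad_pair:
        out = out + (np.convolve(left, f, "valid") + np.convolve(right, f, "valid")) ** 2
    return out

disparities = np.arange(-8, 9)
n_patterns, n_pix = 500, 256
single, pooled = [], []              # tuning curves, one row per random pattern
for _ in range(n_patterns):
    left = rng.choice([-1.0, 1.0], size=n_pix)      # random light/dark bar pattern
    s_row, p_row = [], []
    for d in disparities:
        right = np.roll(left, d)                    # same pattern, shifted by disparity d
        resp = subunit_map(left, right)
        s_row.append(resp[len(resp) // 2])          # single subunit at the RF center
        p_row.append(resp.mean())                   # identical subunits pooled across space
    single.append(s_row)
    pooled.append(p_row)
single, pooled = np.array(single), np.array(pooled)

def disparity_signal_fraction(curves):
    """Fraction of response variance driven by disparity rather than the bar pattern."""
    return curves.mean(axis=0).var() / curves.var()

print("single subunit :", round(disparity_signal_fraction(single), 3))
print("spatial pooling:", round(disparity_signal_fraction(pooled), 3))
# Pooling across space raises the disparity-driven fraction of the variance:
# the convolution averages out pattern-specific responses while the
# disparity-driven component, shared across positions, survives.
```

Under these assumptions, spatial pooling yields a markedly higher disparity-driven fraction of response variance than a single subunit, illustrating the amplification described above.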
