September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Selectivity, hyper-selectivity and the tuning of V1 neurons
Author Affiliations
  • David Field
    Department of Psychology, Cornell University
  • Kedarnath Vilankar
    Department of Psychology, Cornell University
Journal of Vision August 2017, Vol.17, 777. doi:
David Field, Kedarnath Vilankar; Selectivity, hyper-selectivity and the tuning of V1 neurons. Journal of Vision 2017;17(10):777.

© ARVO (1962-2015); The Authors (2016-present)

We explore two forms of selectivity in sensory neurons. The first is the traditional linear or 'basis selectivity' revealed by the classical receptive field, which describes the response of a neuron as a function of position and typically also represents the stimulus that optimally drives the neuron. The second we describe as "hyper-selectivity"; it is either implicitly or explicitly a component of several models, including sparse coding, gain control, some linear non-linear (LNL) models, and deep networks. Hyper-selectivity is unrelated to the stimulus that maximizes the response; rather, it is determined by the drop-off in response around that optimal stimulus. Models with hyper-selectivity permit what appear to be paradoxical results. For example, a neuron can be very narrowly tuned (hyper-selective) to a broadband stimulus, or broadly tuned to a narrow-band stimulus (linear selectivity). We note that the Gabor-Heisenberg tradeoffs limit the selectivity of linear neurons. However, non-linear neurons that are hyper-selective can easily break this limit, and we demonstrate this with both sparse coding and published data from V1 neurons. We also argue that results with over-complete sparse coding typically focus on the linear selectivity, yet the hyper-selectivity changes in important and systematic ways as the network becomes more overcomplete. We show that the receptive fields of neurons, when measured with spots or gratings, will misestimate the neuron's optimal stimulus; for four-times-overcomplete codes, we find that these estimates deviate by angles in the range of 40 degrees. Finally, although gain-control models, some linear non-linear models, and sparse coding have much in common, we argue that our approach to hyper-selectivity provides a deeper understanding of why these non-linearities are present in the early visual system.

Meeting abstract presented at VSS 2017

