Abstract
We explore two forms of selectivity in sensory neurons. The first is the traditional linear or "basis selectivity" revealed by the classical receptive field, which describes the neuron's response as a function of stimulus position and is typically taken to represent the stimulus that optimally drives the neuron. The second form, which we call "hyper-selectivity," is implicitly or explicitly a component of several models, including sparse coding, gain control, some linear-nonlinear (LNL) models, and deep networks. Hyper-selectivity is unrelated to the stimulus that maximizes the response; rather, it is determined by the drop-off in response around that optimal stimulus. Models with hyper-selectivity allow results that appear paradoxical: for example, a neuron can be narrowly tuned (hyper-selective) to a broadband stimulus, or broadly tuned (linearly selective) to a narrow-band stimulus. We note that the Gabor-Heisenberg tradeoff applies to the selectivity of linear neurons; however, hyper-selective non-linear neurons can easily break this limit, and we demonstrate this with both sparse coding and published data from V1 neurons. We also argue that results with overcomplete sparse coding typically focus on linear selectivity, even though hyper-selectivity changes in important and systematic ways as a network becomes more overcomplete. We show that receptive fields measured with spots or gratings misestimate a neuron's optimal stimulus: for four-times overcomplete codes, we find that these estimates deviate from the optimal stimulus by angles in the range of 40 degrees. Finally, although gain-control models, some linear-nonlinear models, and sparse coding have much in common, we argue that our approach to hyper-selectivity provides a deeper understanding of why these non-linearities are present in the early visual system.
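For reference, the Gabor-Heisenberg limit invoked above is the standard uncertainty relation for linear filters. In one common statement (notation ours, not taken from the abstract), a filter's spatial spread $\Delta x$ and spatial-frequency spread $\Delta f$ satisfy

$$\Delta x \,\Delta f \;\ge\; \frac{1}{4\pi},$$

with equality achieved only by Gabor functions (Gaussian-windowed sinusoids). A linear neuron's tuning bandwidth is therefore tied to the extent of its receptive field, whereas the response of a hyper-selective non-linear neuron can fall off around its optimal stimulus more sharply than this bound would allow.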
Meeting abstract presented at VSS 2017