Abstract
Perception is an outcome of neuronal computations, and perception changes only if the underlying neuronal responses change. For a single image, perception is more sensitive to changes in some pixel patterns than in others, possibly because visual neurons respond preferentially to adjustments in those pixels. We investigated how perceptual discriminability of different perturbations of an image relates to neuronal receptive fields – the image perturbations that evoke the greatest increase in a neuron's or a population's responses. We began our analysis with the simplifying assumption that neuronal responses are deterministic. Under the further assumption that perceptual discriminability reflects the magnitude of change in neuronal response (the L2 norm), receptive fields, and no other aspect of neuronal computation, completely determine how perceptual discriminability varies across all perturbations of an image. We then generalized our analysis to stochastic neuronal responses, in which case the receptive fields, together with response variabilities and noise correlations, co-determine the pattern of perceptual discriminability. Under the L2-norm assumption, which is supported by our empirical analysis, perceptual discriminability can be interpreted as a measure of distance (a metric) on the stochastic manifold of neuronal responses. Two different patterns of perceptual discriminability for the same set of image perturbations therefore reflect a difference between the underlying neuronal response manifolds, and hence a difference in neuronal computations. We developed a metric to quantify the difference between the perceptual predictions (or metric properties) of a pair of neuronal response models. This metric compares the patterns of perceptual discriminability for perturbations of multiple images all at once, and can be flexibly applied to simple (e.g. single-layer) and complex (e.g. multi-layer) neuronal models with deterministic or stochastic outputs.
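The deterministic case above can be illustrated with a minimal sketch (not the authors' code; the linear model, matrix `W`, and function names below are illustrative assumptions). For a deterministic response model, a small image perturbation `dx` changes the population response by approximately `J dx`, where the rows of the Jacobian `J` play the role of receptive fields; under the L2-norm assumption, discriminability is `||J dx||`, so the full discriminability pattern over perturbations is fixed by the receptive fields alone (through `J.T @ J`):

```python
import numpy as np

# Illustrative linear-response model: r(x) = W x, so the Jacobian J = W,
# and each row of W acts as one neuron's receptive field.
rng = np.random.default_rng(0)
n_pixels, n_neurons = 16, 8
W = rng.normal(size=(n_neurons, n_pixels))  # assumed receptive fields

def discriminability(dx, J=W):
    """L2 magnitude of the neuronal response change to perturbation dx."""
    return np.linalg.norm(J @ dx)

# Two pixel perturbations of equal size can differ in discriminability,
# depending only on how they project onto the receptive fields:
dx1 = np.eye(n_pixels)[0]
dx2 = np.eye(n_pixels)[1]
d1, d2 = discriminability(dx1), discriminability(dx2)

# Equivalent quadratic form: d(dx)^2 = dx^T (J^T J) dx, showing that
# J^T J alone determines the discriminability pattern.
assert np.isclose(d1**2, dx1 @ (W.T @ W) @ dx1)
```

In the stochastic generalization described in the abstract, the quadratic form would additionally involve the response covariance, so response variability and noise correlations enter alongside the receptive fields.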