Ann Hermundstad, John Briguglio, Mary Conte, Jonathan Victor, Vijay Balasubramanian, Gasper Tkacik; Natural scene statistics predict allocation of resources to nonlinear visual feature extraction. Journal of Vision 2014;14(15):26. doi: https://doi.org/10.1167/14.15.26.
Barlow's “efficient coding principle” has provided a powerful conceptual framework for the early stages of sensory processing. Specifically, in the regime where channel capacity is the limiting factor (e.g., the optic nerve bottleneck), sensory systems make efficient use of their limited resources by magnifying signal components with low variance and reducing those with high variance. It is much less widely recognized that the efficient coding principle also makes predictions about another regime of operation: when no transmission bottleneck is present, but meaningful feature identification is limiting. This is the regime relevant to the cortex, and here the efficient coding principle takes on a different character and makes a different prediction: that greater resources are devoted to signals with high variance. We test this hypothesis in the human visual system by measuring cortically determined perceptual thresholds for a mathematically well-defined set of textures (synthetic images with complex but controlled statistical properties). Our theory unambiguously predicts tens of independent and nontrivial parameters without fitting, based on the statistics of natural scenes. We find a strikingly detailed quantitative match between the predicted and measured values of these parameters. This study confirms that the efficient coding principle applies beyond the sensory periphery to describe cortical processing, and it provides clues to the mechanisms that enable a detailed match between the statistics of sensory inputs and the allocation of central resources.
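The contrast between the two efficient-coding regimes can be illustrated with a minimal numerical sketch. This is not the authors' model; it is an assumed toy allocation rule in which, under a transmission bottleneck, gains whiten the signal (gain falls with input variance), whereas in the sampling-limited regime, resources are taken to grow in proportion to signal variance.

```python
import numpy as np

# Hypothetical input-channel variances (illustrative values only).
variances = np.array([4.0, 1.0, 0.25])

# Bottleneck regime (e.g., the optic nerve): efficient coding equalizes
# output variance, so gain scales inversely with input standard deviation
# ("whitening"): low-variance components are magnified.
gain_bottleneck = 1.0 / np.sqrt(variances)

# Sampling-limited regime (no bottleneck, as posited for cortex): the
# prediction is that greater resources go to high-variance signals; here
# we sketch that as an allocation proportional to variance.
alloc_cortex = variances / variances.sum()

print(gain_bottleneck)  # smallest gain for the highest-variance channel
print(alloc_cortex)     # largest share for the highest-variance channel
```

The two rules move in opposite directions with variance, which is the qualitative signature the abstract describes: whitening at the periphery, variance-weighted allocation centrally.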