Abstract
It is generally accepted that biases in visual orientation perception can be understood in terms of active inference: perception involves combining prior expectations with noisy sensory estimates. Theoretical work has shown that sensory estimates can be instantiated in population code models in which tuning curve preferences are matched to the frequency of orientations in natural environments (i.e. efficient coding). In the present study, we link meso-scale neural responses in the human visual system to such theoretical models of orientation coding. We recorded human observers’ (n = 37) brain activity with EEG while they passively viewed randomly oriented gratings. Using univariate and multivariate decoding analyses, we found that neural responses to orientation were strongly anisotropic, but not in a way predicted by any leading model of neural coding. We therefore developed a novel generative modelling procedure to simulate EEG activity from arbitrarily specified sensory tuning functions. By applying decoding analyses to EEG data generated from population codes with known tuning properties, we were able to determine the coding scheme necessary to reproduce the empirical neural responses. We found that the underlying population code was one in which tuning preferences were redistributed to prioritise cardinal orientations and, most critically, to substantially over-represent horizontal relative to vertical orientations. Moreover, a population code that prioritises horizontal orientations alone was sufficient to produce many (but not all) of the anisotropic neural responses. We relate these findings to prior psychophysical and computational work that foreshadowed the importance of horizontal environmental structure to vision. More generally, our results provide insight into the encoding of environmental statistics in biological systems.