Abstract
Human perception is often biased. As key signatures of the underlying computational process, these biases are of great interest to brain scientists. Over the past decades, Bayesian inference has emerged as a major theoretical framework for understanding perception and, more generally, cognition. To explain these biases, previous work has proposed a number of conceptually different and even seemingly contradictory ingredients, including attraction to a Bayesian prior, repulsion from the prior due to efficient coding, and central tendency effects on a bounded range. We present a unifying Bayesian theory of biases in perceptual estimation. We theoretically demonstrate an additive decomposition of perceptual bias into attraction to a prior, repulsion away from regions of high encoding precision, and regression away from the boundary. Importantly, the results reveal a simple and universal rule for predicting the direction of perceptual biases. To fit our Bayesian framework to data, we developed a general numerical fitting procedure that estimates the model components, including the prior, the encoding, and the noise magnitude, by maximizing the likelihood of trial-by-trial response data. The model is thus fitted to the full response distribution conditioned on the stimulus, capturing both perceptual bias and response variability. We applied our modeling framework to datasets from previous experiments on color perception, orientation perception, perceptual learning of motion direction, the estimation of numerosity, and the estimation of the length of time intervals. These applications reveal both domain-specific and general insights; in particular, they expose a major difference between the perceptual biases of circular variables and those of scale variables. Overall, our theory accounts for, and leads to a new understanding of, biases in the perception of a variety of stimulus attributes.
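As a schematic illustration of the additive decomposition (the notation below is ours for exposition, not necessarily the paper's exact expression), write $p(\theta)$ for the prior over stimulus values and $F(\theta)$ for the Fisher information of the encoding; the bias $b(\theta)$ then takes the form
\[
b(\theta) \;\approx\; \underbrace{\frac{c_1}{F(\theta)}\,\frac{d}{d\theta}\log p(\theta)}_{\text{attraction to the prior}}
\;-\; \underbrace{\frac{c_2}{F(\theta)}\,\frac{d}{d\theta}\log F(\theta)}_{\text{repulsion from high precision}}
\;+\; \underbrace{b_{\mathrm{bnd}}(\theta)}_{\text{boundary regression}},
\]
where $c_1, c_2 > 0$ depend on the noise magnitude and the observer's loss function, and $b_{\mathrm{bnd}}(\theta)$ is appreciable only near the edges of a bounded stimulus range. Read this way, the sign of the bias at any stimulus value is governed by the two log-derivatives, which suggests how a simple, universal rule for the direction of perceptual biases could be stated.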
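To make the fitting procedure concrete, the following is a minimal Python sketch of maximum-likelihood estimation on trial-by-trial data; the function names, the generic optimizer, and the `model_response_pdf` interface are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch: fit model components by maximizing the trial-by-trial
# likelihood of responses given stimuli. Hypothetical interface; the real
# model would parametrize the prior, encoding, and noise inside
# model_response_pdf.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, stimuli, responses, model_response_pdf):
    """Negative log-likelihood of observed responses, trial by trial.

    model_response_pdf(params, s, r) must return the model's predicted
    density p(response = r | stimulus = s); it encapsulates the prior,
    the encoding precision, and the noise magnitude.
    """
    densities = np.array([model_response_pdf(params, s, r)
                          for s, r in zip(stimuli, responses)])
    return -np.sum(np.log(densities + 1e-12))  # small floor for stability

def fit(stimuli, responses, model_response_pdf, init_params):
    """Maximize the likelihood of the full conditional response distribution."""
    result = minimize(neg_log_likelihood, init_params,
                      args=(stimuli, responses, model_response_pdf),
                      method="Nelder-Mead")
    return result.x
```

Because the objective is the likelihood of the full response distribution conditioned on the stimulus, the fitted model is constrained by both the mean response (the bias) and its spread (the variability), as emphasized above.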