Vision Sciences Society Annual Meeting Abstract  |  August 2023
Journal of Vision, Volume 23, Issue 9  |  Open Access
A unifying theory explains seemingly contradicting biases in perceptual estimation
Author Affiliations
  • Xue-Xin Wei
    UT Austin
  • Michael Hahn
    Stanford University
Journal of Vision, August 2023, Vol. 23(9), 5082. https://doi.org/10.1167/jov.23.9.5082
Abstract

Human perception is often biased. As key signatures of the underlying computational process, these biases are of tremendous interest to brain scientists. Over the past few decades, Bayesian inference has emerged as a major theoretical framework for understanding perception and, more generally, cognition. To understand these biases, previous work has proposed a number of conceptually different and even seemingly contradictory ingredients, including attraction to a Bayesian prior, repulsion from the prior due to efficient coding, and central tendency effects on a bounded range. We present a unifying Bayesian theory of biases in perceptual estimation. We theoretically demonstrate an additive decomposition of perceptual biases into attraction to a prior, repulsion away from regions of high encoding precision, and regression away from the boundary. Importantly, the results reveal a simple and universal rule for predicting the direction of perceptual biases. To fit our Bayesian framework to data, we developed a general numerical fitting procedure that estimates the model components, including the prior, the encoding, and the noise magnitude, by maximizing the likelihood of trial-by-trial response data. The model is thus fitted to the full response distribution conditioned on the stimulus, capturing both the perceptual bias and the response variability. We applied our modeling framework to a number of datasets collected in previous experiments, including studies of color perception, orientation perception, perceptual learning of motion direction, the estimation of numerosity, and the estimation of the length of time intervals. Applications to these experimental datasets reveal both domain-specific and general insights. In particular, the results reveal a major difference between the perceptual biases of circular variables and scale variables. Overall, our theory accounts for, and leads to a new understanding of, biases in the perception of a variety of stimulus attributes.
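One schematic way to write the additive decomposition described above is given below. This is an illustrative form only, patterned after standard Bayesian observer analyses; the coefficients c_1 and c_2, the boundary term b_bnd, and the exact shape of each component are assumptions, not taken from the abstract.

```latex
% Schematic decomposition of the estimation bias b(\theta_0).
% p: prior; J: encoding precision (Fisher information);
% c_1, c_2: loss-dependent constants; b_{\mathrm{bnd}}: boundary term.
% All specific forms here are illustrative placeholders.
b(\theta_0) \;\approx\;
\underbrace{\frac{c_1}{J(\theta_0)}\,\frac{d}{d\theta}\log p(\theta_0)}_{\text{attraction to the prior}}
\;+\;
\underbrace{c_2\,\frac{d}{d\theta}\!\left[\frac{1}{J(\theta_0)}\right]}_{\text{repulsion from high precision}}
\;+\;
\underbrace{b_{\mathrm{bnd}}(\theta_0)}_{\text{regression from the boundary}}
```

The trial-by-trial maximum-likelihood fitting procedure can likewise be sketched in code. The following is a minimal illustrative sketch, not the authors' implementation: it assumes a circular stimulus variable, a von Mises prior, efficient coding via the prior CDF, Gaussian sensory noise in the encoded space, posterior-mean decoding, and Gaussian motor noise; all function names, parameterizations, and starting values are placeholders.

```python
# Minimal sketch (not the authors' code) of maximum-likelihood fitting of an
# encoding-decoding Bayesian observer to trial-by-trial estimation data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

GRID = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)  # stimulus/response grid
DX = GRID[1] - GRID[0]

def response_distributions(kappa_prior, sensory_sd, motor_sd):
    """p(response | stimulus) for every grid stimulus; shape (n_grid, n_grid)."""
    prior = vonmises.pdf(GRID, kappa_prior, loc=np.pi)
    prior /= prior.sum() * DX
    cdf = np.cumsum(prior) * DX                       # efficient-coding mapping F(theta)
    # Likelihood of a measurement located at GRID[j] given stimulus GRID[i]:
    # Gaussian in the encoded (CDF) space.
    like = np.exp(-0.5 * ((cdf[None, :] - cdf[:, None]) / sensory_sd) ** 2)
    m_given_theta = like / (like.sum(axis=1, keepdims=True) * DX)
    # Posterior over stimuli for each measurement, and its circular mean estimate.
    post = m_given_theta.T * prior[None, :]
    post /= post.sum(axis=1, keepdims=True) * DX
    est = np.angle(post @ np.exp(1j * GRID)) % (2.0 * np.pi)
    # Blur the deterministic estimates with (approximately wrapped) motor noise.
    d = np.angle(np.exp(1j * (GRID[None, :] - est[:, None])))
    motor = np.exp(-0.5 * (d / motor_sd) ** 2)
    motor /= motor.sum(axis=1, keepdims=True) * DX
    return m_given_theta @ motor * DX                 # p(r | theta) on the grid

def nearest_idx(values):
    """Index of the closest grid point (circular distance) for each value."""
    d = np.angle(np.exp(1j * (GRID[None, :] - np.asarray(values)[:, None])))
    return np.argmin(np.abs(d), axis=1)

def negative_log_likelihood(log_params, stimuli, responses):
    kappa_prior, sensory_sd, motor_sd = np.exp(log_params)  # enforce positivity
    resp = response_distributions(kappa_prior, sensory_sd, motor_sd)
    p = resp[nearest_idx(stimuli), nearest_idx(responses)] * DX
    return -np.sum(np.log(p + 1e-12))

# Usage with synthetic trials (replace with real stimulus/response pairs).
rng = np.random.default_rng(0)
stimuli = rng.uniform(0.0, 2.0 * np.pi, 200)
responses = (stimuli + rng.normal(0.0, 0.2, 200)) % (2.0 * np.pi)
fit = minimize(negative_log_likelihood, x0=np.log([2.0, 0.05, 0.1]),
               args=(stimuli, responses), method="Nelder-Mead")
print("fitted (kappa_prior, sensory_sd, motor_sd):", np.exp(fit.x))
```

Because the grid-based computation propagates sensory noise through the decoder, the objective scores the full response distribution conditioned on each stimulus, so the fit is constrained by both the bias and the variability of responses, in the spirit of the procedure described in the abstract.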
