Abstract
Understanding why two surface patches with the same luminance appear different when they are embedded in different spatial contexts is a challenging problem in vision research. Here, we derive a Bayesian observer model to account for such spatial context phenomena. We first demonstrate that these perceptual effects are inconsistent with a context-induced change in the prior distribution of the Bayesian observer model. We then show that the effect of spatial context on perception can instead be explained by a change in the likelihood function resulting from efficient coding. According to the efficient coding hypothesis, the observer reallocates its limited sensory resources so that visual information is processed efficiently. We use this principle to derive the changes in the likelihood function of our observer model that follow from an efficient representation of the stimulus within its spatial context. We argue that the spatial context serves as side information for allocating resources efficiently, and we propose that the resulting efficient representation leads to an asymmetry in the likelihood function that is consistent with the perceptual phenomena. We then show that the model's predictions account for the observed perceptual effect of enhanced discriminability around the average luminance of the spatial context. These results suggest that contextual phenomena in perception can be understood as an efficient representation of the stimulus within its context.
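A minimal sketch of this decomposition (the notation here is illustrative and not taken from the paper): writing $s$ for the surface luminance, $m$ for the sensory measurement, and $c$ for the spatial context, the observer's inference can be expressed as
\[
p(s \mid m, c) \;\propto\; p(m \mid s, c)\, p(s),
\]
where the prior $p(s)$ is left unchanged by the context and the context enters only through the likelihood $p(m \mid s, c)$, whose shape, including its asymmetry, is assumed to follow from encoding the stimulus efficiently with $c$ available as side information.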