Vision Sciences Society Annual Meeting Abstract  |   August 2010
Bayesian and neural computations in lightness perception
Author Affiliations
  • Michael E. Rudd
    Howard Hughes Medical Institute
    Department of Physiology and Biophysics, University of Washington
Journal of Vision August 2010, Vol.10, 415. doi:10.1167/10.7.415
      Michael E. Rudd; Bayesian and neural computations in lightness perception. Journal of Vision 2010;10(7):415. doi: 10.1167/10.7.415.

Abstract

The task of computing lightness (i.e., perceived surface reflectance) from the spatial distribution of luminances in the retinal image is an underdetermined problem because the causal effects of reflectance and illumination are confounded in the image. Some recent approaches to lightness computation combine Bayesian priors with empirical estimates of the illuminant to compute reflectance from retinal luminance. Here, I argue for a different sort of Bayesian computation that takes local signed contrast (roughly, "edges") as its input. Sensory edge information is combined with Bayesian priors that instantiate assumptions about the illumination, together with other rules such as grouping by proximity. The model incorporates a number of mechanisms from the lightness literature, including edge integration, anchoring, illumination frameworks, and contrast gain control. None of these mechanisms is gratuitous; all are required to account for the data. I demonstrate how the model works by applying it to the results of lightness-matching studies involving simple stimuli. Failures of lightness constancy are quantitatively accounted for by the misapplication of priors that probably favor lightness constancy in natural environments. Assimilation and contrast occur as byproducts. The rules that adjust the priors must be applied in a particular order, suggesting an underlying neural computation that first weights local edge data according to the observer's assumptions about the illumination, then updates these weights on the basis of the spatial organization of the stimulus, and finally spatially integrates the weighted contrasts prior to a final anchoring stage. This order of operations is consistent with the idea that top-down attentional feedback sets the gains of early cortical contrast detectors in visual areas V1 or V2, after which higher-level visual circuits with larger receptive fields further adjust these gains in light of the wider spatial image context. The spatial extent of perceptual edge integration suggests that lightness is represented in or beyond area V4.
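The weight-integrate-anchor sequence described above can be illustrated with a toy one-dimensional computation. This is a minimal sketch under stated assumptions, not the author's actual model: the function name, the default unit edge weights, and the highest-luminance-is-white anchoring rule are illustrative choices (the last follows the standard anchoring convention in the lightness literature).

```python
import numpy as np

def lightness_via_edge_integration(luminances, edge_weights=None):
    """Toy 1-D edge-integration sketch of lightness computation.

    luminances: luminance values for adjacent patches, left to right.
    edge_weights: optional per-edge gains, standing in for the model's
        illumination- and grouping-dependent weighting of edge data;
        defaults to 1 (full edge integration).
    Returns log-lightness estimates anchored so that the highest
    integrated value maps to white (log lightness 0).
    """
    lum = np.asarray(luminances, dtype=float)
    # Local signed contrast ("edge") at each border between patches,
    # taken as the step in log luminance.
    edges = np.diff(np.log(lum))
    if edge_weights is None:
        edge_weights = np.ones_like(edges)
    # Spatially integrate the weighted edges to obtain relative
    # log lightness across the display.
    rel = np.concatenate(([0.0], np.cumsum(edge_weights * edges)))
    # Anchoring stage: the highest value is treated as white.
    return rel - rel.max()

# Three patches under uniform illumination; with unit weights the
# highest-luminance patch anchors to 0 and the others fall below it.
vals = lightness_via_edge_integration([10.0, 40.0, 20.0])
```

With unit weights this reduces to anchored log luminance; down-weighting selected edges (e.g., those attributed to an illumination border) changes the integrated lightness of every patch beyond them, which is the sense in which the order of the weighting, integration, and anchoring operations matters.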

Rudd, M. E. (2010). Bayesian and neural computations in lightness perception [Abstract]. Journal of Vision, 10(7):415, 415a, http://www.journalofvision.org/content/10/7/415, doi:10.1167/10.7.415.