September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Dynamical neural model of lightness computation and perceptual fading of retinally stabilized images
Author Affiliations
  • Michael Rudd
    University of Nevada, Reno
  • Idris Shareef
    University of Nevada, Reno
Journal of Vision September 2024, Vol.24, 980. doi:https://doi.org/10.1167/jov.24.10.980
© ARVO (1962-2015); The Authors (2016-present)
Abstract

We recently proposed a neural model that accounts, with less than 6% error, for lightness matches made to Staircase Gelb and simultaneous contrast displays comprising real illuminated surfaces. Here, we demonstrate how the model accounts for the perceptual fading that occurs when images are stabilized on the retina (Troxler, 1804; Riggs et al., 1953). In the model, cortical lightness computations are derived from transient ON and OFF cell responses in the early visual system that are generated in the course of fixational eye movements. The ON and OFF responses are sorted by eye movement direction in visual cortex to produce a set of spatiotopic maps of ON and OFF activations. Activations within these maps trigger spatial filling-in of lightness and darkness within independent ON and OFF networks, which are combined at the final modeling stage to compute perceived reflectance (lightness). We elaborate these mechanisms to produce a more detailed neurophysiological theory: we propose how the early temporal responses of ON and OFF cells are read out (decoded) in visual cortex to trigger lightness and darkness induction signals, and we explicitly model cortical magnification, which further improves the fit to psychophysical data. Two key takeaways are: (1) the model accounts for multiple lightness phenomena, including the fading of stabilized images, with high quantitative precision and in a biologically plausible way; (2) the estimated rates of the fixational eye movements known as microsaccades (Martinez-Conde et al., 2004) are too low to explain the dynamics of lightness phenomenology. We suggest that the higher-rate eye movements known as tremor can better account for the perceptual data within an otherwise identical neural framework. Correspondences between the model's processing stages and cortical neurophysiology will be discussed, and the computations performed at different model stages will be illustrated through a combination of still images and movies.
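The fading mechanism described above can be illustrated with a toy 1-D simulation. This is our own simplified sketch, not the authors' implementation: here, transient responses are modeled simply as rectified frame-to-frame differences of the retinal image, eye movements as random ±1-pixel shifts ("tremor-like" jitter), and filling-in as a leaky integrator refreshed by the ON minus OFF transients. When the image is stabilized, the transients vanish and the integrated signal decays, mimicking perceptual fading.

```python
import numpy as np

# Toy 1-D sketch (our simplification, not the authors' model): transient ON/OFF
# responses arise only when fixational eye movements shift the image on the
# retina; a leaky filled-in lightness signal is refreshed by those transients
# and decays once the image is stabilized.

rng = np.random.default_rng(0)
scene = np.zeros(100)
scene[40:60] = 1.0  # a light patch on a dark background

def run(steps_jitter, steps_stable, decay=0.9):
    lightness = np.zeros_like(scene)
    prev = scene.copy()
    trace = []
    for t in range(steps_jitter + steps_stable):
        # tremor-like jitter of +/-1 pixel during the first phase, then stabilized
        shift = int(rng.integers(-1, 2)) if t < steps_jitter else 0
        cur = np.roll(scene, shift)
        diff = cur - prev                      # transient response to retinal change
        on = np.clip(diff, 0.0, None)          # rectified ON transients
        off = np.clip(-diff, 0.0, None)        # rectified OFF transients
        lightness = decay * lightness + (on - off)  # leaky filled-in signal
        prev = cur
        trace.append(float(np.abs(lightness).max()))
    return trace

trace = run(steps_jitter=100, steps_stable=100)
during = max(trace[:100])   # signal strength while the eye jitters
after = trace[-1]           # signal strength after 100 stabilized steps
print(during > 0 and after < during)  # the stabilized image fades
```

The jitter amplitude, decay constant, and differencing scheme are illustrative assumptions; the point is only the qualitative behavior: with retinal motion the filled-in signal is continually refreshed, and without it the signal decays geometrically toward zero.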
