December 2023
Volume 23, Issue 15
Open Access
Optica Fall Vision Meeting Abstract
Contributed Session II: Computational modeling of shift in unique yellow for small stimuli
Author Affiliations
  • Carlos Rodriguez
    University of Pennsylvania
  • Ling-Qi Zhang
    University of Pennsylvania
  • Alexandra E. Boehm
    Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley
  • Maxwell J. Greene
    Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley
  • William S. Tuten
    Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley
  • David H. Brainard
    University of Pennsylvania
Journal of Vision December 2023, Vol. 23, 78. doi: https://doi.org/10.1167/jov.23.15.78
Citation: Carlos Rodriguez, Ling-Qi Zhang, Alexandra E. Boehm, Maxwell J. Greene, William S. Tuten, David H. Brainard; Contributed Session II: Computational modeling of shift in unique yellow for small stimuli. Journal of Vision 2023;23(15):78. https://doi.org/10.1167/jov.23.15.78.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Unique yellow (UY) is largely invariant to L:M cone proportion for spatially extended stimuli in healthy trichromats. However, a recent adaptive-optics study by Boehm et al. revealed that when stimulus size is reduced to a few arcmin, color appearance depends on the local L:M proportion in the patch of retina on which the stimulus is imaged. We aimed to determine whether such findings are consistent with a normative account of visual processing. Stimuli of 3.5 and 10 arcmin were simulated as isoluminant mixtures of 540 and 680 nm primaries. We modeled sensory encoding under adaptive-optics conditions using the open-source software ISETBio, for simulated retinal cone mosaics with varying local L:M proportions. The resultant cone excitations were decoded using a Bayesian image reconstruction algorithm (Zhang et al., 2022). For the 3.5 arcmin stimuli, as the local L:M proportion decreased, the 540 nm component of the reconstructions increased relative to the 680 nm component, qualitatively consistent with the experimental observations of Boehm et al. For the 10 arcmin stimuli, in contrast, reconstructions were stable across variation in local L:M cone proportion. Notably, reconstructions depend not only on the local L:M cone proportion but also on the proportion in the immediately surrounding retina, leading to a testable prediction. These computational observations frame the experimental results as a normative consequence of visual processing.
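The decoding step described above combines a forward model of cone excitations with a prior over stimuli. As an illustration only, and not the authors' ISETBio pipeline, the following sketch shows a maximum-a-posteriori (MAP) reconstruction under a toy linear-Gaussian model; the dimensions, render matrix, prior covariance, and noise level are all hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: 20 cones sampling a 10-element stimulus.
n_cones, n_pix = 20, 10

# Render matrix A: each cone's linear response to each stimulus element.
# (In a real model this would come from optics + mosaic sampling.)
A = rng.normal(size=(n_cones, n_pix))

# Zero-mean Gaussian stimulus prior with covariance Sigma; Gaussian
# excitation noise with standard deviation sigma.
Sigma = np.eye(n_pix)
sigma = 0.1

def map_reconstruct(r, A, Sigma, sigma):
    """MAP stimulus estimate for the linear-Gaussian model r = A s + noise.

    Solves (A^T A / sigma^2 + Sigma^{-1}) s = A^T r / sigma^2.
    """
    precision = A.T @ A / sigma**2 + np.linalg.inv(Sigma)
    return np.linalg.solve(precision, A.T @ r / sigma**2)

# Simulate excitations from a known stimulus, then reconstruct it.
s_true = rng.normal(size=n_pix)
r = A @ s_true + sigma * rng.normal(size=n_cones)
s_hat = map_reconstruct(r, A, Sigma, sigma)
```

In such a model, changing which rows of A correspond to L versus M cones (i.e., the local L:M proportion) changes how chromatic components of the stimulus are recovered, which is the qualitative effect the abstract reports for small stimuli.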

Footnotes
Funding: The University of Pennsylvania Post-Baccalaureate Research Education Program, grant number R25 GM071745, and a research gift from Meta.