September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Developing a peripheral color tolerance model for gaze-contingent rendering
Author Affiliations & Notes
  • Lili Zhang
    Rochester Institute of Technology
  • Rachel Albert
    NVIDIA Research
  • Joohwan Kim
    NVIDIA Research
  • David Luebke
    NVIDIA Research
Journal of Vision September 2019, Vol.19, 298c.

Gaze-contingent rendering (also called foveated rendering) is a technique for increasing rendering efficiency by displaying reduced-fidelity content outside the fixation region. It has the potential to lower computational costs as well as reduce bandwidth and latency for cloud-based rendering. Color discrimination is known to be degraded in the periphery due to photoreceptor distribution (Curcio & Allen, 1990) and cortical magnification (Abramov et al., 1991), suggesting a potential for significant savings. However, reducing peripheral color accuracy without detectable artifacts requires an eccentricity-dependent color discrimination model. We built a model describing peripheral color difference tolerances, adapted from the CIEDE2000 color difference model, using eccentricity-dependent parameters for hue and chroma based on peripheral chromatic discrimination thresholds measured by Hansen et al. (2009). We conducted two experiments to test the model. First, we compared predicted and measured thresholds across multiple levels of CIELAB chroma and direction (increased or decreased chroma) on three image types (simple, vector, and natural). Second, we validated the model's utility as a visual difference predictor (VDP) for per-channel bit reduction of natural images, where peripheral images were rendered at lower bit depth. In both experiments, subjects freely viewed high-resolution static images with real-time eye tracking and peripheral color degradation, and reported whether they noticed any artifacts. Results indicate that the model slightly overestimates color difference thresholds for some subjects and image types. There is a strong trend of content dependency, with more complex images producing higher and more consistent thresholds across subjects. No difference was found between chroma directions. Our simple model shows some predictive power as a VDP for gaze-contingent color degradation. However, accounting for additional perceptual effects such as chromatic crowding and peripheral spatial frequency characteristics would likely produce more accurate results, and further study is required for practical applications.
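The two ingredients of the approach can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it uses a Euclidean CIELAB difference decomposed into lightness, chroma, and hue terms (rather than full CIEDE2000), with chroma and hue tolerances relaxed linearly with eccentricity. The slope values and the per-channel quantizer parameters are placeholder assumptions, not fitted values from the study.

```python
import math

def peripheral_delta_e(lab_ref, lab_test, ecc_deg, slope_c=0.1, slope_h=0.08):
    """Eccentricity-weighted color difference (illustrative sketch).

    Splits a Euclidean CIELAB difference into lightness, chroma, and
    hue components, then divides the chroma and hue terms by weights
    that grow linearly with eccentricity, so the same physical color
    change yields a smaller predicted difference in the periphery.
    The slopes are placeholder values, not fitted thresholds.
    """
    L1, a1, b1 = lab_ref
    L2, a2, b2 = lab_test
    dL = L2 - L1
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    dC = C2 - C1
    # Hue-difference term: residual of the a/b-plane distance after
    # removing the chroma difference (CIE94/CIEDE2000 convention).
    dH_sq = max((a2 - a1) ** 2 + (b2 - b1) ** 2 - dC ** 2, 0.0)
    kC = 1.0 + slope_c * ecc_deg  # chroma tolerance grows with eccentricity
    kH = 1.0 + slope_h * ecc_deg  # hue tolerance grows with eccentricity
    return math.sqrt(dL ** 2 + (dC / kC) ** 2 + dH_sq / kH ** 2)

def quantize_channel(value, bits):
    """Requantize one 8-bit channel value to the given bit depth and
    map it back to the 0-255 range (the kind of per-channel bit
    reduction tested in the second experiment)."""
    levels = (1 << bits) - 1
    return round(round(value / 255 * levels) * 255 / levels)
```

With these placeholder slopes, a chroma step from (50, 20, 10) to (50, 30, 10) yields a smaller weighted difference at 30° eccentricity than at fixation, which is the qualitative behavior a peripheral tolerance model needs before its parameters are fit to measured discrimination thresholds.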

Acknowledgement: NVIDIA Research 
