Abstract
Gaze-contingent rendering (also called foveated rendering) is a technique for increasing rendering efficiency by displaying reduced-fidelity content outside the fixation region. It has the potential to lower computational costs as well as reduce bandwidth and latency for cloud-based rendering. Color discrimination is known to be degraded in the periphery due to photoreceptor distribution (Curcio & Allen, 1990) and cortical magnification (Abramov et al., 1991), suggesting a potential for significant savings. However, an eccentricity-dependent color discrimination model is required to reduce peripheral color accuracy without detection. We built a model describing peripheral color difference tolerances, adapted from the CIEDE2000 color difference formula. We used eccentricity-dependent parameters for hue and chroma based on peripheral chromatic discrimination thresholds measured by Hansen et al. (2009). We conducted two experiments to test our model. First, we compared predicted versus actual thresholds by testing multiple levels of CIELAB chroma and direction (increased or decreased chroma) on three image types (simple, vector, and natural). Second, we evaluated the utility of the model as a visual difference predictor (VDP) for per-channel bit reduction of natural images (peripheral images were rendered at lower bit depth). In both experiments, subjects freely viewed high-resolution static images while real-time eye tracking drove gaze-contingent peripheral color degradation, and reported whether they noticed any artifacts. Results indicate the model slightly overestimates color difference thresholds for some subjects and image types. There is a strong trend of content dependency, with more complex images producing higher and more consistent thresholds across subjects. No difference was found between chroma directions. Our simple model shows some predictive power as a VDP for gaze-contingent color degradation. However, incorporating additional perceptual effects, such as chromatic crowding and peripheral spatial frequency sensitivity, would likely produce more accurate predictions. Further study is required for practical applications.
Acknowledgement: NVIDIA Research
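As a rough illustration of the modeling approach summarized in the abstract, the sketch below shows one way eccentricity-dependent chroma and hue parametric factors (kC, kH) could be folded into a standard CIEDE2000 difference to decide whether a peripheral color change is tolerable. The linear scaling functions, slope values, and tolerance are placeholder assumptions for illustration, not the parameters fit in this work; scikit-image's deltaE_ciede2000 stands in for the paper's implementation.

```python
import numpy as np
from skimage.color import deltaE_ciede2000  # CIEDE2000 with kL, kC, kH parametric factors


def chroma_hue_factors(ecc_deg, kc_slope=0.12, kh_slope=0.09):
    """Hypothetical linear growth of the CIEDE2000 parametric factors kC and kH
    with retinal eccentricity (degrees). The slopes are placeholders, not the
    values fit to the Hansen et al. (2009) thresholds in the paper."""
    kC = 1.0 + kc_slope * ecc_deg
    kH = 1.0 + kh_slope * ecc_deg
    return kC, kH


def is_imperceptible(lab_ref, lab_test, ecc_deg, tolerance=1.0):
    """Predict whether the change from lab_ref to lab_test (CIELAB) stays below
    the assumed discrimination threshold at the given eccentricity."""
    kC, kH = chroma_hue_factors(ecc_deg)
    dE = deltaE_ciede2000(lab_ref, lab_test, kL=1.0, kC=kC, kH=kH)
    return dE <= tolerance


# Example: a pure chroma boost that is visible at the fovea may be tolerated
# in the periphery once kC and kH have grown with eccentricity.
lab_ref = np.array([60.0, 40.0, 20.0])
lab_test = np.array([60.0, 46.0, 23.0])
print(is_imperceptible(lab_ref, lab_test, ecc_deg=0.0))   # False with these placeholders
print(is_imperceptible(lab_ref, lab_test, ecc_deg=30.0))  # True with these placeholders
```

In a gaze-contingent pipeline, a test of this kind (or the threshold surface it implies) would gate how aggressively chroma or bit depth may be reduced at each eccentricity.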