Abstract
Adaptation is thought to be important for optimizing visual coding, yet performance improvements with adaptation have been difficult to demonstrate for stimulus dimensions beyond mean luminance and color. We explored the functional consequences of contrast adaptation in a new way: by adapting images rather than observers to simulate theoretically complete adaptation to an environment. This allowed us to probe the effects of long-term adaptation over time scales that are difficult to test by adapting an observer. The adaptation was modeled as gain changes in the cones and in multiple post-receptoral channels tuned to different color-luminance directions. Image sets were sampled from different environments, and the individual images were rendered after adjusting the gains so that the average response within each channel was equal across the two environments. This centers contrast responses on the average of the color distribution for a given environment and scales contrast sensitivity inversely with the gamut of the distribution along different color-luminance axes. Visual performance with the resulting adapted images was assessed with a search task for colored targets among neutral distracters, both shown as Gaussian blobs superimposed at random locations across the images. Search times were compared for pairs of original and adapted images and for corresponding targets, so that the two stimuli were equivalent except for the simulated changes with adaptation. For natural environments that vary widely in their color distributions, pronounced improvements in contrast discrimination and search times are readily demonstrated, lending support to functional accounts of contrast adaptation. Measuring performance across the range of environments routinely encountered allows us to assess the extent to which adaptation might significantly impact contrast coding, and when performance could be enhanced by pre-adapting images for observers.
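To make the gain-normalization step concrete, the following minimal Python/NumPy sketch illustrates one way the simulated adaptation could be implemented: cone responses are first centered on the environment's mean (a von Kries-style gain change), then projected onto a small set of color-luminance channels whose gains are rescaled so that the average response in each channel matches that of a reference environment. The particular channel directions, the use of mean absolute response as the measure of gamut, and the final rendering step are illustrative assumptions, not the specific model, channels, or color spaces used in the study.

```python
import numpy as np

# Illustrative color-luminance channel directions in (L, M, S) cone space.
# The study uses multiple channels; these three are placeholders.
CHANNELS = np.array([
    [1.0,  1.0, 0.0],   # luminance-like (L+M)
    [1.0, -1.0, 0.0],   # L-M chromatic axis
    [0.0,  0.0, 1.0],   # S-cone axis
])

def adapt_images(env_pixels, ref_pixels):
    """Simulate complete adaptation of one environment relative to a reference.

    env_pixels, ref_pixels: arrays of shape (npix, 3) holding pooled (L, M, S)
    cone responses for the environment to be adapted and for the reference
    environment. Returns gain-adjusted pixels for the adapted environment.
    """
    # 1. Center responses on each environment's own mean (cone gain changes),
    #    expressing pixels as cone contrasts about that mean.
    env_mean = env_pixels.mean(axis=0)
    ref_mean = ref_pixels.mean(axis=0)
    env_contrast = (env_pixels - env_mean) / env_mean
    ref_contrast = (ref_pixels - ref_mean) / ref_mean

    # 2. Project the contrasts onto the post-receptoral channel directions.
    env_resp = env_contrast @ CHANNELS.T
    ref_resp = ref_contrast @ CHANNELS.T

    # 3. Per-channel gains: scale sensitivity inversely with the environment's
    #    gamut so the average response equals the reference environment's.
    gains = np.abs(ref_resp).mean(axis=0) / np.abs(env_resp).mean(axis=0)

    # 4. Apply the gains, map channel responses back to cone contrasts, and
    #    re-render about the reference mean (one possible rendering choice,
    #    assumed here for illustration).
    adapted_contrast = (env_resp * gains) @ np.linalg.pinv(CHANNELS.T)
    return ref_mean * (1.0 + adapted_contrast)
```

Under these assumptions, passing the pooled pixels of two image sets to `adapt_images` yields images whose average response in each channel matches the reference set, which is the sense in which the rendered images simulate theoretically complete adaptation to that environment.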