Abstract
Color vision serves two distinct functions. The first is to segment objects from one another and from the background; color differences across an image provide an important cue in this regard. The second is to provide information about object identity; color is generally considered to be a perceptual correlate of an object's intrinsic surface reflectance. In parallel with these two functions, two distinct experimental paradigms have been used to probe the adaptive mechanisms of color vision. The first employs measurements of detection and discrimination thresholds. This paradigm is attractive because threshold measurements are objective and precise. Threshold measurements, however, are silent about the way things look. Studies concerned with color appearance therefore generally employ subjective scaling and/or asymmetric matching methods, which assess appearance directly. Although both threshold and appearance methods may be used to build models of color vision, there has been relatively little consideration of whether a single model and set of parameters can simultaneously account for results from both paradigms. Understanding the answer to this question is critical if we are to leverage threshold measurements to build models of color appearance. In this talk I will review recent work from my lab, with attention to at least two of the following three issues: i) how compatible are the computational demands placed on the visual system for optimizing discrimination and identification across changes of viewing context? ii) what experimental and analytic logic can be used to link models of color discrimination and color appearance? and iii) can a common model account for the effect of context on both thresholds and appearance?