Abstract
Color vision is an impressive ability of the human visual system. Three types of cones in the retina, followed by three postreceptoral cone-opponent mechanisms, enable us to extract both luminance and chromatic information from our environment. Theories of efficient coding argue that the visual system evolved this way to process natural scenes optimally. The study of the statistical properties of natural images therefore has great potential to expand our understanding of visual perception. From that perspective, a central question is to characterize the role of chromatic information: in natural scenes, what information can be extracted from the chromatic channels that is not already present in luminance? The literature on this point is contradictory. On the one hand, several studies have investigated the joint distribution of luminance and chrominance, finding that luminance and chromatic edges are not independent (Fine et al., 2003) and that most edges are defined by luminance contrast, with color information being redundant (Zhou & Mel, 2008). On the other hand, Hansen and Gegenfurtner (2009) showed, using mutual information, that luminance and chromatic edges constitute relatively independent sources of information and that their independence increases along successive stages of visual processing. Here we improve and extend their analysis, using ~1000 images from the McGill calibrated color image database (Olmos & Kingdom, 2004) to quantify the redundancy of luminance and chrominance information in each image individually. While we replicate the main findings of Hansen and Gegenfurtner (2009), we also find that the estimated amount of mutual information depends on how the images are processed. The most critical step is divisive normalization, a late stage in the processing pipeline. How redundant chrominance and luminance are may thus depend on the precise definition of these two quantities, explaining some inconsistencies in the literature.
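The kind of analysis described above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' actual pipeline: the channel definitions (mean-RGB luminance, R−G opponency), the normalization parameters, and the plug-in histogram estimator of mutual information are all illustrative assumptions.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Plug-in estimate of mutual information (in bits) between two
    equally sized arrays, via a joint histogram."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over y
    py = pxy.sum(axis=0, keepdims=True)   # marginal over x
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def divisive_normalization(resp, sigma=0.1, ksize=7):
    """Divide each response by locally pooled energy (a standard
    normalization model; sigma and pool size are illustrative)."""
    pad = ksize // 2
    padded = np.pad(resp ** 2, pad, mode="edge")
    # local mean of squared responses via a box filter
    pooled = np.zeros_like(resp)
    for dy in range(ksize):
        for dx in range(ksize):
            pooled += padded[dy:dy + resp.shape[0], dx:dx + resp.shape[1]]
    pooled /= ksize * ksize
    return resp / (sigma + np.sqrt(pooled))

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))       # stand-in for a calibrated natural image
lum = img.mean(axis=2)              # crude luminance channel (assumption)
rg = img[..., 0] - img[..., 1]      # crude red-green opponent channel

mi_raw = mutual_information(lum, rg)
mi_norm = mutual_information(divisive_normalization(lum),
                             divisive_normalization(rg))
print(mi_raw, mi_norm)
```

Running such an estimate per image, before and after normalization, is one way to see how the measured luminance-chrominance redundancy depends on the processing stage at which the two channels are defined.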
Meeting abstract presented at VSS 2017