Abstract
The visual system summarizes complex scenes to extract meaningful features (Barlow, 1959; Marr, 1976) by using image primitives (edges, bars), encoded physiologically by specific configurations of receptive fields (Hubel & Wiesel, 1962). We recently proposed a pattern-filtering approach, based on the principle of maximally efficient information coding under real-world physical limitations (Punzi & Del Viva, VSS 2006), that is a good predictor of an early stage of visual analysis. When applied to black-and-white images, the model predicts from very general principles the structure of visual filters that closely resemble well-known receptive fields, and it identifies salient features such as edges and lines. A comparison with the performance of human observers showed that human sensitivity closely follows the model predictions (Del Viva & Punzi, VSS 2006). Here, the same approach is applied to a set of colored natural images, in order to investigate the role of color in the initial stages of image processing and edge detection. Again, the model identifies salient features in these more complex and realistic images, using both color and luminance information. The model predicts, however, that color information is used in a very different way from luminance information. The results show that equiluminant patterns are far from being efficient coders of information: they are either too common (uniformly colored regions) or too rare, and are therefore discarded by our approach. These results thus provide a first-principles theoretical explanation for the presence of cells in primary visual areas that do not discriminate between chromatic and achromatic spatial patterns (see, for example, Johnson et al., 2001).
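The filtering criterion described above, retaining only spatial patterns of intermediate probability while discarding those that are too common or too rare, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the use of 3x3 binarized patches and the thresholds `p_min` and `p_max` are illustrative assumptions (in the model the cutoffs follow from physical limits such as memory and output bandwidth, not from hand-picked values).

```python
import numpy as np
from collections import Counter

def candidate_filters(image, p_min=1e-4, p_max=1e-2):
    """Keep 3x3 binary patterns of intermediate empirical probability.

    Patterns that are too common (e.g. uniform regions) or too rare are
    inefficient coders of information and are discarded. The thresholds
    here are illustrative, not the model's derived cutoffs.
    """
    # Binarize the image around its mean luminance.
    binary = (image > image.mean()).astype(np.uint8)
    h, w = binary.shape
    counts = Counter()
    # Count every 3x3 patch (patches hashed via their raw bytes).
    for i in range(h - 2):
        for j in range(w - 2):
            counts[binary[i:i + 3, j:j + 3].tobytes()] += 1
    total = (h - 2) * (w - 2)
    # Retain only patterns whose probability lies in the efficient range.
    return {k for k, c in counts.items() if p_min <= c / total <= p_max}
```

On a noisy image many patterns fall in the intermediate range and survive, whereas a uniform image yields a single, maximally common pattern that is discarded, mirroring the abstract's point about uniformly colored regions.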
Supported by NIH grant EY-04802.