The visual system is capable of quickly extracting relevant information from a large amount of visual data. To do so, the early stages of analysis must provide a compact image representation that extracts meaningful features (Barlow, 1959; Marr, 1976). Color in natural scenes is a rich source of information, but a worthwhile question is whether color is sufficiently important to justify its added computational load during the early stages of visual processing, when strong compression is needed. A pattern-filtering approach (Punzi & Del Viva, VSS-2006), based on the principle of most efficient information coding under real-world physical limitations, was applied to color images of natural scenes to investigate the possible role of color in the initial stages of image representation and edge detection. That study, performed on photographic RGB images, confirmed the effectiveness of the pattern-filtering approach in predicting from first principles the structure of visual representations, and additionally suggested that color information is used very differently from luminance information (Del Viva, Punzi & Shevell, VSS-2009). The present study is significantly more detailed and uses the photoreceptor color space of MacLeod and Boynton, in which luminance and chromatic information are expressed separately. The results show that, when strict computational limitations are imposed, the use of color information does not significantly improve either the perceived quality of the compressed image or its information content over the use of luminance alone. These results suggest that early visual representations may not use color. Instead, color may be more suitable for a separate level of processing, following a rapid, initial luminance-based analysis.
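For readers unfamiliar with the MacLeod-Boynton representation, the separation of luminance and chromatic information follows from the standard transform of cone excitations: luminance is L + M, while chromaticity is carried by l = L/(L+M) and s = S/(L+M). The sketch below illustrates this transform only; it is not the authors' analysis code, and the function name and array layout are assumptions.

```python
import numpy as np

def lms_to_macleod_boynton(lms):
    """Convert cone excitations (L, M, S) to MacLeod-Boynton coordinates.

    Returns the luminance channel (L + M) and the two chromatic
    coordinates l = L / (L + M) and s = S / (L + M), so luminance and
    chromatic information occupy separate axes.  Assumes lms is an
    array whose last dimension holds nonzero (L, M, S) values.
    """
    L, M, S = lms[..., 0], lms[..., 1], lms[..., 2]
    luminance = L + M
    l_coord = L / luminance
    s_coord = S / luminance
    return luminance, l_coord, s_coord
```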
Supported by an Italian Ministry of University and Research Grant (PRIN 2007).