Abstract
A complete account of color perception should accurately predict the appearance of any distribution of light on the retina. Existing models of color vision that predict appearance under normal viewing conditions do not extend to stimuli at the finest spatial scales. We used adaptive optics microstimulation and a hue scaling paradigm to quantify color experience at the resolution of single cones, in an effort to elucidate the rules governing small-spot color appearance. These experiments yielded four trends: (1) The strongest predictor of hue was the cone type targeted: against a white background, L-cones elicited reddish percepts and M-cones greenish ones. Saturation judgments were less stereotyped, even between cones with the same photopigment. (2) The appearance of two neighboring cones targeted simultaneously was well predicted by the average of their responses when targeted alone (R² = 0.7, p < 0.01). (3) The mean response on trials where an L- and an M-cone were stimulated together was “white”. (4) When two L- or two M-cones were targeted, reported sensations were 8±2% more saturated than their average (p < 0.001). We extended these rules to model the appearance of larger stimuli, targeted to multiple cones, with a bootstrap procedure. The cone types illuminated were randomly updated on each iteration, and the appearance of each spot was predicted by sampling from the experimental data. As expected, response variability across simulations was inversely related to spot diameter, but saturation was systematically underestimated at larger sizes. These results suggest that the deviation from a simple average (rule #4) must increase with stimulus size.
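The bootstrap procedure described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the per-cone hue/saturation values, the L-cone fraction, and the 8% same-type saturation boost are placeholder assumptions standing in for the experimental data, and hue is encoded as a signed scalar (positive = reddish, negative = greenish) purely for illustration.

```python
import random
import statistics

# Hypothetical single-cone responses standing in for the hue-scaling data.
# Each entry is (hue, saturation); hue is signed (+ = reddish, - = greenish).
SINGLE_CONE_DATA = {
    "L": [(+0.8, 0.6), (+0.7, 0.3), (+0.9, 0.5)],
    "M": [(-0.8, 0.5), (-0.6, 0.4), (-0.9, 0.6)],
}

SAME_TYPE_BOOST = 1.08  # rule 4: ~8% saturation gain for same-type groups

def predict_spot(n_cones, l_fraction=0.6, rng=random):
    """Predict a spot's appearance by averaging single-cone responses
    sampled from the data (rule 2), boosting saturation when all the
    illuminated cones share a type (rule 4)."""
    types = [("L" if rng.random() < l_fraction else "M")
             for _ in range(n_cones)]
    samples = [rng.choice(SINGLE_CONE_DATA[t]) for t in types]
    hue = statistics.mean(h for h, _ in samples)
    sat = statistics.mean(s for _, s in samples)
    if len(set(types)) == 1:  # all cones the same type
        sat *= SAME_TYPE_BOOST
    return hue, sat

def simulate(n_cones, n_iter=1000, rng=random):
    """Bootstrap: re-draw cone types and responses on each iteration and
    summarize the distribution of predicted hues."""
    results = [predict_spot(n_cones, rng=rng) for _ in range(n_iter)]
    hues = [h for h, _ in results]
    return statistics.mean(hues), statistics.stdev(hues)
```

Averaging over more cones shrinks the spread of predicted hues, reproducing the inverse relation between response variability and spot diameter; because the saturation boost here applies only when every cone matches, large mixed spots come out desaturated, mirroring the systematic underestimation the abstract reports.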