**Color conveys important information for birds in tasks such as foraging and mate choice, but in the natural world color signals can vary substantially, so birds may benefit from generalizing responses to perceptually discriminable colors. Studying color generalization is therefore a way to understand how birds take account of suprathreshold stimulus variations in decision making. Previous studies on color generalization have focused on hue variation, but natural colors often vary in saturation, which could be an additional, independent source of information. We combine behavioral experiments and statistical modeling to investigate whether color generalization by poultry chicks depends on the chromatic dimension in which colors vary. Chicks were trained to discriminate colors separated by equal distances on a hue or a saturation dimension, in a receptor-based color space. Generalization tests then compared the birds' responses to familiar and novel colors lying on the same chromatic dimension. To characterize generalization we introduce a Bayesian model that extracts a threshold color distance beyond which chicks treat novel colors as significantly different from the rewarded training color. These thresholds were the same for generalization along the hue and saturation dimensions, demonstrating that responses to novel colors depend on similarity and expected variation of color signals but are independent of the chromatic dimension.**

Great tits (*Parus major*) vary in the level of yellow pigment of their breast feathers, and it has been suggested that hue signals individual foraging success and saturation signals overall body condition, providing independent signals during mate choice (Senar, Negro, Quesada, Ruiz, & Garrido, 2008). More generally, the light reflected from a uniform specular surface (such as a feather or beetle elytron) will vary in saturation but not hue, with the chromaticity of points on the surface lying on a line between the color locus of the illumination (i.e., achromatic) and the locus of the material seen with minimum specular reflectance. Consequently, hue and saturation may give different types of information about objects and surfaces and hence have different behavioral significance.

In each test the chicks chose among *N* alternatives, each characterized by its discriminable distance from the T+ color. To compare generalization between conditions, it is useful to translate our choice data into a model of the chicks' certainty about a reward and of how that certainty depends on the perceptual distance of novel colors from the T+ color. We assume that the chicks peck on the *N* alternative stimuli with a probability proportional to the probability that the stimulus is rewarded (Herrnstein, 1970). In accordance with ideal observer theory (e.g., Geisler, 2003), the probability that a chick will choose a stimulus *k* with a discriminable distance *X*_{k} to the rewarded training stimulus T+ can be determined by invoking Bayes's rule:

*P*(choice = *k* | *X*_{k}) = *P*(*X*_{k} | choice = *k*) *P*(choice = *k*) / Σ_{j} *P*(*X*_{j} | choice = *j*) *P*(choice = *j*). (Equation 1)

Here, *P*(choice = *k*) is our belief about the probability of a chick's choosing stimulus *k* prior to any training. If we assume no pretraining bias such as innate color preferences, *P*(choice = *k*) is the same for all *N* stimuli and is therefore simply 1/*N*. The expression *P*(*X*_{k} | choice = *k*) is the probability of *X*_{k} given the choice of stimulus *k*; this term is called the likelihood function. Since the prior probability *P*(choice = *k*) is constant, the likelihood function is the only term we have to fit to describe the probability of a chick's choosing stimulus *k* given its discriminability *X*_{k} from T+ (Equation 1). The denominator is the total probability of *X*_{j} summed over all stimuli.

For the likelihood function we consider two candidates. The first is a Gaussian centered on the rewarded color:

*P*(*X*_{k} | choice = *k*) ∝ exp(−(*X*_{k} − *T*)² / 2*σ*²), (Equation 2)

where (*X*_{k} − *T*) measures the number of JNDs a given stimulus is away from the rewarded pattern. This means that the standard deviation *σ* is the only parameter that needs to be fitted. The second candidate is a Laplace (exponential) function:

*P*(*X*_{k} | choice = *k*) ∝ exp(−|*X*_{k} − *T*| / *a*). (Equation 3)

Like *σ* in Equation 2, *a* determines the width of the function and is the only parameter to fit.
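To make the mechanics of Equations 1–3 concrete, the choice model can be sketched in a few lines of Python. This is an illustrative re-implementation, not the authors' MATLAB code; the distances, the width value, and the use of four stimuli below are made-up examples:

```python
import math

def choice_probabilities(distances, width, likelihood="gaussian"):
    """Posterior probability of choosing each stimulus (Equation 1).

    distances -- discriminable distance X_k of each stimulus from T+, in JNDs
    width     -- sigma for the Gaussian (Equation 2) or a for the Laplace
                 (Equation 3) likelihood
    With a uniform prior P(choice = k) = 1/N, the prior cancels and the
    posterior is simply the normalized likelihood.
    """
    if likelihood == "gaussian":
        lik = [math.exp(-x ** 2 / (2 * width ** 2)) for x in distances]
    else:  # Laplace
        lik = [math.exp(-abs(x) / width) for x in distances]
    total = sum(lik)  # denominator of Equation 1
    return [l / total for l in lik]

# Four stimuli: T+ (distance 0) and three novel colors at hypothetical
# distances of 1-3 JNDs. The rewarded color gets the highest probability.
probs = choice_probabilities([0.0, 1.0, 2.0, 3.0], width=2.0)
```

As expected, the predicted choice probability falls off monotonically with distance from T+, and the rate of the fall-off is controlled entirely by the single width parameter.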

To decide between the two likelihood functions we use a nonparametric bootstrap. For *n* = 1,000 iterations, new data sets are created by randomly sampling from the original data set with a sample size equivalent to the original data. For each new data set the model performs a maximum (log) likelihood estimation for both types of fit and compares them in a likelihood ratio test. Since we compute log likelihoods, we can calculate the difference instead of the ratio:

*r* = log *L*(*a* | *D*′) − log *L*(*σ* | *D*′), (Equation 4)

with *L*(*a* | *D*′) being the likelihood given the Laplace function and *L*(*σ* | *D*′) being the likelihood given the Gaussian function for a given data set *D*′. This way we obtain a distribution of *n* likelihood ratios *r*, from which the mean and the 95% confidence limits are calculated (explicitly, the values of the 97.5 and 2.5 percentiles). This nonparametric bootstrap is performed by the function fit_exp_and_gauss.m in MATLAB (available on request), which returns a distribution of bootstrapped log likelihood ratios.
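The bootstrap comparison can be sketched as follows. This is a simplified Python stand-in for fit_exp_and_gauss.m, not the authors' code: the choice counts are invented, whole count vectors are resampled as the bootstrap unit, and a crude grid search replaces a proper optimizer:

```python
import math
import random

def log_likelihood(width, data, likelihood):
    """Multinomial log likelihood of choice counts under Equation 1."""
    total = 0.0
    for distances, counts in data:  # one set of choice counts per entry
        if likelihood == "gaussian":
            logs = [-x ** 2 / (2 * width ** 2) for x in distances]  # Equation 2
        else:
            logs = [-abs(x) / width for x in distances]             # Equation 3
        m = max(logs)  # log-sum-exp for the denominator of Equation 1
        lse = m + math.log(sum(math.exp(v - m) for v in logs))
        total += sum(c * (v - lse) for c, v in zip(counts, logs))
    return total

def max_log_likelihood(data, likelihood):
    """Crude maximum-likelihood fit of the single width parameter (grid search)."""
    widths = [0.1 + 0.05 * i for i in range(200)]  # 0.1 .. 10.05 JNDs
    return max(log_likelihood(w, data, likelihood) for w in widths)

def bootstrap_log_ratios(data, n=1000, seed=1):
    """r = log L(a|D') - log L(sigma|D') for n resampled data sets (Equation 4)."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(n):
        resampled = [rng.choice(data) for _ in data]  # sample with replacement
        ratios.append(max_log_likelihood(resampled, "laplace")
                      - max_log_likelihood(resampled, "gaussian"))
    return ratios

# Made-up choice counts for four stimuli at distances 0-3 JNDs from T+
data = [([0, 1, 2, 3], [30, 12, 5, 3]), ([0, 1, 2, 3], [28, 14, 6, 2]),
        ([0, 1, 2, 3], [31, 10, 7, 2]), ([0, 1, 2, 3], [29, 13, 4, 4])]
ratios = bootstrap_log_ratios(data, n=200)
mean_r = sum(ratios) / len(ratios)
s = sorted(ratios)
lo, hi = s[int(0.025 * len(s))], s[int(0.975 * len(s)) - 1]  # 95% limits
```

If the resulting 95% interval of *r* excludes zero, one likelihood function gives a reliably better account of the data than the other.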

To fit the width parameter of the chosen likelihood function we use a Metropolis–Hastings algorithm. Over *n* iterations this algorithm approximates the posterior distribution of the parameter values by a set of samples, as is standard Bayesian modeling practice. In the following, *b* will be used for the parameter, standing for either *a* or *σ* depending on the outcome of the likelihood ratio tests.

The algorithm uses the value of *b* that maximizes the likelihood as the initial value *b*_{t}. It proposes a candidate value *b*′ that is randomly sampled from a normal distribution *P*(*b*′ | *b*_{t}) centered around *b*_{t}, and then compares the posterior (which here is the same as the likelihood, since we assume a uniform prior) of the current and the proposed model using the following standard Metropolis–Hastings acceptance ratio test. The posterior ratio (likelihood ratio, since we have equal priors) is calculated as

*r* = *L*(*b*′ | *D*) / *L*(*b*_{t} | *D*), (Equation 5)

where *L*(*b*′ | *D*) and *L*(*b*_{t} | *D*) are the posterior probabilities (likelihoods) of the proposed and the current model, respectively, given our choice-frequency data *D*. If the proposed model is more likely than the current model (*r* > 1), *b*′ is chosen as a sample of the posterior probability distribution of *b*. If the proposed model is less likely than the current model (*r* < 1), *b*′ is accepted with a probability equal to *r*; otherwise it is rejected and *b*_{t} is chosen instead. The chosen value then serves as the current value *b*_{t} in the next iteration. Repeating this process allows us to obtain samples from the posterior probability distribution together with its 97.5 and 2.5 percentiles (serving as a measure of our 95% confidence interval).
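A minimal Metropolis–Hastings sampler for the single parameter *b* might look like the following Python sketch. It is illustrative, not the authors' implementation: the log posterior here is a stand-in Gaussian pseudo-likelihood centered on a made-up maximum-likelihood estimate, and the step size is arbitrary:

```python
import math
import random

def metropolis_hastings(log_post, b_init, n=5000, step=0.2, seed=0):
    """Sample the posterior of a single parameter b under a uniform prior.

    Each iteration proposes b' ~ Normal(b_t, step) and accepts it with
    probability min(1, r), where r = L(b'|D) / L(b_t|D) (Equation 5);
    otherwise the current value b_t is kept.
    """
    rng = random.Random(seed)
    b = b_init
    samples = []
    for _ in range(n):
        b_prop = rng.gauss(b, step)
        log_r = log_post(b_prop) - log_post(b)  # log of Equation 5
        if rng.random() < math.exp(min(0.0, log_r)):  # accept w.p. min(1, r)
            b = b_prop
        samples.append(b)
    return samples

# Stand-in log posterior: Gaussian pseudo-likelihood around a hypothetical
# maximum-likelihood width of 2.0 JNDs with spread 0.1.
log_post = lambda b: -(b - 2.0) ** 2 / (2 * 0.1 ** 2)
samples = metropolis_hastings(log_post, b_init=2.0)
post = sorted(samples[1000:])  # discard burn-in
lo, hi = post[int(0.025 * len(post))], post[int(0.975 * len(post))]
```

The 2.5 and 97.5 percentiles of the retained samples give the 95% confidence interval described in the text.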

The goodness of fit is summarized by an *R*^{2} value that describes the percentage of variance in the data that the model accounts for.
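This measure is the usual coefficient of determination computed from observed choice frequencies and the model's predictions (a short Python sketch; the frequencies below are invented):

```python
def r_squared(observed, predicted):
    """Fraction of variance in the observed data accounted for by the model."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

# Hypothetical choice frequencies vs. model predictions for four stimuli
r2 = r_squared([0.55, 0.25, 0.12, 0.08], [0.53, 0.27, 0.13, 0.07])
```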

*identical*. Here, we compare four *different* colors simultaneously, which all have a different choice probability that has to be considered (Equation 1). Since our model converts choice frequencies into probabilities, we can apply a two-alternative forced-choice threshold criterion despite using a multiple-choice paradigm to generate the data, by asking for the relative probability *P*_{r}(T+) of T+ compared to any novel color with a measurement value *X*:

*P*_{r}(T+) = *P*(T+) / (*P*(T+) + *P*(*X*)), (Equation 6)

with *P*(T+) as the probability of T+ and *P*(*X*) as the probability of a color at a discriminable distance *X*. For colors very close to T+, *P*_{r}(T+) is around 50% and increases with increasing discriminability of novel colors from T+ until it reaches 100% for colors that are very different from T+. Therefore, we can interpolate the difference threshold using a criterion of *P*_{r}(T+) = 75%. If we assume a Gaussian or a Laplace likelihood function (Equations 2 and 3), and disregard normalization, Equation 6 can be written as

*P*_{r}(T+) = 1 / (1 + exp(−*X*² / 2*σ*²)) (Equation 7)

for a Gaussian likelihood function and

*P*_{r}(T+) = 1 / (1 + exp(−*X* / *a*)) (Equation 8)

for a Laplace likelihood function. As becomes obvious from Equations 7 and 8, the thresholds can be directly calculated from the estimated parameters (*σ* or *a*). Assuming a threshold criterion of 0.75 (75%), solving for *X* reveals a linear relationship between the parameter and the threshold, with *X*_{threshold} = 1.482*σ* for a Gaussian likelihood function and *X*_{threshold} = 1.099*a* for a Laplace likelihood function. This threshold interpolation is performed for each parameter value in the posterior distribution, which was obtained using the Metropolis–Hastings algorithm, as described in the previous section (Equation 5). This way a probability distribution of thresholds, with mean and upper and lower 95% confidence limits of the mean, can be obtained as summary statistics and a measure of confidence to compare generalization between conditions.

The plotting routine takes the posterior samples of *b* obtained using the Metropolis–Hastings algorithm (see Fit) and plots the mean as well as the standard deviation as a function of the perceptual distance *X* from T+ (Figure 3).

(The *R*^{2} values of the mean fit for Groups 1–4 are, respectively, 0.97, 0.95, 0.93, and 1.) The mean values of *σ* [with 95% confidence limits in brackets] obtained using parametric bootstrapping for Groups 1–4 are, respectively, 2.13 [1.73, 2.78], 1.67 [1.36, 2.12], 1.76 [1.43, 2.17], and 1.96 [1.59, 2.43].

If the hue and saturation of the yellow breast feathers of great tits (*Parus major*) convey different types of information about the bird's quality (Senar et al., 2008), and if hue and saturation can generally provide complementary information, then the ability to analyze these two chromatic dimensions of color independently when encountering novel colors would allow birds to optimize decisions, for example about what to eat or which mate to choose. This raises the question of whether the chromatic dimension influences the degree to which birds generalize responses to novel colors.

**References**

… *Harmonia axyridis*. *Behavioral Ecology and Sociobiology*, 61(9), 1401–1408. doi:10.1007/s00265-007-0371-9

… *Vision Research*, 37(16), 2183–2194. doi:10.1016/S0042-6989(97)00026-6

… *The American Naturalist*, 153(2), 183–200. doi:10.1086/303160

… *Journal of Comparative Physiology*, 141(1), 47–52. doi:10.1007/BF00611877

… *Annales Zoologici Fennici*, 35(2), 67–77.

… In *The visual neurosciences* (pp. 825–838). Cambridge, MA: MIT Press.

… *Psychophysics: Methods and theory*. New York: Lawrence Erlbaum Associates.

… *Animal Behaviour*, 66, 15–36. doi:10.1006/anbe.2003.2174

… *Proceedings of the Royal Society B: Biological Sciences*, 274(1621), 1941–1948. doi:10.1098/rspb.2007.0538

… *Journal of the Experimental Analysis of Behavior*, 13(2), 243–266. doi:10.1901/jeab.1970.13-243

… In *Encyclopedia of animal behavior* (Vol. 1, pp. 470–475). Cambridge, MA: Academic Press.

… *Journal of Experimental Biology*, 218(2), 184–193. doi:10.1242/jeb.111187

… In *Cognitive biology* (pp. 129–146). Cambridge, MA: MIT Press.

… *Journal of Experimental Biology*, 202(21), 2951–2959.

… *Biological Reviews*. doi:10.1111/brv.12230

… *Animal Behaviour*, 19(3), 542–547. doi:10.1016/S0003-3472(71)80109-4

… (*Parus major*) reflects both pigment acquisition and body condition. *Behaviour*, 145, 1195–1210. doi:10.1163/156853908785387638

… *Science*, 237(4820), 1317–1323. doi:10.1126/science.3629243

… *Behavioral and Brain Sciences*, 24, 629–640. doi:10.1017/S0140525X01000061

… *Proceedings of the Royal Society B: Biological Sciences*, 265(1394), 351–358. doi:10.1098/rspb.1998.0302

… *Perception & Psychophysics*, 63(8), 1293–1313. doi:10.3758/BF03194544

… *Perception & Psychophysics*, 63(8), 1314–1329. doi:10.3758/BF03194545

… *Color science* (2nd ed.). New York: Wiley.