Article  |   August 2013
The biological significance of color constancy: An agent-based model with bees foraging from flowers under varied illumination
Samia Faruq, Peter W. McOwan, Lars Chittka
Journal of Vision August 2013, Vol.13, 10. doi:https://doi.org/10.1167/13.10.10
Abstract
The perceived color of an object depends on its spectral reflectance and the spectral composition of the illuminant. Thus when the illumination changes, the light reflected from the object also varies. This would result in a different color sensation unless a color constancy mechanism is in place, that is, the ability to form a consistent representation of colors across various illuminants and background scenes. We explore the quantitative benefits of various color constancy algorithms in an agent-based model of foraging bees, where agents select flower color based on reward. Each simulation is based on 100 “meadows” with five randomly selected flower species with empirically determined spectral reflectance properties, and each flower species is associated with realistic distributions of nectar rewards. Simulated foraging bees memorize the colors of flowers that they have experienced as most rewarding, and their task is to discriminate against other flower colors with lower rewards, even in the face of changing illumination conditions. We compared the performance of von Kries, White Patch, and Gray World constancy models with (hypothetical) bees with perfect color constancy, and with color-blind bees. A bee equipped with trichromatic color vision but no color constancy performed only ∼20% better than a color-blind bee (relative to a maximum improvement of 100% for perfect color constancy), whereas the most powerful recovery of reflectance in the face of changing illumination was generated by a combination of von Kries photoreceptor adaptation and a White Patch calibration (∼30% improvement relative to a bee without color constancy). However, none of the tested algorithms generated perfect color constancy.

Introduction
Colors are used by many animals, including humans, to identify salient objects under considerable variation in natural illumination. If the spectral composition of the illumination changes, then so does the light reflected from objects. For example, the impression of redness can be generated by a red object under white light, or a white object under red light. This makes object identification by color challenging, unless the viewing system possesses color constancy, the ability of the visual system to compensate for illumination changes. But how does color constancy work, and which mechanisms work best for realistic and natural ranges of illumination changes? 
Pollinating bees are a powerful model for studying the adaptive significance of color vision as they exist in a mutualistic relationship with flowering plants that signal reward by color (Chittka & Menzel, 1992; Menzel & Backhaus, 1991). Generalist flower visitors such as honeybees use their color vision to detect flowers, to memorize the colors and patterns of rewarding flowers, and to discriminate against less rewarding flowers. This paradigm rules the life of the forager bee, and accuracy of color choice is crucial for foraging success and therefore biological fitness. Because of this prime importance of color choices in a bee's life, these insects provide a unique system to explore questions of the function and benefits of color constancy. 
Like humans, most species of bees are trichromatic. Honeybees (Apis mellifera) are typical in terms of their photoreceptor equipment across the diversity found in bee pollinators, and are thus used as a model system here. A crucial difference between human and honeybee color vision is that bees possess an ultraviolet (UV) receptor with a peak sensitivity (λmax) at 344 nm, in addition to blue (λmax = 436 nm) and green (λmax = 544 nm) receptors (Peitsch et al., 1992). Honeybees and most other species of bees have no red receptors (Figure 1). 
Figure 1
 
Spectral sensitivity functions of the UV, B, and G photoreceptors in the honeybee (Apis mellifera) eye, as determined by intracellular recordings (Peitsch et al., 1992). The functions are here normalized to a maximum of unity; in reality, absolute sensitivity can differ between receptor types by more than an order of magnitude, as a result of photoreceptor adaptation. Adapted from figure 6A in Peitsch et al. (1992) with permission from Springer/Rightslink.
Numerous studies on color vision in bees have shown that color choice is, to some degree, independent of the spectral content of the illuminant (Dyer & Chittka, 2004; Lotto & Chittka, 2005; Mazokhin-Porshnjakov, 1966; Neumeyer, 1980, 1981a; Werner, Menzel, & Wehrhahn, 1988), although the compensation for the illumination change is not complete and color constancy is therefore imperfect. Thus, bees do not “discount” the illuminant, and indeed the spectral quality of natural illumination holds important information about, for example, weather conditions and time of day. Animals therefore face the challenge of remaining color constant while also being able to perceive changes in the light (Lotto & Chittka, 2005; Skorupski & Chittka, 2011). While some authors have held that color constancy needs to be essentially perfect for color vision to be at all useful (Land, 1977), the penalties for departures from perfect color constancy paid under natural conditions need to be quantified on a case-by-case basis, depending on the actual variation of the illumination and the colors that need to be distinguished. The fundamental question we ask here is how effective various computational color constancy algorithms are under biologically relevant, natural conditions. 
Several models of computational color constancy have been proposed, with various assumptions about the integration of properties of the illuminant and the scene surfaces (Brainard & Wandell, 1986; Brainard et al., 2006; Fairchild & Reniff, 1995; Hurlbert, 1998; Land, 1983, 1986; Maloney & Wandell, 1986; Werner, Sharpe, & Zrenner, 2000) and with different methods of assessing performance (Hansen, Olkkonen, Walter, & Gegenfurtner, 2006; Ling & Hurlbert, 2008). Empirical studies have quantified performance of color constancy methods under various conditions (Brainard, Kraft, & Longère, 2003; Kraft & Brainard, 1999; Ling & Hurlbert, 2008). What is missing is a framework by which the efficiency of various color constancy algorithms can be tested under biologically relevant conditions. 
Here we explore quantitatively (a) the extent to which a visual model with color constancy outperforms a color vision model with no color constancy in foraging success; and (b) the biological usefulness of various computational color constancy algorithms for foraging under conditions of changing illumination. The efficiency of various color constancy mechanisms is assessed using multi-agent based simulations of bees foraging under highly realistic conditions, using natural illuminants and flower spectral reflectance functions, as well as floral rewards directly drawn from empirical data. 
We begin by exploring the efficiency of a von Kries adaptation mechanism, where color constancy is mediated only by photoreceptor adaptation, so that the relative sensitivity of a receptor increases when it is poorly stimulated, and decreases when light in its spectral domain is strong (Dyer, 1998; von Kries, 1905). However, there is considerable evidence that more central nervous processes (i.e., beyond adaptation in the retina) are also involved in color constancy, and these are explored in the retinex theory developed by Edwin Land (Land, 1959a, 1959b, 1959c, 1977; Land & McCann, 1971). The term retinex combines elements of retina and cortex, highlighting the importance of both peripheral and cortical mechanisms in human color constancy. While bees of course do not have a cortex, there is nonetheless evidence that more central nervous processing might also be involved in maintaining color constancy in bees (Lotto & Wicklein, 2005; Werner et al., 1988) as well as fish (Ingle, 1985). Whilst numerous variants of the retinex theory have been developed (see Ebner, 2007, and Hurlbert, 1998), the focus of this paper is to quantify the biological usefulness of color constancy mechanisms. Two classical retinex algorithms in the computational theory of color constancy (Buchsbaum, 1980; Land, 1959a, 1959b) are applied to the scenes in our experiments. These are (a) the Gray world assumption, which holds that the color components of the scene in Red, Green, and Blue ([RGB]; or UV, Blue, and Green [UBG] in the bee) average to gray; and (b) the White patch calibration, which uses the most intense region of the scene as a reference point and assumes that this point must be white. 
Methods
Modeling the foraging environment
To test adaptive hypotheses in foraging, it is useful to employ Agent-Based Modeling, where the interactions of an autonomous agent (here: a bee forager), using defined memory and decision heuristics, with a spatially explicitly modeled environment can be formalized (Dornhaus, Klügl, Oechslein, Puppe, & Chittka, 2006). To model the foraging environment and bee agents, we used NetLogo (Wilensky, 1999), a simple programmable Agent-Based Modeling system for simulating natural and social phenomena, which is especially suited to modeling interactions that develop over time between agents and a changing environment, such as variation in the illuminant. 
The meadow in which the modeled bees forage is a 350 × 350 celled map. Each cell contains either a flower or green foliage. Five thousand flowers consisting of five randomly selected flower species (i.e., 1,000 flowers per species) are distributed randomly within the map. The flowers are assigned their empirically determined reflectance spectra as downloaded from the Floral Reflectance Database (FReD; Arnold, Faruq, Savolainen, McOwan, & Chittka, 2010). Cells not occupied by flowers are assumed to have an average green foliage reflectance spectrum (Chittka, Shmida, Troje, & Menzel, 1994). The hive (and the single bee agent at the start of a foraging bout) is placed in the center of the map. 
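As an illustration, the meadow setup described above might be sketched as follows (a minimal sketch in Python rather than NetLogo; the placeholder species labels stand in for FReD reflectance spectra):

```python
import random

GRID_SIZE = 350             # the meadow is a 350 x 350 celled map
N_SPECIES = 5               # five randomly selected flower species per meadow
FLOWERS_PER_SPECIES = 1000  # 1,000 flowers per species (5,000 in total)

def build_meadow(species_pool, rng=random):
    """Return a dict mapping (x, y) cells to a flower species identifier.

    Cells absent from the dict are treated as green foliage. In the original
    model the species identifiers would carry FReD reflectance spectra; here
    they are simple placeholder labels.
    """
    species = rng.sample(species_pool, N_SPECIES)
    all_cells = [(x, y) for x in range(GRID_SIZE) for y in range(GRID_SIZE)]
    flower_cells = rng.sample(all_cells, N_SPECIES * FLOWERS_PER_SPECIES)
    return {cell: species[i % N_SPECIES] for i, cell in enumerate(flower_cells)}

meadow = build_meadow([f"species_{k}" for k in range(20)])  # hypothetical species pool
hive = (GRID_SIZE // 2, GRID_SIZE // 2)                     # hive at the center of the map
```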
Performance by bees (depending on the quality of the color constancy algorithm; see below) is quantified as the amount of nectar collected by the bee agent in the agent-based simulations under changes of illumination. Nectar standing crop distributions of natural flowers are assigned to each of the five randomly occurring flower species in each foraging environment. These values are based on empirically determined nectar standing crop distributions found in real plants (Raine & Chittka, 2007a, 2007b). 
Implementation of foraging rules and color choice
The agent-based model captures how the foraging bee's color visual system is used to choose between the flower colors in the environment, using simple rules of matching and maximizing (Chittka, Gumbert, & Kunze, 1997; Greggers & Menzel, 1993) and rules based on empirically determined foraging behavior (Chittka, Spaethe, Schmidt, & Hickelsberger, 2001). 
Figure 2 illustrates the rules of color choice in the model by showing the states that the modeled bee can assume. The rules governing bees' tendency for flower constancy, i.e., to continue visiting a recently visited flower species rather than switching species (Chittka & Thomson, 1997), are implemented in the simulation to determine when the bee agent should forage on a flower. For example, in the choice state the decision to forage on a flower is based on the following: 
Figure 2
 
Bee color choice behavior in an agent-based modeling environment based on a flower constancy foraging strategy. The bee agent begins by searching for flowers in a “patch” in its vicinity (flowers within its visual field). The agent remains in the move-and-search state until flowers are found, which results in bee movements on the grid. The bee then decides whether or not to forage on a flower based on the other flowers available in the patch. These flowers are compared with the memory of the most rewarding flower color, and the bee stays with this flower type or switches to a different one depending on the perceived similarity between the memorized and the actually encountered flower color.
  •  
    If the recent collection average is 0.4 to 1 μl, the bee matches its choices against other rewarding flower colors in this range (matching).
  •  
    If a particular colored flower exceeds 1 μl in one visit, the bee exclusively visits this flower species if available (maximizing; Greggers & Menzel, 1993).
The recent nectar collection average is a running average of the nectar collected from the three most recently visited flowers. 
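A minimal Python sketch of these matching and maximizing rules, assuming nectar volumes in microliters and a simple running average over the last three rewards (the behavior below 0.4 μl is an assumption of the sketch):

```python
from collections import deque

recent_rewards = deque(maxlen=3)  # nectar (microliters) from the last three flowers visited

def recent_collection_average():
    """Running average of nectar collected from the three most recently visited flowers."""
    return sum(recent_rewards) / len(recent_rewards) if recent_rewards else 0.0

def update_strategy(last_reward_ul, last_species):
    """Return ('maximize', species) or ('match', None) according to the rules above."""
    recent_rewards.append(last_reward_ul)
    if last_reward_ul > 1.0:
        # maximizing: visit this species exclusively while it remains available
        return ("maximize", last_species)
    if 0.4 <= recent_collection_average() <= 1.0:
        # matching: choose among rewarding flower colors in this range
        return ("match", None)
    # below 0.4 microliters the rules above leave the strategy open;
    # defaulting to matching here is an assumption of this sketch
    return ("match", None)
```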
The search and move states shown in Figure 2 are determined as follows: 
  1.  
Step 1: If a target is located within the bee's visual field, here defined as a 14 × 14 cell array surrounding the agent, move straight to the target.
  2.  
Step 2: If no target is found within the visual field, choose a direction at random and move a distance d, then look for any target in this new location. If none is found, choose another random direction and move a distance d. Repeat this step until a target is found, then move to Step 1 (Viswanathan, Raposo, & da Luz, 2008).
This foraging rule is essentially a “random walk” spatial movement strategy (Heinrich, 1979; Pyke, 1981; Zimmerman, 1981). In our model, the bee can perform any of four behaviors (searching for any flower, moving from one point to another, foraging on a flower by taking its available nectar content, or deciding on a flower to forage on; this includes recalling a known flower color from memory; Figure 2). The bee can switch between species, which can happen in one of the following circumstances: (a) the flower species are similar in color (in which case the switch probability depends on the similarity between the two species; see section “Calculation of color loci in color space: Color discrimination” below); (b) a flower of the previously visited species is not available in the immediate vicinity after extensive search; or (c) the nectar levels of the currently visited species fall below the running average of nectar collected for that species (Chittka et al., 1997). 
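The move-and-search rule might be sketched as follows (a Python sketch; the step length d and the choice of the nearest visible flower as the target are assumptions not specified above):

```python
import math
import random

GRID_SIZE = 350     # as in the meadow sketch above
VISUAL_FIELD = 14   # visual field: 14 x 14 cell array surrounding the agent
STEP_D = 10         # random-walk step length d (a value assumed for this sketch)

def flowers_in_view(pos, meadow):
    """All flower cells within the visual field centred on the agent's position."""
    x0, y0 = pos
    half = VISUAL_FIELD // 2
    return [cell for cell in meadow
            if abs(cell[0] - x0) <= half and abs(cell[1] - y0) <= half]

def search_and_move(pos, meadow, rng=random):
    """Step 2: random-direction moves of length d until a target is in view (Step 1)."""
    while not flowers_in_view(pos, meadow):
        angle = rng.uniform(0, 2 * math.pi)
        pos = (min(max(pos[0] + int(STEP_D * math.cos(angle)), 0), GRID_SIZE - 1),
               min(max(pos[1] + int(STEP_D * math.sin(angle)), 0), GRID_SIZE - 1))
    # Step 1: a target is within the visual field, move straight to the nearest one
    return min(flowers_in_view(pos, meadow),
               key=lambda c: (c[0] - pos[0]) ** 2 + (c[1] - pos[1]) ** 2)
```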
In order to quantify exactly how performance is affected by a change in lighting, the bee agent forages under the canonical light (set as normal daylight D65; see below) for a set period of time, that is, 50 flower visits to any of the flowers in the meadow. After 50 visits to any of the five floral species in the meadow, the light condition is changed from daylight D65 to one of the following naturally occurring illumination conditions: forest shade, woodland shade, or light filtering through small canopy gaps (small gap illuminant; Endler, 1993). The ability to store new experiences is blocked after the switch from the canonical light occurs (i.e., after the 50 flower visits), while the memories of flowers encountered under the canonical light are retained. This forces the bee to base its floral color choices on what it learned under the canonical light. 
The amount of nectar M collected by the bee agent is determined at the end of each simulation run. One hundred simulation runs are performed for each computational color constancy method. Each simulation run creates one new bee agent, and each time generates a new 350 × 350 map to distribute a new set of five randomly selected flower species (occurring 1,000 times each). 
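The protocol of a single simulation run can be summarized in a Python sketch; the bee interface used here (forage_once, learning_enabled, bout_finished) is purely hypothetical and serves only to make the sequence of events explicit:

```python
def run_simulation(bee, test_illuminant, n_training_visits=50):
    """One simulation run: forage under D65 daylight, then under a changed illuminant.

    `bee` is assumed to expose forage_once(illuminant) -> nectar collected in
    microliters, a `learning_enabled` flag, and bout_finished(); these names
    are placeholders for this sketch, not part of the published model.
    """
    nectar_total = 0.0
    for _ in range(n_training_visits):        # 50 flower visits under the canonical light
        nectar_total += bee.forage_once("D65")
    bee.learning_enabled = False              # block storage of new color experiences
    while not bee.bout_finished():            # continue foraging under the test illuminant
        nectar_total += bee.forage_once(test_illuminant)
    return nectar_total                       # M, the amount of nectar collected in the run
```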
Calculation of photoreceptor signals: von Kries receptor adaptation
The signal generated by a photoreceptor when viewing a particular colored object depends principally on the receptor's spectral sensitivity function and adaptation state, the illuminating light, and the viewed object's spectral reflectance. The relative quantum flux P in a given photoreceptor type is calculated as follows (Laughlin, 1981; Naka & Rushton, 1966):

P = R Σλ I_s(λ) D(λ) S(λ) Δλ

where I_s(λ) is the spectral reflectance of the stimulus (Arnold et al., 2010), D(λ) is the illuminant (Figure 3; for example, the D65 daylight function), S(λ) is the spectral sensitivity of the photoreceptor type in question, and Δλ is the wavelength step (i.e., 1 nm); the sum runs over the bee-visible range of 300–700 nm. We used the spectral sensitivity functions of the honeybee Apis mellifera (Peitsch et al., 1992; Figure 1). The sensitivity of the photoreceptors is adjusted by a sensitivity factor (R) as follows:

R = 1 / Σλ I_b(λ) D(λ) S(λ) Δλ

where I_b(λ) is the spectral reflectance of the background to which the receptor is adapted (see below). 
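A numerical sketch of these two quantities, with all spectra sampled at 1 nm steps so that plain sums stand in for the weighted sums above:

```python
import numpy as np

wavelengths = np.arange(300, 701)  # 1 nm steps across the bee-visible range (300-700 nm)

def quantum_catch(stimulus_refl, illuminant, sensitivity, background_refl):
    """Relative quantum flux P, scaled by the von Kries sensitivity factor R.

    All arguments are spectra sampled on `wavelengths`:
    stimulus_refl   : I_s(lambda), spectral reflectance of the viewed object
    illuminant      : D(lambda), the illuminant spectrum
    sensitivity     : S(lambda), the photoreceptor's spectral sensitivity
    background_refl : I_b(lambda), reflectance of the adapting background (green foliage)
    """
    # R sets the background to P = 1, i.e., half-maximal excitation after the
    # nonlinear transform E = P / (P + 1) introduced below
    R = 1.0 / np.sum(background_refl * illuminant * sensitivity)
    return R * np.sum(stimulus_refl * illuminant * sensitivity)
```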
Figure 3
 
Normalized irradiance functions of the types of illumination used. Spectral distribution of daylight D65 (Wyszecki & Stiles, 1982), forest shade, woodland shade, and light filtered through small canopy gaps, all under sunny conditions (Endler, 1993), normalized to a maximum of unity. The lights are differently intense in different wavelength domains—for example, forest shade light is most intense around 550 nm, and so the light is green, whereas woodland shade is dominated by wavelengths in the 400–450 nm range, and thus appears bluish. Adapted from figure 1 in Chittka (1996) with permission from Elsevier/Rightslink and data from figures 6 through 8 in Endler (1993) with permission from Ecological Society of America.
Figure 4
 
Color loci of 1,572 flower colors in the color hexagon, and color shift under an illumination change. In this color space, angular position (as measured from the center, the uncolored point) corresponds to bee-subjective hue, so that color loci in the top corner indicate bee blue; top right corner: bee blue-green; bottom right corner: bee green, and so forth. The distance between two color loci corresponds to their discriminability. Large color shifts might corrupt the identification of flower species by color. The distance from the center to any of the corners is 1, and circles indicate distances from the center at steps of 0.1. Straight lines represent color shift from daylight D65 (dot end; Wyszecki & Stiles, 1982) to forest shade lighting (tip end; Endler, 1993) for each flower plotted, assuming von Kries receptor adaptation and no further correction. The line from the dot to tip represents the perceptual color shift of flowers under D65 daylight to forest shade lighting. Note that shifts in different areas of color space occur predominantly in different directions. Shifts appear especially pronounced in the blue-green and UV-green areas of color space, and less so in the green and UV-blue regions.
Adaptation via the coefficient R scales receptor sensitivity according to the light reflected from the background (Laughlin, 1981), and adjusts the sensitivity of the receptor so that it displays half the maximal response when viewing the background (with reflectance I_b(λ)). 
This is in line with the von Kries (1905) adaptation theory, which is based on the assumption that the sensitivity of a photoreceptor is scaled depending on the overall intensity of the light in the receptor's spectral domain. This self-shunting of receptors ensures that receptors can meaningfully code information over intensity ranges of several logarithmic units. Different spectral receptors can adjust their sensitivity independently of each other to some extent. Thus, such receptor adaptation can be considered one of several possible mechanisms in achieving color constancy: von Kries receptor adaptation can partially compensate for the effects of illuminant changes (Dyer & Chittka, 2004; Ives, 1912). 
Natural illumination spectra of forest shade, small gap, and woodland shade are taken from Endler (1993; Figure 3). These spectra provide a range of the natural illuminants that a foraging bee might realistically encounter during a single foraging bout. Forest shade is the condition under a closed canopy cover, with most light transmitted through (or reflected from) green foliage. Woodland shade is an illumination condition found where trees do not form a closed canopy and therefore gaps of open sky (of varied sizes) generate a mixture of light with that reflected from and transmitted through green leaves. Gaps (“sun flecks”) are defined as patches of direct sunlight within an otherwise forested environment (Endler, 1993). Since the illumination spectra in Endler (1993) extend only down to 400 nm, but bees are sensitive in the UV down to 300 nm (Briscoe & Chittka, 2001), we extrapolated down to 300 nm from 400 nm using a polynomial trend of the illumination spectra. Reflectance spectra of flowers are downloaded from the FReD (Arnold et al., 2010). 
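The extrapolation step might be sketched as follows; the polynomial degree is an assumption of this sketch rather than a value reported in the paper:

```python
import numpy as np

def extend_illuminant_to_uv(wl_measured, irradiance_measured, degree=3):
    """Extrapolate an illuminant spectrum measured from 400 nm down to 300 nm.

    A polynomial is fitted to the measured spectrum and evaluated over
    300-399 nm; negative extrapolated values are clipped to zero. The
    polynomial degree is an assumption of this sketch.
    """
    coeffs = np.polyfit(wl_measured, irradiance_measured, degree)
    wl_uv = np.arange(300, 400)
    irradiance_uv = np.clip(np.polyval(coeffs, wl_uv), 0.0, None)
    return (np.concatenate([wl_uv, wl_measured]),
            np.concatenate([irradiance_uv, irradiance_measured]))
```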
The conversion of photoreceptor quantum catch P into receptor excitation E (normalized to a maximum of unity) follows a nonlinear function (Chittka, Beier, Hertel, Steinmann, & Menzel, 1992; Naka & Rushton, 1966):

E = P / (P + 1) 
Calculation of color loci in color space: Color discrimination
To calculate the discriminability of flower colors in color space, we used the color hexagon. Bee color discrimination can be well explained by assuming that receptor signals are processed using a two-dimensional (2-D) color opponent space, although behavioral (Chittka et al., 1992) as well as physiological (Yang, Lin, & Hung, 2004) evidence is ambiguous as to the precise nature of the opponent processes. For this reason it is advisable to use a generalized representation of color opponency that does not hinge critically on a particular set of color opponent mechanisms (Chittka et al., 1992). It can be shown by simple geometry that a projection of the cubical receptor excitation space onto two dimensions, which yields a hexagon, provides the desired representation of color opponency. Points within this hexagonal color space are defined by constant difference between receptor signals, which is just the sort of algorithm performed by the color opponent system in the bee's brain. 
The hexagon coordinates of any color are obtained by inserting the receptor excitations derived above into the following equations (both of which define color-opponent axes of the color hexagon):

x = (√3 / 2) (E_G − E_UV)

y = E_B − 0.5 (E_UV + E_G)

where E_UV, E_B, and E_G are the excitations of the UV, blue, and green receptors, respectively. 
Color distance in the color hexagon is determined using a Euclidean metric, so that for two color stimuli with coordinates x1, y1 and x2, y2, the color distance D is:

D = √((x1 − x2)² + (y1 − y2)²) 
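Putting the excitation transform and the hexagon geometry together, a minimal sketch consistent with the formulas above:

```python
import math

def excitation(P):
    """Receptor excitation E from quantum catch P, normalized to a maximum of unity."""
    return P / (P + 1.0)

def hexagon_coordinates(E_uv, E_b, E_g):
    """Color hexagon coordinates from the UV, blue, and green receptor excitations."""
    x = (math.sqrt(3) / 2.0) * (E_g - E_uv)
    y = E_b - 0.5 * (E_uv + E_g)
    return x, y

def hexagon_distance(locus1, locus2):
    """Euclidean color distance between two loci in the color hexagon."""
    return math.hypot(locus1[0] - locus2[0], locus1[1] - locus2[1])
```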
Color discrimination between a trained (rewarding) color and a different (unknown or less rewarding) color does not follow a linear function. The probability of selecting an encountered flower color, depending on its similarity to the learnt flower color, is determined by Pdiscrim (Chittka et al., 2001; Figure 5). 
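A sketch of Pdiscrim as a cumulative Weibull with the parameters given in the Figure 5 caption; the assignment of λ as scale and k as shape is an assumption of this sketch:

```python
import math

LAM = 2.2  # Weibull parameter from the Figure 5 caption (taken here as the scale)
K = 0.23   # Weibull parameter from the Figure 5 caption (taken here as the shape)

def p_discrim(color_distance):
    """Cumulative Weibull used as the discrimination probability Pdiscrim.

    Increases with the hexagon color distance between the memorized and the
    encountered flower color; in the simulation this probability governs
    whether the bee stays with a flower color or switches (Figure 2).
    """
    return 1.0 - math.exp(-((color_distance / LAM) ** K))
```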
Figure 5
 
Flower constancy as a function of color distance between natural flower colors as experimentally determined under field conditions for several bee species (Chittka et al., 2001). The curve is a cumulative Weibull distribution (λ = 2.2, k = 0.23) generated in Mathematica© (a statistical modeling tool) in order to generate random numbers (i.e., color discrimination level) given this distribution. This discrimination ability is Pdiscrim as employed by the bee agent in Figure 2. The probability determines if the bee will switch to another flower color or continue to remain faithful to it, based on the color distance (i.e., color units on the color hexagon) between the two flowers.
Retinex color constancy algorithms
Beyond simple calibration by receptor adaptation, performance was measured for two computational color constancy techniques: the White patch (brightest patch) algorithm (Land & McCann, 1971) and the Gray world assumption (Buchsbaum, 1980; Helson, 1964; Land, 1986). To apply a computational color constancy method to the scene that the bee has encountered, the scene is transformed using the respective color constancy function each time the bee attempts to make a decision between flowers within its visual field. The scene is a segment of the simulated meadow made up of the cells within the visual field of the bee agent as it forages. Each cell has a reflectance spectrum, either a floral color or green foliage, illuminated by the training or testing illuminants (Figure 6). This 2-D scene made up of flower colors and green foliage is processed through either the White patch retinex or the Gray world method each time the bee agent encounters flowers around its current position. 
Figure 6
 
The efficiency of different color constancy algorithms in compensating for illumination shifts. The figure shows an example of a scene (i.e., visual field, defined here as 14 × 14 celled map around the bee) consisting of five different flower colors encountered by a bee agent in the simulation, remapped onto human vision in RGB. The excitation values of the bees' UV, B, and G receptors range from 0–1, and are mapped to red, blue, and green, respectively, ranging from 0–255. Colored squares represent flowers, and gray squares green foliage. The scene is the visual field around the location of the bee, and consists of all flowers in r, the radius of the visual field. It contains five flower species under daylight (left column) and forest shade (right column). The change of appearance is shown for (a) von Kries adaptation only; (b) White patch algorithm; (c) Gray world algorithm. For this scene and this set of five flower species, the White patch algorithm (b, middle row) performs best, since the colors are well spread out within scenes and also change the least between scenes.
A visual representation, accessible to human observers, of how the bee agent's scene is transformed by these computational methods is presented in Figure 6. One example scene shows how the 14 × 14 celled map around a bee agent's location, consisting of a random distribution of five floral species, is transformed with the respective color constancy mechanisms. The excitation levels of the bees' UV, blue, and green photoreceptors, ranging from 0 to 1, are mapped to the human trichromatic RGB values that range from 0–255 in digital images (Gonzalez & Wintz, 1977): the bees' short wavelength (UV) receptor response is mapped to the shortwave component of the RGB model (B), the bees' middle wavelength (B) receptor is mapped to the middle wavelength component of RGB (G), and the bees' long wavelength (G) receptor response maps to the R component of the RGB model. 
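This receptor-to-display mapping might be sketched as follows:

```python
def ubg_to_rgb(E_uv, E_b, E_g):
    """Map bee receptor excitations (0-1) onto an (R, G, B) triple (0-255) for display.

    As described above: UV maps to the blue channel, bee blue to the green
    channel, and bee green to the red channel.
    """
    def to_byte(e):
        return int(round(max(0.0, min(1.0, e)) * 255))
    return (to_byte(E_g), to_byte(E_b), to_byte(E_uv))
```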
White patch color constancy algorithm
The White Patch retinex algorithm assumes that the brightest point in a scene is white, so that all other colors can be placed in the context of this reference (Land, 1964). In digital image processing, this is achieved by finding the brightest pixel level (the highest response at any location in the image) and assuming that it is white (Ebner, 2007). Computationally, the white patch is the maximum intensity in each of the UV, B, and G receptor channels, and this maximum is taken as the estimated illuminant. The scene then undergoes a transformation using the estimated illuminant as a chromatic adaptation. The simplest computational version is to find the maximal intensity in each receptor channel (Ebner, 2007):

L_i,max = max over (x, y) of c_i(x, y) 
Here, c_i(x, y) represents the response of receptor type i (i.e., UV, B, or G) at location (x, y) in the scene. The maximum L_i,max is the maximum receptor response to the reflected light c_i, which is determined by the illuminant L_i and the object reflectance R_i at each point in the scene:

c_i(x, y) = L_i R_i(x, y) 
This maximum value in each channel is used as the estimate of the illuminant, which is then used to scale the perceived reflectances of all cells in the scene:

R_i(x, y) = c_i(x, y) / L_i,max 
An example of the efficiency of the White patch algorithm in calibrating a visual scene is shown in Figure 6b. 
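A minimal sketch of the White patch calibration applied to such a scene array:

```python
import numpy as np

def white_patch(scene):
    """White patch calibration of a scene.

    `scene` is an array of shape (rows, cols, 3) holding the UV, B, and G
    signals of every cell in the bee's visual field. The per-channel maximum
    is taken as the illuminant estimate L_i,max and used to rescale the scene.
    """
    L_max = scene.reshape(-1, 3).max(axis=0)  # brightest response per channel
    L_max = np.where(L_max > 0, L_max, 1.0)   # guard against an all-zero channel
    return scene / L_max
```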
Gray world color constancy algorithm
The Gray world algorithm (Buchsbaum, 1980) assumes that, on average, the color of the scene is achromatic, so the average color in the scene is used to estimate the illuminant (Ebner, 2007; Gonzalez & Wintz, 1977). In the first step, the average response a_i of each receptor type (UV, B, and G) over the viewed scene is computed:

a_i = (1 / N) Σ over (x, y) of c_i(x, y)

where N is the number of cells in the scene. 
If the averages a_i are equal across the receptor types (i = UV, B, or G), the visual scene already satisfies the Gray world assumption. If the average response of one receptor type is much lower than that of the others, the algorithm increases the influence of that receptor type (Ebner, 2007). The same scaling transformation as in the White patch algorithm is then applied, except that the per-channel maxima are replaced by the average receptor responses a_i in UV, B, and G. The mapping of the UBG response to the RGB system is shown in Figure 6c for the Gray world assumption. 
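The corresponding Gray world sketch, differing from the White patch version only in using per-channel means instead of maxima (the absence of a further normalization constant is a simplification of this sketch):

```python
import numpy as np

def gray_world(scene):
    """Gray world calibration of a scene.

    The per-channel mean a_i of the UV, B, and G signals is taken as the
    illuminant estimate; dividing by it pushes the scene average toward gray.
    """
    a = scene.reshape(-1, 3).mean(axis=0)  # average response per channel
    a = np.where(a > 0, a, 1.0)            # guard against an all-zero channel
    return scene / a
```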
Reference points: Bee with perfect color constancy, no color constancy, and color-blind bee
To provide lower and upper reference points for the agent-based model, three extreme models of vision were used to evaluate the performance of the color constancy methods: a color-blind bee, a bee with no color constancy, and a bee with perfect color constancy. A color-blind bee forages from all five flower species indiscriminately, as if they were members of the same species. Thus it can make adaptive spatial foraging movements, but it cannot choose the most rewarding species by color. A “perfect color-constancy bee” makes no mistakes induced by changes in illumination; it experiences no perceptual color shift, while it still makes the usual color discrimination errors based on its ability to discriminate colors, i.e., similar colors are confused with a certain probability, but this probability remains independent of illumination changes (Chittka et al., 2001). Finally, a no color-constancy bee is simulated by holding the sensitivity factor R fixed at its value for the D65 daylight illuminant while the illuminant is varied (Dyer, 1998, 1999; Dyer & Chittka, 2004). 
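The no color-constancy reference bee can be sketched by freezing the sensitivity factor at its daylight value (reusing the spectra conventions from the quantum catch sketch above):

```python
import numpy as np

def sensitivity_factor(background_refl, illuminant, sensitivity):
    """Von Kries sensitivity factor R for a given illuminant (cf. Methods above)."""
    return 1.0 / np.sum(background_refl * illuminant * sensitivity)

def quantum_catch_no_constancy(stimulus_refl, test_illuminant, sensitivity,
                               background_refl, d65):
    """'No color constancy' reference bee: R stays frozen at its D65 daylight
    value even though the stimulus is now viewed under a different illuminant."""
    R_d65 = sensitivity_factor(background_refl, d65, sensitivity)
    return R_d65 * np.sum(stimulus_refl * test_illuminant * sensitivity)
```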
Results
The spread of floral color loci in color space, as well as their dislocation under changing illumination conditions, depends strongly on the color constancy algorithm that is implemented. To illustrate the nature of the color shift for the various conditions tested here, Figure 7 shows the color loci in bee color space assuming either no calibration, von Kries photoreceptor adaptation, or the White patch and Gray world assumptions for one example set of five natural flower colors. Without any mechanism of color correction (top row), illumination changes result in large displacements of color loci, to such an extent that the area generated by the loci of one flower species under various illuminants can overlap with that of another flower species (see loci on the bottom right of the top panels). Such overlap indicates that one flower species may be taken for another after an illumination change, an undesirable scenario for a color vision system. The spread of color loci is appreciably smaller when one assumes a correction by a von Kries adaptation mechanism (second row). Both the White patch and Gray world algorithms provide a combination of relatively small color shift under various illumination conditions, as well as a good spread of color loci between the different flower species. 
Figure 7
 
Color loci in bee color space and compensation of color shift for four illumination conditions, assuming different color constancy algorithms. The Figure displays color loci for one set of five flower species as an example, i.e., Vicia cracca (clear circle), Lythrum salicaria (cross), Lathyrus pratensis (clear square), Cirsium oleraceum (filled square), and Lotus corniculatus (filled circle). Color loci are shown for four natural lighting conditions (Endler, 1993)—D: norm function D65 (daylight); FS: forest shade; WS: woodland shade; SG: small forest gaps. Concentric circles are at distances of 0.1 hexagon units. Insets (left) show extended views of the rectangles within the color hexagons on the right, to allow for more detailed visual inspection of color loci and their shifts.
Beyond this single example of one set of five flower species, we analyzed quantitatively the quality of the same color correction mechanisms for a larger sample of such sets, in order to ensure that results are robust and do not hinge critically on just one set of flower colors. Figure 8 shows the average nectar collection for a variety of color vision and color constancy systems under changes of illumination from standard daylight function D65 to three other illuminants. The figure allows a comparison between the performance of the various computational color constancy methods tested here for 100 sets of five randomly selected flower species viewed under realistic illumination changes. It is noteworthy that even a color-blind bee, which collects nectar from five flower species without discrimination as if they were members of the same species, collects a reasonable amount of nectar (M = 171.3, SE = 1.1), i.e., only 17% less than a bee with perfect color constancy (M = 206.5 ± 3.1). This is because all flower species in our simulation contain some nectar, although there are of course pronounced differences in their quality (Raine & Chittka, 2007a). We use these two hypothetical systems (color-blind bee and a bee with perfect color constancy) as benchmarks, with the latter having a 100% improvement over the former. 
Figure 8
 
Average (± SE) nectar collected by a bee agent under changes of illumination from D65 daylight to forest shade, small gap light, or woodland shade, where each change in illumination from D65 daylight is simulated 100 times in meadows of five randomly selected flower species by new bee agents. Perfect color constancy is assigned 100% color constancy improvement over a color-blind bee. Percentages for the other color visual systems indicate the increase in nectar collection performance of each color constancy method over the color-blind agent. These percentages are shown above the columns. Significance levels (Dunn's Multiple Comparisons Test): *p < 0.05; **p < 0.01; ***p < 0.001.
A bee equipped with trichromatic color vision but no color constancy (M = 178.8 ± 1.5 μl) performed only ∼20% better than a color-blind bee (relative to a maximum improvement of 100% for perfect color constancy), and although this improvement is significant at the 5% level, the qualitative change is moderate. This demonstrates that without a suitable correction mechanism in conditions of changing illumination, color vision is only of limited value. A simple von Kries photoreceptor adaptation mechanism resulted in a further improvement of 15% in nectar collection (M = 184.0 ± 3.0 μl). The most powerful recovery of reflectance in the face of changing illumination was generated by a combination of von Kries photoreceptor adaptation and a White Patch calibration (∼30% improvement relative to a bee without color constancy; M = 190.6 ± 2.5 μl), closely followed by the Gray world condition (M = 190.0 ± 2.5 μl). While these two mechanisms did not translate into significantly different nectar collection performances (Figure 8), they substantially (and for most comparisons, significantly) outperformed the color-blind bee, the color vision system without color correction, as well as the bee equipped with only von Kries receptor adaptation. This shows that there is substantial adaptive value to color constancy under biologically realistic conditions. However, it is also remarkable that none of the correction mechanisms employed here resulted in performance anywhere near perfect color constancy, which is still ∼45% better than the best color constancy algorithm that we tested (p < 0.001 for all models compared with perfect color constancy). 
Figure 7 shows that the quality of different color constancy algorithms hinges both on minimizing color shifts in conditions of changing illumination, and on the spread of color loci that correspond to different objects (flowers in this case). To disentangle these two factors, and to ensure that the superior performance of the White patch and Gray world algorithms was based on minimizing color shift, we re-evaluated the bee agents' performance for a limited range of color distances, taking into account only pairs of flower colors within a narrow range of color distances. Only nectar collected in meadows whose five randomly chosen flower species have an average perceptual color distance of 0.1–0.2 hexagon units amongst each other is considered, to ensure that color constancy performance reflects perceptual color shift and not perceptual color distance. Under such conditions, the White patch algorithm (with a nectar harvest of M = 191.5, SE = 2.7) still significantly outperforms a bee with no color constancy (M = 172.4, SE = 1.2; Dunn's multiple comparisons test; p < 0.001) as well as a bee equipped with only von Kries receptor adaptation (M = 178.1, SE = 1.7; p < 0.001), showing that this algorithm's superiority is not just based on color discrimination (via maximizing color distances), but on minimizing color shift under changing illumination conditions. 
Discussion
Our model demonstrates the biological usefulness of various computational color constancy methods, as well as receptor adaptation mechanisms, in resolving color ambiguity under changes of illumination, where the bee uses its color vision to perform a color choice task that solves a real-world problem it faces. The results also highlight the importance of target surround and scene content for bees to achieve color constancy (Lotto & Wicklein, 2005; Werner et al., 1988). A variety of computational color constancy mechanisms use whole-scene analysis to estimate the illuminant, or use ensemble statistics to estimate surface reflectance (Linnell & Foster, 2002; Smithson & Zaidi, 2004). Computational color constancy mechanisms have not previously been assessed in terms of the biological significance of the subject correctly making color choices under biologically realistic conditions. Our simulations quantify, for each assumed color constancy mechanism, the amount of reward collected under changes of illumination, and thus how well each color vision model actually performs. Since our results were determined using a large variety of natural object color combinations (rather than only a few selected scenes), and a variety of illumination spectra, they are likely robust and not dependent on a particular set of colors. This study shows that computational color constancy mechanisms that make use of scene statistics achieve substantially improved color constancy compared to a von Kries receptor adaptation mechanism alone. Our study quantifies the biological significance of color constancy for foraging bees, compared to no color constancy, or no color vision at all. 
Overall, the advantages generated by various color vision systems, compared to a hypothetical color-blind bee, are surprisingly moderate, in that even a system with perfect color constancy would perform only ∼17% better than one with no color vision at all. However, empirical results show, and our models confirm, that bee color constancy is not perfect (Dyer & Chittka, 2004; Neumeyer, 1981b; Werner et al., 1988), and so the advantages predicted by realistic color constancy algorithms are only about half those of a hypothetical perfect color constancy system (Figure 8). 
Flower colors and bee color vision make an exceptionally useful model to study color constancy, because for pollinating insects, more than for most animals, accurate color choices need to be made throughout their daily foraging activities. This is because forager bees collect practically all the nutrition needed for their nest from flowers, and, unlike many other animals, spend no time on other activities such as mate search (Heinrich, 1979). Thus the quantitative benefits of various components of color vision (including constancy) are likely to be more pronounced in bees than in most other animals. In this view, the advantages gained by color constancy, as measured in our study, are likely to be at the upper end of those found across animal species. This is all the more so since the random distributions of flowers in space used here necessitate frequent decisions between flower species—real plants are often aggregated in space, and pollinators therefore do not have to make accurate color choices as frequently as in a random distribution of multiple flower species. Thus, the differences in the quality of various color vision and color constancy algorithms are likely to be even smaller in natural conditions than in our modeling. Nonetheless, even a few percent of improvement in foraging performance, mediated by a particular mechanism of color correction under changes of illumination, might still provide a fitness advantage over the duration of an entire foraging career. 
Agent-based modeling is especially useful to explore the quantitative benefits of sensory and cognitive mechanisms under biologically realistic conditions (Dornhaus et al., 2006). An alternative is to measure the quantitative benefits of various algorithms mathematically—for example, simply measuring the amount of color shift in conditions of changing illumination depending on the nature of the calibration mechanism. This might produce a rank order of the quality of various color constancy corrections similar to the one obtained here, but it would not allow us to measure quantitatively the benefits of such algorithms based on realistic distributions of flower colors in space, associated with real rewards in terms of nectar quality, and using bees' empirically determined foraging rules. Our approach therefore allows a much more accurate assessment of the advantages of color constancy in the economy of nature. 
Supplementary Materials
Acknowledgments
Samia Faruq was supported by a PhD studentship from the EPSRC (Engineering and Physical Sciences Research Council), UK. 
Commercial relationships: none. 
Corresponding author: Lars Chittka. 
Email: l.chittka@qmul.ac.uk. 
Address: School of Biological and Chemical Sciences, Queen Mary University of London, London, United Kingdom. 
References
Arnold S. E. J. Faruq S. Savolainen V. McOwan P. W. Chittka L. (2010). FReD: The Floral Reflectance Database—A web portal for analyses of flower color. PloS ONE, 5 (12), e14287, doi:10.1371/journal.pone.0014287.
Brainard D. H. Kraft J. M. Longère P. (2003). Colour constancy: Developing empirical tests of computational models. In Mausfeld R. Heyer D. (Eds.), Colour perception: Mind and the physical world (pp. 307–328). Oxford, UK: Oxford University Press.
Brainard D. H. Longere P. Delahunt P. B. Freeman W. T. Kraft J. M. Xiao B. (2006). Bayesian model of human color constancy. Journal of Vision, 6 (11): 10, 1267–1281, http://www.journalofvision.org/content/6/11/10, doi:10.1167/6.11.10. [PubMed] [Article] [CrossRef]
Brainard D. H. Wandell B. A. (1986). Analysis of the retinex theory of color vision. Journal of the Optical Society of America A, 3, 1651–1661. [CrossRef]
Briscoe A. Chittka L. (2001). The evolution of colour vision in insects. Annual Review of Entomology, 46, 471–510. [CrossRef] [PubMed]
Buchsbaum G. (1980). A spatial processor model for object colour perception. Journal of the Franklin Institute, 310, 1–26. [CrossRef]
Chittka L. (1996). Optimal sets of colour receptors and opponent processes for coding of natural objects in insect vision. Journal of Theoretical Biology, 181, 179–196. [CrossRef]
Chittka L. Beier W. Hertel H. Steinmann E. Menzel R. (1992). Opponent color coding is a universal strategy to evaluate the photoreceptor inputs in Hymenoptera. Journal of Comparative Physiology A, 170, 545–563.
Chittka L. Gumbert A. Kunze J. (1997). Foraging dynamics of bumble bees: Correlates of movements within and between plant species. Behavioral Ecology, 8, 239–249. [CrossRef]
Chittka L. Menzel R. (1992). The evolutionary adaptation of flower colours and the insect pollinators' colour vision. Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, & Behavioral Physiology, 171, 171–181. [CrossRef]
Chittka L. Shmida A. Troje N. Menzel R. (1994). Ultraviolet as a component of flower reflections, and the colour perception of Hymenoptera. Vision Research, 34, 1489–1508. [CrossRef] [PubMed]
Chittka L. Spaethe J. Schmidt A. Hickelsberger A. (2001). Adaptation, constraint, and chance in the evolution of flower color and pollinator color vision. In Chittka L. Thomson J. D. (Eds.), Cognitive ecology of pollination (pp. 106–126). Cambridge: Cambridge University Press.
Chittka L. Thomson J. D. (1997). Sensori-motor learning and its relevance for task specialization in bumble bees. Behavioral Ecology & Sociobiology, 41, 385–398. [CrossRef]
Dornhaus A. Klügl F. Oechslein C. Puppe F. Chittka L. (2006). Benefits of recruitment in honey bees: Ecology and colony size. Behavioral Ecology, 17, 336–344. [CrossRef]
Dyer A. G. (1998). The colour of flowers in spectrally variable illumination and insect pollinator vision. Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, & Behavioral Physiology, 183, 203–212. [CrossRef]
Dyer A. G. (1999). Broad spectral sensitivities in the honeybee's photoreceptors limit colour constancy. Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, & Behavioral Physiology, 185, 445–453. [CrossRef]
Dyer A. G. Chittka L. (2004). Biological significance of discriminating between similar colours in spectrally variable illumination: Bumblebees as a study case. Journal of Comparative Physiology A, 190, 105–114. [CrossRef]
Ebner M. (2007). Color constancy. Chichester: John Wiley.
Endler J. A. (1993). The color of light in forests and its implications. Ecological Monographs, 63, 1–27. [CrossRef]
Fairchild M. D. Reniff L. (1995). Time-course of chromatic adaptation for color-appearance judgments. Journal of the Optical Society of America A: Optics, Image Science, & Vision, 12, 824–833. [CrossRef]
Gonzalez R. C. Wintz P. A. (1977). Digital image processing. Reading, MA: Addison-Wesley Pub.
Greggers U. Menzel R. (1993). Memory dynamics and foraging strategies of honeybees. Behavioral Ecology & Sociobiology, 32, 17–29. [CrossRef]
Hansen T. Olkkonen M. Walter S. Gegenfurtner K. R. (2006). Memory modulates color appearance. Nature Neuroscience, 9, 1367–1368. [CrossRef] [PubMed]
Heinrich B. (1979). Bumblebee economics. Cambridge: Harvard University Press.
Helson H. (1964). Adaptation-level theory. New York: Harper & Row.
Hurlbert A. C. (1998). Computational models of colour constancy. In Walsh V. Kulikowski J. (Eds.), Perceptual constancy: Why things look as they do (pp. 283–322). Cambridge, UK: Cambridge University Press.
Ingle D. J. (1985). The goldfish as a retinex animal. Science, 227, 651–654. [CrossRef] [PubMed]
Ives H. E. (1912). The relation between the color of the illuminant and the color of the illuminated object. Transactions of Illuminating Engineering Society, 7, 62–72.
Kraft J. M. Brainard D. H. (1999). Mechanisms of color constancy under nearly natural viewing. Proceedings of the National Academy of Sciences, USA, 96, 307–312. [CrossRef]
Land E. H. (1959a). Color vision and the natural image. Part I. Proceedings of the National Academy of Sciences, USA, 45, 115–129. [CrossRef]
Land E. H. (1959b). Color vision and the natural image. Part II. Proceedings of the National Academy of Sciences, USA, 45, 636–644. [CrossRef]
Land E. H. (1959c). Experiments in color vision. Scientific American, 45, 84–99. [CrossRef]
Land E. H. (1964). The Retinex. American Scientist, 52, 247–264.
Land E. H. (1977). The retinex theory of color vision. Scientific American, 237 (6), 108–128. [CrossRef] [PubMed]
Land E. H. (1983). Recent advances in retinex theory and some implications for cortical computations: color vision and the natural image. Proceedings of the National Academy of Sciences, USA, 80, 5163–5169. [CrossRef]
Land E. H. (1986). Recent advances in Retinex theory. Vision Research, 26, 7–21. [CrossRef] [PubMed]
Land E. H. McCann J. J. (1971). Lightness and retinex theory. Journal of the Optical Society of America, 61, 1–11. [CrossRef] [PubMed]
Laughlin S. (1981). A simple coding procedure enhances a neuron's information capacity. Zeitschrift für Naturforschung C, 36, 910–912.
Ling Y. Hurlbert A. (2008). Role of color memory in successive color constancy. Journal of the Optical Society of America A, 25, 1215–1226. [CrossRef]
Linnell K. J. Foster D. H. (2002). Scene articulation: Dependence of illuminant estimates on number of surfaces. Perception, 31, 151–159. [CrossRef] [PubMed]
Lotto R. B. Chittka L. (2005). Seeing the light: Illumination as a contextual cue to color choice behavior in bumblebees. Proceedings of the National Academy of Sciences, USA, 102, 3852–3856. [CrossRef]
Lotto R. B. Wicklein M. (2005). Bees encode behaviorally significant spectral relationships in complex scenes to resolve stimulus ambiguity. Proceedings of the National Academy of Sciences, USA, 102, 16870–16874. [CrossRef]
Maloney L. T. Wandell B. A. (1986). Color constancy: A method for recovering surface spectral reflectance. Journal of the Optical Society of America A, 3, 29–33. [CrossRef]
Mazokhin-Porshnjakov G. A. (1966). Recognition of colored objects by insects. Oxford: Pergamon Press.
Menzel R. Backhaus W. (1991). Color vision in insects. In Gouras P. (Ed.), Vision and visual dysfunction: The perception of colour (Vol. 6, pp. 262–293). London: Macmillan Press.
Naka K. I. Rushton W. A. H. (1966). S-potentials from luminosity units in the retina of fish (Cyprinidae). Journal of Physiology, 185, 587–599. [CrossRef] [PubMed]
Neumeyer C. (1980). Simultaneous color contrast in the honeybee. Journal of Comparative Physiology, 139, 165–176. [CrossRef]
Neumeyer C. (1981). Chromatic adaptation in the honeybee: Successive color contrast and color constancy. Journal of Comparative Physiology, 144, 543–553. [CrossRef]
Peitsch D. Fietz A. Hertel H. Souza J. Ventura D. F. Menzel R. (1992). The spectral input systems of hymenopteran insects and their receptor-based colour vision. Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, & Behavioral Physiology, 170, 23–40. [CrossRef]
Pyke G. H. (1981). Honeyeater foraging: A test of optimal foraging theory. Animal Behaviour, 29, 878–888. [CrossRef]
Raine N. E. Chittka L. (2007a). Nectar production rates of 75 bumblebee-visited flower species in a German flora (Hymenoptera: Apidae: Bombus terrestris). Entomologia Generalis, 30, 191–192. [CrossRef]
Raine N. E. Chittka L. (2007b). The adaptive significance of sensory bias in a foraging context: Floral color preferences in the bumblebee Bombus terrestris. PLoS ONE, 2 (6), e556, doi:10.1371/journal.pone.0000556.
Skorupski P. Chittka L. (2011). Is colour cognitive? Optics & Laser Technology, 43, 251–260. [CrossRef]
Smithson H. Zaidi Q. (2004). Colour constancy in context: Roles for local adaptation and levels of reference. Journal of Vision, 4 (9): 3, 693–710, http://www.journalofvision.org/content/4/9/3, doi:10.1167/4.9.3. [PubMed] [Article] [CrossRef]
Viswanathan G. M. Raposo E. P. da Luz M. G. E. (2008). Lévy flights and superdiffusion in the context of biological encounters and random searches. Physics of Life Reviews, 5, 133–150. [CrossRef]
von Kries J. (1905). Die Gesichtsempfindungen. In Nagel W. (Ed.), Handbuch der Physiologie des Menschen (Vol. 3, pp. 109–282). Braunschweig: Vieweg.
Werner A. Menzel R. Wehrhahn C. (1988). Color constancy in the honeybee. Journal of Neuroscience, 8, 156–159. [PubMed]
Werner A. Sharpe L. T. Zrenner E. (2000). Asymmetries in the time-course of chromatic adaptation and the significance of contrast. Vision Research, 40, 1101–1113. [CrossRef] [PubMed]
Wilensky U. (1999). NetLogo. Evanston, IL: Northwestern University.
Wyszecki G. Stiles W. S. (1982). Color science: Concepts and methods, quantitative data and formulae (2nd ed.). New York: Wiley.
Yang E. C. Lin H. C. Hung Y. S. (2004). Patterns of chromatic information processing in the lobula of the honeybee, Apis mellifera L. Journal of Insect Physiology, 50, 913–925. [CrossRef] [PubMed]
Zimmerman M. (1981). Patchiness in the dispersion of nectar resources: Probable causes. Oecologia, 49, 154–157. [CrossRef]
Figure 1
 
Spectral sensitivity functions of the UV, B, and G photoreceptors in the honeybee (Apis mellifera) eye, as determined by intracellular recordings (Peitsch et al., 1992). The functions are here normalized to a maximum of unity; in reality, absolute sensitivity can differ between receptor types by more than an order of magnitude, as a result of photoreceptor adaptation. Adapted from figure 6A in Peitsch et al. (1992) with permission from Springer/Rightslink.
Figure 2
 
Bee color choice behavior in an agent-based modeling environment, based on a flower-constant foraging strategy. The bee agent begins by searching for flowers in a "patch" in its vicinity (flowers within its visual field). The agent remains in a move-and-search state, moving across the grid, until flowers are found. It then decides whether or not to forage on a given flower based on the other flowers available in the patch: the flowers in the patch are compared with the memory of the most rewarding flower color, and the bee stays with this flower type or switches to a different one depending on the perceived similarity between the memorized and the actually encountered flower color.
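To make the decision flow in Figure 2 concrete, the following minimal Python sketch illustrates a single choice step under simplifying assumptions: flowers in the visual field are given as (color-hexagon locus, nectar reward) pairs, and a discrimination function p_discrim maps color distance to the probability of treating a flower as different from the memorized color (see the sketch following Figure 5). All names are illustrative; this is a schematic fragment, not the authors' agent-based model code (cf. Wilensky, 1999).

```python
import math
import random

def choose_flower(memorized_locus, flowers, p_discrim):
    """Return the first flower in the visual field that the bee accepts as
    matching its memorized color, or None to keep moving and searching.
    `flowers` is a list of (hexagon_locus, nectar_reward) pairs."""
    for locus, reward in flowers:
        d = math.dist(memorized_locus, locus)   # color distance in hexagon units
        if random.random() < p_discrim(d):
            continue                            # perceived as a different color: skip
        return locus, reward                    # land and forage on this flower
    return None

# Illustrative use with a placeholder discrimination function.
memory = (0.05, 0.45)
patch = [((0.06, 0.44), 2.0), ((0.30, -0.10), 5.0)]
print(choose_flower(memory, patch, p_discrim=lambda d: min(1.0, 4.0 * d)))
```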
Figure 3
 
Normalized irradiance functions of the types of illumination used. Spectral distribution of daylight D65 (Wyszecki & Stiles, 1982), forest shade, woodland shade, and light filtered through small canopy gaps, all under sunny conditions (Endler, 1993), normalized to a maximum of unity. The lights are differently intense in different wavelength domains—for example, forest shade light is most intense around 550 nm, and so the light is green, whereas woodland shade is dominated by wavelengths in the 400–450 nm range, and thus appears bluish. Adapted from figure 1 in Chittka (1996) with permission from Elsevier/Rightslink and data from figures 6 through 8 in Endler (1993) with permission from Ecological Society of America.
Figure 4
 
Color loci of 1,572 flower colors in the color hexagon, and color shift under an illumination change. In this color space, angular position (as measured from the center, the uncolored point) corresponds to bee-subjective hue, so that color loci in the top corner indicate bee blue; top right corner: bee blue-green; bottom right corner: bee green, and so forth. The distance between two color loci corresponds to their discriminability. Large color shifts might corrupt the identification of flower species by color. The distance from the center to any of the corners is 1, and circles indicate distances from the center at steps of 0.1. Straight lines represent, for each flower plotted, the perceptual color shift from daylight D65 (dot end; Wyszecki & Stiles, 1982) to forest shade lighting (tip end; Endler, 1993), assuming von Kries receptor adaptation and no further correction. Note that shifts in different areas of color space occur predominantly in different directions. Shifts appear especially pronounced in the blue-green and UV-green areas of color space, and less so in the green and UV-blue regions.
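As a minimal sketch of how such loci and shift vectors can be computed, the fragment below assumes the standard color-hexagon mapping x = (√3/2)(EG − EUV), y = EB − (EUV + EG)/2 for receptor excitations in the range 0 to 1; the example excitation triplets are invented for illustration and do not correspond to any particular flower.

```python
import math

def hexagon_locus(e_uv, e_b, e_g):
    """Map UV, blue, and green receptor excitations (each in 0-1) onto
    color-hexagon coordinates; the center (0, 0) is the uncolored point
    and each corner lies at distance 1 from it."""
    x = (math.sqrt(3) / 2.0) * (e_g - e_uv)
    y = e_b - 0.5 * (e_uv + e_g)
    return (x, y)

def shift_length(excitations_daylight, excitations_shade):
    """Length (hexagon units) of the perceptual shift of one flower
    between two illuminants."""
    return math.dist(hexagon_locus(*excitations_daylight),
                     hexagon_locus(*excitations_shade))

# Hypothetical excitations of one flower under D65 and under forest shade.
print(round(shift_length((0.30, 0.80, 0.55), (0.25, 0.70, 0.65)), 3))
```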
Figure 5
 
Flower constancy as a function of color distance between natural flower colors, as experimentally determined under field conditions for several bee species (Chittka et al., 2001). The curve is a cumulative Weibull distribution (λ = 2.2, k = 0.23), implemented in Mathematica© (a statistical modeling tool) to generate random numbers (i.e., color discrimination levels) from this distribution. This discrimination ability is Pdiscrim as employed by the bee agent in Figure 2: based on the color distance (in color hexagon units) between two flowers, the probability determines whether the bee switches to another flower color or remains faithful to the memorized one.
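In the common scale/shape notation, the cumulative Weibull distribution is P(d) = 1 − exp(−(d/scale)^shape). How the caption's λ = 2.2 and k = 0.23 map onto scale and shape is not restated here, so the sketch below keeps both as explicit arguments; the resulting function can be passed as p_discrim to the choice-step sketch given after Figure 2.

```python
import math

def cumulative_weibull(d, scale, shape):
    """P(discriminate) for color distance d (hexagon units), using the
    standard Weibull CDF 1 - exp(-(d / scale)**shape)."""
    return 1.0 - math.exp(-((d / scale) ** shape))

# One possible assignment of the caption's parameters (an assumption):
p_discrim = lambda d: cumulative_weibull(d, scale=2.2, shape=0.23)
print(round(p_discrim(0.1), 2))
```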
Figure 6
 
The efficiency of different color constancy algorithms in compensating for illumination shifts. The figure shows an example of a scene (i.e., the visual field, defined here as a 14 × 14-cell map around the bee) consisting of five different flower colors encountered by a bee agent in the simulation, remapped onto human vision in RGB. The excitation values of the bees' UV, B, and G receptors range from 0 to 1 and are mapped to red, blue, and green, respectively, ranging from 0 to 255. Colored squares represent flowers, and gray squares green foliage. The scene consists of all flowers within r, the radius of the visual field, and contains five flower species under daylight (left column) and forest shade (right column). The change of appearance is shown for (a) von Kries adaptation only; (b) the White Patch algorithm; (c) the Gray World algorithm. For this scene and this set of five flower species, the White Patch algorithm (b, middle row) performs best, since the colors are well spread out within scenes and also change the least between scenes.
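For reference, the three corrections compared in Figure 6 can be sketched in their textbook forms: von Kries adaptation divides each receptor channel by its catch from an adapting background, the White Patch algorithm divides by the per-channel maximum of the scene, and the Gray World algorithm divides by the per-channel mean. How these corrections are combined with receptor transduction in the present model is not restated here; the arrays below are invented illustrations.

```python
import numpy as np

def von_kries(scene, background):
    """Scale each receptor channel by its catch from the adapting
    background (e.g., green foliage under the current illuminant)."""
    return scene / background

def white_patch(scene):
    """Scale each channel so that the largest value in the scene becomes
    the reference 'white'."""
    return scene / scene.max(axis=0)

def gray_world(scene):
    """Scale each channel so that the scene average becomes the
    reference 'gray'."""
    return scene / scene.mean(axis=0)

# Invented UV/B/G quantum catches for four patches in one scene.
scene = np.array([[0.2, 0.6, 0.4],
                  [0.1, 0.3, 0.7],
                  [0.5, 0.5, 0.5],
                  [0.3, 0.8, 0.2]])
background = np.array([0.25, 0.45, 0.60])
print(von_kries(scene, background))
print(white_patch(scene))
print(gray_world(scene))
```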
Figure 7
 
Color loci in bee color space and compensation of color shift for four illumination conditions, assuming different color constancy algorithms. The figure displays, as an example, color loci for one set of five flower species: Vicia cracca (clear circle), Lythrum salicaria (cross), Lathyrus pratensis (clear square), Cirsium oleraceum (filled square), and Lotus corniculatus (filled circle). Color loci are shown for four natural lighting conditions (Endler, 1993): D, norm function D65 (daylight); FS, forest shade; WS, woodland shade; SG, small forest gaps. Concentric circles are at distances of 0.1 hexagon units. Insets (left) show enlarged views of the rectangles within the color hexagons on the right, to allow for more detailed visual inspection of color loci and their shifts.
Figure 8
 
Average (± SE) nectar collected by a bee agent under changes of illumination from D65 daylight to forest shade, small-gap light, or woodland shade; each change in illumination from D65 daylight is simulated 100 times, by new bee agents, in meadows of five randomly selected flower species. Perfect color constancy is assigned a 100% color constancy improvement over a color-blind bee. The percentages for the other color vision systems, shown above the columns, indicate the increase in nectar collection performance of each color constancy method over the color-blind agent. Significance levels (Dunn's Multiple Comparisons Test): *p < 0.05; **p < 0.01; ***p < 0.001.
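One plausible reading of this normalization is sketched below; the nectar totals are arbitrary illustrative numbers, and the exact scaling used in the study is not restated here.

```python
def constancy_improvement(nectar_model, nectar_colorblind, nectar_perfect):
    """Improvement of a model bee over the color-blind bee, expressed as a
    percentage of the gap between the color-blind bee and a hypothetical
    bee with perfect color constancy (assumed normalization)."""
    gap = nectar_perfect - nectar_colorblind
    return 100.0 * (nectar_model - nectar_colorblind) / gap

# Arbitrary illustrative nectar totals (not data from the study).
print(round(constancy_improvement(7.4, 6.0, 11.0), 1))  # 28.0
```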