Article | January 2013

Systematic biases in adult color perception persist despite lifelong information sufficient to calibrate them

Aline Bompas, Georgie Powell, Petroc Sumner

Journal of Vision January 2013, Vol. 13(1):19. https://doi.org/10.1167/13.1.19
Abstract

Learning from visual experience is crucial for perceptual development. A central question is when this learning occurs and to what extent it compensates for changes in the visual system throughout life. To address this question, it is essential to compare human performance not only to the hypothetical state of no recalibration, but also to the ideal scenario of optimal learning given the information available from visual exposure. In the adult eye, macular pigment introduces nonhomogeneity in color filtering between the very center of vision and the periphery, which is known to introduce perceptual differences. By modeling cone responses to the spectra of everyday stimuli, we quantify the degree of calibration possible from visual exposure, and therefore the perceptual color distortion that should occur with and without recalibration. We find that perceptual distortions lie halfway between those predicted from bare adaptation and those predicted from learning, despite nearly lifelong exposure to a very systematic bias. We also show that these distortions affect real stimuli and are already robust in the near-periphery. Our findings challenge an assumption that has fueled influential accounts of vision—that the apparent homogeneity of perceived colors across the visual field in everyday life is evidence for continuous learning in perception. Since macular pigment is nearly absent at birth and reaches adult levels before age 2, we argue that the most plausible, though likely controversial, interpretation of our results is early development of color constancy across space and little recalibration afterwards.

Introduction
That we learn to perceive what is meaningful in the world, not the output of our retina, has become a central tenet of perception. Consequently, scientists no longer wonder why the world is perceived upright while its projection on our retina is upside-down, or why we do not perceive our blind spots. This idea is also dramatically illustrated by a wide range of “illusions,” in which perception seems inescapably dominated by inferences of ecological benefit in a natural setting (but misleading in the artificial context of the illusion). For example, the most plausible three-dimensional (3-D) structure of a scene is automatically perceived from two-dimensional (2-D) cues, as in the “corridor illusion” (Fineman, 1981). As vision scientists, we have learned to assume that we never see a simple representation of the retinal stimulation, but always an interpretation of it according to rules established by lifelong perceptual learning through interaction with the world (Gregory, 1997; Neisser, 1967; von Helmholtz, 1866). This tenet has led to expanding emphasis on learning, plasticity, and inference in all aspects of perception (Geisler, 2011; O'Regan, 2011; O'Regan & Noe, 2001; Pascual-Leone, Amedi, Fregni, & Merabet, 2005; Purves, Wojtach, & Lotto, 2011; Sagi, 2011). This would predict that distortions continuously present in the eye should not affect our perceptual experience of real objects, provided prior information was sufficient to allow calibration. Here we focus on a particular distortion introduced in the eye—the varying density of macular pigment between central and peripheral retina—in order to ask how well color can be compared across the visual field, what information is available to keep this comparison calibrated throughout life, and to what extent that information is used. 
Early and lifelong learning in color perception
Color perception is a long-standing focus of psychological and philosophical debate on the nature of the link between physical properties (e.g., light spectra, colored objects), physiological responses (e.g., cone responses in the retina, color-opponent neurons), and perceptual content or phenomenal quality (e.g., perceived hue) (Byrne & Hilbert, 2003; Hardin, 1993; Palmer, 1999; Thompson, Palacios, & Varela, 1992). 
One of the most studied aspects of color vision is the ability to identify the color of a surface independently of changes in the illuminant (“color constancy”; see Foster, 2011). This ability relies heavily on comparing stimuli across the visual scene, which is the main focus of the present study. Although mere early exposure to light is sufficient to achieve simple color discrimination abilities, the development of more sophisticated aspects of color perception, such as color constancy, depends more specifically on the quality of the exposure (Brenner & Cornelissen, 2005; Sugita, 2004). In Sugita's (2004) study, monkeys reared until the age of 9 months in an environment lit with monochromatic lights alternating every minute exhibited abnormal color constancy, despite normal color discrimination abilities. 
This finding is in perfect agreement with modern understanding of the development of the color pathways. The retinal mosaic of cones is largely random and slightly clumped, and the wiring of L and M cones into the opponent surrounds of the parvocellular pathway is largely unspecific (see Crook, Manookin, Packer, & Dacey, 2011, for the most recent evidence). Furthermore, V1 cells vary widely in the relative contribution from parvo- and koniocellular chromatic pathways (i.e., “red-green'” and “blue-yellow”) (De Valois, Cottaris, Elfar, Mahon, & Wilson, 2000; Landisman & Ts'o, 2002; Mullen, Dumoulin, McMahon, de Zubicaray, & Hess, 2007). Thus, without prior exposure to visual signals from the retina, neurons in visual cortex would not be able to reliably signal color, as they could not know what proportions of the three types of retinal cone receptors contribute to the center and surround of their receptive fields. Exposure to patterns of correlated visual signals is the only way for the brain to find out that a particular activity pattern in cells with foveal receptive fields represents the same color as other activity patterns in cells with receptive fields elsewhere. In other words, while it is likely that a newborn baby's visual system can detect local chromatic contrasts, it would be impossible for it to know, without experience, that a certain shade in the center was the same color as that shade viewed in periphery. 
Particularly interesting is that the monkeys in Sugita's (2004) study showed no signs of returning to normal performance even after another 9 months back in a normal environment. Furthermore, whether color constancy actually benefits from inference based on lifelong experience in natural environments is controversial (Delahunt & Brainard, 2004; Foster, 2011). Thus, although it is clear that color perception remains able to adjust to some extent to visual exposure throughout life (Belmore & Shevell, 2008; Delahunt, Webster, Ma, & Werner, 2004; McCollough, 1965; Neitz, Carroll, Yamauchi, Neitz, & Williams, 2002; Schefrin & Werner, 1990; Tseng, Gobell, & Sperling, 2004), it seems that some higher-level abilities that require comparing visual information across space could settle early in development and remain insensitive to new information available after this critical period. This highlights an important distinction between the initial calibration of a system, which always requires learning from visual experience, and ongoing recalibration, which is constrained by the neuronal plasticity of this system, the potential costs of any miscalibrations, and the quality of the information available to compensate for them. 
Color distortions across the visual field
The adult retina is not uniform with respect to spectral filtering, a key influence being the varying density of macular pigment, which filters short wavelengths in the center (Bone, Landrum, & Cains, 1992; Maxwell, 1860; Snodderly, Auran, & Delori, 1984). Changes in macular pigment density are particularly large between the center and about 5° eccentricity (Snodderly et al., 1984), about the width of this column of text at reading distance—well within the area of relatively good color discrimination and the area where attention is mostly oriented. 
In everyday life uniformly colored surfaces do not appear to vary in color with eccentricity, nor do we generally notice objects changing color whether we look directly at them or not. Such apparent homogeneity has been a direct inspiration for influential theories of vision and conscious experience that emphasize lifelong learning (Kohler, 1962; Maxwell, 1860; O'Regan & Noe, 2001). However, there is robust evidence that perceived color between the periphery and the central 2° does systematically, though subtly, differ when measured in controlled laboratory conditions, a phenomenon originally reported by Maxwell (1860) and subsequently reported in many studies (see McKeefry, Murray, & Parry, 2007; Parry, McKeefry, & Murray, 2006; Parry, Panorgias, McKeefry, & Murray, 2012, for some recent examples and related literature review; see also Magnussen, Spillmann, Sturzel, & Werner, 2004, for the separate influence of short-wave cone absence in the central 20 arcmin). Consequently, apparent homogeneity in everyday life cannot be taken as evidence that the distortion is fully recalibrated, but rather suggests that it fails to be noticed in real-life conditions thanks to inattention or known phenomena such as filling-in and change blindness (O'Regan & Levy-Schoen, 1983; Rensink, O'Regan, & Clark, 1997; Simons & Levin, 1997). 
Is color constancy across space achievable?
That a visual ability is imperfect can appear profoundly unsurprising. For instance, color constancy under changes in illuminant is often imperfect, particularly in unfamiliar lighting conditions. However, what makes this imperfection unsurprising is that the information that would be necessary for the system to achieve perfect color constancy (i.e., the spectrum of the illuminant) is largely missing. Can such an argument apply in the case of color constancy across the visual field? To answer this question, it is essential to estimate whether the information necessary to maintain good color calibration across space is available through everyday visual exposure. To interpret evidence of color biases across space in relation to lifelong learning, it is thus essential to compare observers' performance, not solely to objective measures (equality in the outside world), but also to two scenarios: one assuming no learning, the other assuming the best achievable learning from the available information. 
Before we explain this in more detail, it should be said that most of the distortion introduced by macular pigment is likely to be compensated by the adjustment of cone gain and adaptation in early visual neurons, via a normalization process to the average stimulation. Consistent with this, white is nearly invariant throughout the visual field (Webster & Leonard, 2008). Importantly, whether this effective adjustment of the white balance relies on short-term processes like von Kries adaptation (Worthey & Brill, 1986) or longer-term processes, it can ensure only that neutral colors produce a similar receptoral or neural response in center and periphery, but such normalization would be unable to equalize these responses for all colors. These normalization processes occur for many aspects of visual processing, and whether some can be described as “learning” is an open question (see Webster, 2011, for a related discussion). However, the present work mainly focuses on characterizing the more specific learning mechanisms able to achieve and possibly maintain calibration of perception across the visual field beyond what is already achieved by normalization to the average stimulation. This is what we refer to as learning or recalibration hereafter. 
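To make the limits of such normalization concrete, here is a minimal von Kries-style sketch in Python with made-up cone responses (the numbers are purely illustrative, not measurements): rescaling each cone class by its response to the white point equates white across the visual field, but does not equate a purple-like stimulus whose spectrum is filtered differently by macular pigment.

```python
import numpy as np

def von_kries_normalize(cone_response, cone_response_to_white):
    """Rescale each cone class (L, M, S) by its response to the white point."""
    return np.asarray(cone_response) / np.asarray(cone_response_to_white)

# Hypothetical L, M, S responses to a white reference and to the same purple
# surface, seen centrally (macular pigment attenuates S) and peripherally.
central_white,  peripheral_white  = [1.0, 0.8, 0.50], [1.0, 0.8, 0.70]
central_purple, peripheral_purple = [0.6, 0.4, 0.30], [0.6, 0.4, 0.45]

print(von_kries_normalize(central_white, central_white))        # [1.  1.  1.  ]
print(von_kries_normalize(peripheral_white, peripheral_white))  # [1.  1.  1.  ]
print(von_kries_normalize(central_purple, central_white))       # [0.6 0.5 0.6 ]
print(von_kries_normalize(peripheral_purple, peripheral_white)) # [0.6 0.5 ~0.64]
# White is equated by construction, but the purple stimulus still differs in S
# between center and periphery: normalization alone cannot equalize all colors.
```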
One obvious reason why distortions are not fully compensated could be that compensation is inherently limited by how systematic the distortion introduced by macular pigment is across surfaces. Specifically, when viewing a new surface for the first time in periphery, can the cone response in central vision be predicted from prior experience of similar colors? This question is not trivial because different spectral filtering between the center and near-periphery introduces a “metamer” problem (Figure 1). That is, the filtering effect depends on the light spectrum, but the raw information upon which human color vision depends is limited to the activation ratio for the three types of retinal cone receptor (note that we use the word “chromaticity” for this color-relevant physiological response, while we will henceforth restrict the word “color” to referring to what we perceive). If the light spectra in our environment were infinitely varied, the question would be unsolvable: A given chromaticity in the periphery could be generated by an infinite number of spectra, all differently affected by filtering, and thus it would be impossible to predict the chromaticity of the same object in central vision. In this case, color-specific recalibration would be impossible and color distortions would be unsurprising. 
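The metamer problem can also be illustrated numerically: two spectra that differ only by a "metameric black" (a spectral component to which all three cone classes are blind) produce identical cone responses, yet are no longer matched once a short-wave filter is applied. The Gaussian-shaped sensitivities and filter below are toy stand-ins for the real cone fundamentals and macular pigment template, so this is a sketch of the principle rather than a quantitative model.

```python
import numpy as np
from scipy.linalg import null_space

wavelengths = np.linspace(400, 700, 31)          # nm, 10-nm steps

def gauss(peak, width=40.0):
    """Toy Gaussian spectral sensitivity or absorption band."""
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

fundamentals = np.stack([gauss(570), gauss(540), gauss(440)], axis=1)  # L, M, S

# A smooth surface-like spectrum, plus a metamer built by adding a scaled
# "metameric black" (a vector in the null space of the cone sensitivities).
spectrum_a = 1.0 + 0.5 * np.sin(wavelengths / 50.0)
black = null_space(fundamentals.T)[:, 0]
spectrum_b = spectrum_a + 0.4 * black / np.max(np.abs(black))   # stays positive

print(spectrum_a @ fundamentals)   # identical L, M, S responses...
print(spectrum_b @ fundamentals)   # ...so the two spectra match when unfiltered

filter_transmittance = 1.0 - 0.5 * gauss(460, 30.0)        # short-wave filter
print((spectrum_a * filter_transmittance) @ fundamentals)  # the filtered responses
print((spectrum_b * filter_transmittance) @ fundamentals)  # are no longer identical
```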
Figure 1
 
The metamer problem. (A) The same color can be produced by very different spectra, for example natural surfaces and computer monitors. Natural spectra tend to be much smoother (these examples come from our everyday objects and stimuli in the computer experiments described below). The effect of a filter, such as macular pigment (B) (taken from Bone et al., 1992), on a stimulus depends on its spectrum, and so spectra that produce identical colors in the periphery of vision would no longer produce identical colors in the center (C).
In practice, however, real stimuli are not infinitely variable in spectral shape, and the extent to which the system could learn and generalize for the stimuli it encounters depends on how constrained the chromaticity shifts are for those stimuli. To answer this empirical question, we measured the spectra of objects from our daily environment (Figure 3A) and calculated how systematic the chromaticity shifts are, and what the prediction error would be for new test stimuli. We used a range of purple stimuli, simply because these are known to be robustly affected by macular pigment (because they contain both long and short wavelengths, and macular pigment changes the relative ratio of short to long wavelengths) and therefore seem best suited to quantify perceptual distortions and the extent of recalibration. We model chromaticity shifts for everyday purple objects (after the expected normalization to the average stimulation) and we show they are highly systematic (Figure 3B, C), and thus the prediction error should be small if we assume lifelong learning (Figure 3D). 
The next part of the question then becomes: how large are observers' perceived color distortions compared to this prediction? Having modeled the cone responses to everyday spectra, we derive the predicted color distortions in the two alternative scenarios: (a) with learning, where the relationship between central and peripheral colors sampled from past experience is used, and (b) without learning, where this information is ignored. We then conducted a first experiment where observers performed 3-D color-matching staircases in which they adjusted the hue, saturation, and luminance of a central patch to match a patch in the near periphery. To anticipate, we find that perceived color shifts were halfway between those predicted from lifelong learning and those predicted from normalization in the absence of further recalibration. 
Color distortions in plain sight
Although there is uncontroversial evidence that colors can appear different across the visual field, it is unclear whether these observations are relevant to everyday viewing conditions. Perceptual nonhomogeneities have been consistently reported for the far periphery (>15°), but they are more controversial at smaller eccentricities (Parry et al., 2006; Webster, Halen, Meyers, Winkler, & Werner, 2010). Furthermore, spectra generated with computer monitors or monochromatic lights are quite different from spectra reflected from real objects under natural illumination (see Figure 1), and the visual system may have learned to compensate for natural stimuli (usually broadband spectra), but not artificial ones (possibly narrower-band spectra). A recent study confirmed that color distortions still occur for Munsell papers (which have broadband spectra like most natural objects), but this study only tested the far periphery (18°) (Parry et al., 2012). 
In a second series of experiments, we confirm the existence of systematic color distortions in the perception of both real surfaces (Experiments 2A and 2B) and artificial stimuli (Experiments 2C and 2D) between central and near-peripheral viewing (<6°). We also compared, for computer stimuli, the situation in which the eyes are fixed and the stimulus changes position (the common way to run lab experiments) with a situation in which the stimulus is fixed and the eyes move (a condition more akin to real life, where most objects are stationary and we do not expect them to change color). In all conditions, we found that purple stimuli appeared systematically pinker when viewed centrally, in agreement with the previous literature and our Experiment 1. This confirms that perceived distortions are not limited to special cases—in the far periphery, during flicker, or for unusual spectra—but can be measured in conditions more similar to natural viewing. 
Materials and methods
Modeling of cone responses to light spectra
In order to predict the perceived color shift expected with and without learning, we needed to model the chromaticity shifts expected between near periphery and central vision for each stimulus (Figure 2). 
Figure 2
 
Calculating chromaticity shifts between central and peripheral vision. Chromaticity is calculated by convolving an object's spectrum with cone sensitivity functions to obtain relative activation of each cone class, and then calculating suitable cone activation ratio; follow the solid or dashed lines from spectrum (A) to chromaticity values (D) for central and peripheral vision, respectively. (A) The spectrum of light that would enter the eye from one of our everyday stimuli, a purple plastic object (pink line), and also from a white china plate (gray line), under the same northern daylight. (B) The effective spectral sensitivity functions of our three classes of cone receptor—short-wave (blue), medium-wave (green), and long-wave (red). These functions already include an adjustment for macular pigment (and lens) appropriate for central vision (as can be seen by the slight kink in the blue line). (C) Adjusted spectral sensitivity functions for cones in the periphery; we have removed the filtering effect of macular pigment and then adjusted the relative sensitivity of each cone class so that the white point (here given by the white reference plate) maintains the same chromaticity across the visual field. (D) The chromaticity coordinates of a stimulus are calculated from the relative activation values (S, M, and L) for each cone class (short-, medium-, and long-wave). MB space uses S / (L + M) and L / (L + M) to represent the “yellow-blue” dimension and the “green-red” dimension, respectively. Distance from the white point (gray diamond) represents saturation, while hue is represented by the angle around a circle with white at the center. The white stimulus has the same chromaticity coordinates for central and peripheral vision because we adapted the cones to ensure that this was the case. The chromaticity of the purple stimulus shifts between central and peripheral vision, as marked by the pink arrow (Figure 3B shows equivalent arrows for all the everyday stimuli). For each stimulus, we take the hue angles for central (Ac) and peripheral (Ap) vision as our main measure. These are plotted against each other in Figure 3C.
Figure 3
 
Learning is achievable but unachieved. (A) The photograph shows the everyday objects whose spectra were measured, including both natural (fruits, flowers, and vegetables bought in local shops), and man-made objects (paint, plastic, fabric, paper, crayons, and makeup, all unselectively gathered around our houses and offices); we also included samples of the standard Windows color palette (top right) displayed on a LCD monitor. (B) Pink arrows show, in MB color space, the calculated chromaticity shifts due to macular pigment for each object between center and periphery of the visual field (see Figure 2 for method). Black arrows show the calculated chromaticity shift for the four reference stimuli used for the 3-D color-matching task (Experiment 1). The gray diamond marks the background gray/white point, which has the same chromaticity in center and periphery due to independent cone adaptation to the background in center and periphery (see Methods and Figure 2). (C) The relationship between central and peripheral hue (chromaticity angle; see Figure 2D) is highly systematic across everyday spectra (pink dots), as shown by the very small jitter around the smooth average (black curve), which represents the average calibration level we would expect the visual system to achieve through everyday experience. The straight black line represents actual equality between central and peripheral chromaticity angles, and where the pink dots approach it, only saturation and not hue is affected by macular pigment (see lower right of B, which represents pink-reds). Black dots show the four reference hues from Experiment 1. (D) Predicted and observed hue-shifts for each of the four peripheral reference stimuli in Experiment 1, expressed as angular deviation in MB space. Black bars depict the hue-shifts predicted assuming perfect cone adaptation, but no learning; this is the horizontal distance between the black dots and the straight black line in (C). Pink bars depict the hue-shift predicted by learning from the everyday spectra shown in (A) and (B); this is the distance between the black dots and the curved line in (C) (the average of the learning set), with 95% confidence intervals estimated by bootstrapping on the learning set. Red bars show the observed hue-shifts between the reference stimuli and their perceptual match in central vision (average on five observers with standard errors). The observed shifts are halfway between those predicted for adaptation alone and those predicted for learning.
Full spectra from 60 everyday purple stimuli and 10 standard purples from an LCD monitor (shown in Figure 3A) were measured from 390 to 780 nm (e.g., Figures 1A and 2A) with a spectroradiometer (SpectroCAL, Cambridge Research System, Cambridge, UK). The real surfaces were measured in a room illuminated with natural daylight from a large northeast-facing window. Chromaticity coordinates in the MacLeod and Boynton (MB) chromaticity space (MacLeod & Boynton, 1979) were calculated for central vision using the cone sensitivity functions of Stockman and Sharpe (2000) (Figure 2B) and the corresponding scaling factors (http://www.cvrl.org). These coordinates correspond to the starting point of the pink arrow in Figure 2D. 
To obtain the chromaticity coordinates in peripheral vision (the end point of the arrow in Figure 2D), we defined a peripheral counterpart to the MB space, differing from the standard one according to the following assumptions. First, we assumed that the main center-periphery difference that affects hue at such a short eccentricity (5.6°) is the density of macular pigment (see Discussion for consideration of other possible factors, such as cone ratios, cone wiring, or rod intrusion). Since the cone sensitivity functions we used (Stockman & Sharpe, 2000) are determined for central vision and already take macular pigment into account, we had to remove this filtering effect to model cone responses in the periphery, as illustrated in Figure 2C. 
To estimate the density variation of macular pigment between center and periphery, we determined the points of psychophysical equiluminance between gray and purple for five observers, centrally and at 5.6° eccentricity, using the minimum motion technique (Anstis & Cavanagh, 1983). Using the spectral absorption function and average optical density of macular pigment (Figure 1B), obtained from Bone et al. (1992), we calculated how much macular pigment should be removed between center and periphery to account for the change in observers' equiluminant settings. Removing all the macular pigment fitted the results for our five observers. We thus assumed no macular pigment in the periphery and removed the average optical density of macular pigment from the foveal cone sensitivity functions. 
Additionally, we adjusted the cone gain in the periphery to reflect the expected adaptation effect of chronic viewing through a filter (in this case, macular pigment), akin to von Kries adaptation (Worthey & Brill, 1986). To do so, we assumed that the cone response to our gray background should remain the same in central and peripheral vision (although part of this adjustment to white could happen at the level of the early opponent pathways rather than at the cone level, this does not matter for our conclusions). This assumption is consistent with the fact that white looks homogeneous across the visual field, as already reported by Maxwell (1860) and later confirmed by Webster and Leonard (2008), including by us (results not shown). 
Chromaticity coordinates were then calculated for each surface, for center and periphery, illustrating the predicted shift in cone response due to macular pigment (e.g., the pink arrow in Figure 2D). For analysis of these chromaticity shifts, we use the polar coordinates of saturation (distance from gray) and hue angle (position in the color circle), since hue angle (as illustrated by red and blue curved arrows in Figure 2D) is of most interest (the absolute predicted color shift is proportional to saturation). 
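The pipeline above (integrate each measured spectrum against the cone fundamentals, remove the macular pigment filter to obtain peripheral fundamentals, re-normalize each field to the shared white point, then express chromaticity as hue angle and saturation) can be summarized in a short sketch. It is illustrative only: it omits the CVRL scaling factors, uses a simplified white normalization, and the function names are ours rather than the authors' code.

```python
import numpy as np

def cone_responses(spectrum, fundamentals):
    """Integrate a light spectrum against L, M, S cone fundamentals.
    spectrum: (n_wavelengths,); fundamentals: (n_wavelengths, 3)."""
    return spectrum @ fundamentals                      # -> array([L, M, S])

def macleod_boynton(lms):
    """MacLeod-Boynton-style coordinates: L/(L+M) and S/(L+M)."""
    L, M, S = lms
    return np.array([L / (L + M), S / (L + M)])

def hue_and_saturation(mb, mb_white):
    """Polar coordinates of a chromaticity around the white point:
    hue angle in degrees and saturation as distance from white."""
    d = mb - mb_white
    return np.degrees(np.arctan2(d[1], d[0])), np.hypot(d[0], d[1])

def peripheral_fundamentals(central_fundamentals, macular_optical_density):
    """Undo the foveal macular pigment filtering built into the central
    (Stockman & Sharpe) fundamentals; density is given per wavelength."""
    transmittance = 10.0 ** (-macular_optical_density)
    return central_fundamentals / transmittance[:, None]

def central_peripheral_hue_shift(spectrum, white_spectrum, fund_central, fund_peripheral):
    """Hue-angle difference (degrees) for one surface between central and
    peripheral viewing, after normalizing each field to the same white
    point (a von Kries-style adjustment of the cone gains)."""
    angles = []
    for fund in (fund_central, fund_peripheral):
        lms = cone_responses(spectrum, fund) / cone_responses(white_spectrum, fund)
        mb_white = macleod_boynton(np.ones(3))          # white maps here by construction
        angles.append(hue_and_saturation(macleod_boynton(lms), mb_white)[0])
    return angles[0] - angles[1]
```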
Experiment 1
Participants
Five observers participated in Experiment 1. All had normal vision, and naive participants received payment for their time. 
Stimuli and procedure
Stimuli were presented on a 21″ Sony GDM-F520 Trinitron monitor and controlled by a ViSaGe (Cambridge Research System, Cambridge, UK). Stimuli were viewed binocularly at a distance of 72 cm, maintained by a chin rest. Eye movements were recorded by a CRS high-speed video eye-tracker sampling at 250 Hz. Manual responses were made with a CRS CB6 button box. The stimuli comprised a pair of 1.9° wide circular, isoluminant colored patches presented on a uniform gray background, positioned at 2.8° to the left and right of screen center. A thin black circle surrounded the patches, and black dots above and below one circle indicated where to fixate. Observers adjusted the foveal patch to match the peripheral patch in hue, saturation, and luminance, in order to find the precise color shifts experienced between fovea and near-periphery and compare these to the predictions from the modeling of real spectra described above. We used four different peripheral stimuli (Table 1; counterbalanced order across participants). The gray background was at 34 cd/m2 (MB: 0.696, 0.031). For each peripheral color, results were obtained on at least six staircases. The first three staircases started with a random chromaticity (within limits) and required observers to focus first on one dimension (hue, saturation, luminance; counterbalanced) before freely choosing which dimension to adjust on each trial, using three sets of button pairs. Observers could continue adjustments for as long as it took to find a satisfactory match. The fourth staircase used the average of the previous three settings as a starting point. The fifth and sixth staircases started with the setting from the last session. The final results were an average of the matches from all six staircases. The average matches were then displayed and measured with the spectroradiometer for the purpose of modeling cone responses, as described above. 
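For concreteness, a minimal sketch of how such a match can be represented and updated during the adjustment procedure, holding the current setting in polar MB coordinates (hue angle, saturation) plus luminance. The step sizes and function names are illustrative assumptions, and conversion of the final setting to monitor RGB through the display calibration is assumed but not shown.

```python
import numpy as np

STEP = {"hue": 1.0, "sat": 0.002, "lum": 0.5}   # illustrative step sizes per button press

def adjust(match, dimension, direction):
    """One button press: nudge the chosen dimension of the foveal match.
    `match` holds hue angle (deg), saturation (radius), and luminance (cd/m2),
    all defined relative to the background white point in MB space."""
    new = dict(match)
    new[dimension] += direction * STEP[dimension]
    new["sat"] = max(new["sat"], 0.0)           # saturation cannot go below zero
    return new

def to_mb(match, mb_white):
    """Convert the polar setting back to MacLeod-Boynton coordinates."""
    angle = np.radians(match["hue"])
    return mb_white + match["sat"] * np.array([np.cos(angle), np.sin(angle)])

# Example: start from a random setting and apply a few observer responses.
setting = {"hue": 20.0, "sat": 0.020, "lum": 34.0}
for dim, sign in [("hue", +1), ("hue", +1), ("sat", -1)]:
    setting = adjust(setting, dim, sign)
print(to_mb(setting, mb_white=np.array([0.696, 0.031])))
```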
Table 1
 
MacLeod and Boynton (MB) coordinates for the natural and artificial stimuli used in each experiment and shifts in chromaticity angle in degrees (hue-shifts) between the central and peripheral stimulus in each condition, as estimated using the modeling described in Figure 2 (after normalization to white and without learning). Notes: The reference stimuli were presented in the center in Experiments 2A and 2B (stimuli 4 and 5, respectively) and in the periphery in Experiments 1 (all four stimuli), 2C, and 2D (stimulus 5). For Experiments 2C and 2D, the figures given are for the first session, which are the only figures that all participants had in common; in sessions two and three, the point of subjective equality (PSE, i.e., the angle in color space at which the foveal patch perceptually matches the peripheral patch) was calculated for each observer from the previous session, and the foveal patches were varied in smaller increments about this value. FM: Farnsworth-Munsell.
Experiment         Measure         1 (bluer)  2      3      4      5      6      7      8      9 (pinker)
1 (computer)       L / (L + M)     0.680      0.693  0.705  0.716
                   S / (L + M)     0.063      0.064  0.063  0.056
                   Hue-shift (°)   13         18     21     18
2A (smoothies)     L / (L + M)     0.693      0.698  0.703  0.706  0.711  0.716  0.722
                   S / (L + M)     0.026      0.025  0.024  0.024  0.023  0.022  0.021
                   Hue-shift (°)   43         7.6    1.9    0.4    −1.4   −2.8   −4.3
2B (FM caps)       L / (L + M)     0.695      0.696  0.699  0.702  0.705  0.707  0.709  0.713  0.714
                   S / (L + M)     0.028      0.028  0.027  0.026  0.025  0.025  0.024  0.023  0.023
                   Hue-shift (°)   49         31     16     8.3    1.1    −2.5   −4.9   −7.5   −8.9
2C, 2D (computer)  L / (L + M)     0.678      0.684  0.690  0.696  0.702  0.707  0.712  0.717  0.721
                   S / (L + M)     0.060      0.062  0.062  0.062  0.061  0.059  0.056  0.053  0.048
                   Hue-shift (°)   −22        −11    −0.5   8.9    18     26     34     42     50
Experiments 2A and 2B: Real surfaces
Participants
Four observers participated in Experiment 2A and five in Experiment 2B. Most were naive to the purpose of the study. All had normal vision, and naive participants received payment for their time. 
Stimuli and procedure
The natural stimuli in Experiment 2A were fruit juices mixed to obtain seven hues from bluer to pinker (Figure 4A) with equal perceptual steps (as defined in a preliminary experiment), and stored in clear cylindrical Perspex pots. Differing ratios of purple grape and cranberry juice were combined with milk to approximately equate luminance levels (±3%; see Table 1 for chromaticities). Stimuli in Experiment 2B were selected from the blue to pink range of the Farnsworth-Munsell 100 Hue Test (cap no. 70 to 78; Figure 4B) and comprised approximately isoluminant colored pigments (±5%) on matte paper. Incremental hue variations between the caps are based on just noticeable differences (Farnsworth, 1943). The gray background had mean coordinates (0.689, 0.023) in MB color space (MacLeod & Boynton, 1979); luminance was about 200 cd/m2 and varied from day to day according to natural daylight. Both experiments were run in a room illuminated with natural daylight from a large northeast-facing window behind the observers' seat (the same room used for spectral measurements above). Stimuli were presented to observers at eye level against a gray background card, which was manually lifted to expose the stimuli. These were positioned to subtend 1.9° of visual field at 2.8° eccentricity to the left and right of straight ahead. Observers were asked to fixate either the left or the right surface on each trial (as indicated by a black dot on the card), so that the other surface would be seen in periphery at 5.6° eccentricity. For each trial, the foveal and peripheral stimuli were displayed simultaneously for approximately 1 s, and the observers were required to verbally state if the foveal stimulus was "pinker" or "bluer" than the peripheral one. Gaze direction was monitored by the experimenter, and trials in which fixation was not accurate were repeated at the end of the trial block. Each peripheral chromaticity was repeated 10 times in a randomly shuffled order. Each session took about 40 min. Psychometric functions were fitted to the data for each observer. 
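A minimal sketch of the analysis for such constant-stimulus data: fit a psychometric function (a cumulative Gaussian here, as an assumption; the authors do not specify their fitting routine) to the proportion of trials on which the peripheral stimulus was judged pinker than the central reference, and read off the point of subjective equality (PSE).

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, pse, sigma):
    """Cumulative Gaussian: probability of a 'peripheral is pinker' response."""
    return norm.cdf(x, loc=pse, scale=sigma)

# Hypothetical data for one observer: test stimulus position along the
# blue-to-pink series (1 = bluest pot/cap) and proportion of 'pinker'
# responses out of the 10 repetitions per chromaticity.
stimulus = np.arange(1, 8)
p_pinker = np.array([0.0, 0.1, 0.2, 0.6, 0.9, 1.0, 1.0])

(pse, sigma), _ = curve_fit(psychometric, stimulus, p_pinker,
                            p0=[4.0, 1.0], bounds=([1.0, 0.1], [7.0, 10.0]))
print(f"PSE = {pse:.2f}, slope parameter = {sigma:.2f}")
```

A PSE displaced toward the pink end of the series, relative to the stimulus that physically matches the central reference, indicates that the peripheral stimulus must be physically pinker to appear equal, i.e., that stimuli look pinker centrally and bluer peripherally (Figure 4).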
Figure 4
 
Purple objects look bluer in the near-periphery than in the center of the visual field. For each experiment, the observers' psychometric functions are plotted after a constant stimulus experiment comparing colors in the center and in periphery. If colors appeared identical in center and periphery of the visual field, the curves would go through the intersection of the gray lines marking “same” and “50%.” That they are biased to the left indicates that the stimuli are perceived to be pinker in the center / bluer in periphery. Note that in Experiments 2A and 2B, the central stimulus remained constant and the peripheral stimuli varied across trials, while in Experiments 2C and 2D, the peripheral stimulus was constant and the central one varied (hence the change in abscissa; see Methods). (A) Experiment 2A. Natural stimuli were produced by mixing cranberry and red grape juice with milk in various proportions. The seven ticks on the abscissa correspond to the seven test pots presented at the top of the panel. Observers were presented with the reference sample centrally (far right on the picture) followed by one of the seven test pots peripherally and judged whether the peripheral pot was pinker or bluer than the reference. (B) Experiment 2B. Stimuli were the pink-to-blue range of the Farnsworth-Munsell 100 Hue Test caps. The nine ticks correspond to the nine caps presented at the top of the panel. The same procedure was employed as in (A). (C–D) Experiments 2C and 2D. Computer-generated stimuli allowed us to compare fixation (C) and eye movement (D) conditions. A reference hue was briefly present in periphery, followed 1 s later by a central stimulus of varying hue. In the fixation condition, the change from center to periphery was obtained by changing the position of the stimulus on the screen (C, top panel), while in the eye movement condition, the stimuli appeared in the same location and the eyes moved (D, top panel). The stimuli on the abscissa range from stimulus 3 on the left (“bluer”) to 7 on the right (“pinker”), as described in Table 1.
We measured the spectra reflected from the seven pots and the nine caps under natural daylight, together with that of the white background. Repeating our modeling from Experiment 1, we estimated the hue-shift corresponding to each subcondition, i.e., the difference in chromaticity angle between the reference spectra presented in central vision and each of the other spectra presented for comparison in periphery. These figures are given in Table 1 and correspond to the hue-shift after normalization to white and in the absence of further recalibration. The shifts in chromaticity angle predicted for the reference pot and cap used for central stimulation (0.4° and 1.1°) are numerically much smaller than the ones in the computer-controlled experiments, because the former were much pinker: The chromatic angles for the reference smoothie pot and Munsell cap were 83° and 89°, while the reference stimuli in Experiment 1 were −26°, −6°, 16°, and 39°, the latter angles producing larger distortions, particularly so when quantified in the MB space (see Figure 3C). For the smoothie experiment, the chosen hues were constrained by the fact that blue fruits also tend to be quite dark, so we could not add too much of them. The Munsell caps were chosen to match the chromaticity range of the smoothie spectra. That said, the MB space is not scaled according to appearance, so a numerically small shift in the pink area could be just as detectable as a larger shift in the bluer area. 
Experiments 2C and 2D: Computer stimuli with and without eye movements
Stimuli and apparatus were the same as for Experiment 1, except where specified below, but the procedure was akin to Experiments 2A and 2B. The chromaticity of the peripheral patch was constant while the foveal patch varied in hue angle between trials (Table 1). The gray background was at 34 cd/m2 (MB: 0.681, 0.030). For each trial, the observer judged whether the foveal patch was "bluer" or "pinker" than the peripheral patch. At the start of each trial, the circular frames and fixation indicator dots were presented for 1 s so that fixation could be acquired. The patches were then presented sequentially for 100 ms with an interstimulus interval of 1 s. The peripheral patch was always presented first, as pilot data revealed no differences between periphery-fovea and fovea-periphery presentation orders and this order was less prone to undesired eye movements. Note that the physical color of the peripheral stimulus was fixed in Experiments 2C and 2D, while the central stimulus changed color between trials. This change from Experiments 2A and 2B was an attempt to benefit from higher sensitivity in central vision and therefore to improve discrimination (the slopes of the psychometric functions). 
In Experiment 2C (“Fixation”), the fixation dots remained on either the left or right throughout the trial (Figure 4C, upper panel). In Experiment 2D (“eye movement”), the fixation dots switched from the left (or right) to the right (or left) just after the first stimulus offset so that the observers performed a single saccade between the presentation of the peripheral and foveal patch (Figure 4D, upper panel). A full screen mask comprised of randomly generated circles with various colors was displayed for 600 ms between trials to minimize retinal adaptation. Observers completed three sessions of each condition, each comprising 90 trials and lasting 40 min. Eye movements were recorded and checked off-line, and trials were discarded if the observer did not fixate or move their eyes appropriately during the stimulus presentation sequence. 
We measured the spectra of the range of stimuli used in the first session, which all participants had in common (Table 1). The chromaticity angle of the reference peripheral stimulus was 32°, predicting a large shift of 18° between center and periphery. Note that, in Table 1, the hue-shifts estimated between central and peripheral stimuli are reversed compared to Experiments 2A and 2B because the reference stimulus is now the peripheral one. 
Results
Modeling cone responses: Color constancy across the visual field is achievable
Figure 3B shows the chromaticity shifts between central and peripheral vision for our "learning set" of everyday stimuli (shown in Figure 3A). These assume that the main filtering is from macular pigment and that there is von Kries adaptation appropriate for the gray point. These chromaticity shifts increased with saturation and changed gradually with chromatic angle (i.e., the physiological counterpart of perceived hue; Figure 3C) but, crucially, proved highly systematic across surfaces of similar saturation and hue. The extent of this systematicity can be appreciated in Figure 3C from the very small departure of each surface from the smooth average across all surfaces. In other words, the chromaticity of a patch seen in central vision is highly predictive of its chromaticity when seen in periphery and vice versa. Thus, the conditions are met for learning and generalization to new surfaces. 
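One way to picture this "achievable learning" benchmark is to fit a smooth peripheral-to-central hue-angle mapping over the learning set and take, for each test stimulus, the residual between its true central angle and the mapped prediction. The polynomial smoother below is an illustrative choice, not necessarily the one behind the black curve in Figure 3C; the bootstrap mirrors the confidence intervals reported in Figure 3D.

```python
import numpy as np

def learned_mapping(peripheral_angles, central_angles, degree=3):
    """Fit a smooth peripheral -> central hue-angle mapping over the learning set."""
    return np.poly1d(np.polyfit(peripheral_angles, central_angles, degree))

def residual_after_learning(test_peripheral, test_central, mapping):
    """Hue shift still expected after perfect learning: the difference between
    a test stimulus's actual central angle and the learned prediction."""
    return test_central - mapping(test_peripheral)

def bootstrap_ci(peripheral, central, test_p, test_c, n_boot=1000, seed=0):
    """Bootstrap the learning set to get a 95% CI on the residual."""
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(peripheral), len(peripheral))
        m = learned_mapping(peripheral[idx], central[idx])
        draws.append(residual_after_learning(test_p, test_c, m))
    return np.percentile(draws, [2.5, 97.5])
```

By contrast, the "no learning" prediction for the same test stimulus is simply `test_central - test_peripheral`, its distance from the identity line in Figure 3C.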
Experiment 1: Color constancy across the visual field is unachieved
According to the lifelong learning hypothesis, perceived color between center and near-periphery should shift only if the chromaticity shift of the stimulus being judged differs from the systematic effects learnt from everyday stimuli. Thus the error we expect in the color matches of Experiment 1 is given by the degree to which the computer stimuli employed differ in their chromatic shifts from the mean of the learning set. The computer stimuli are represented by the black arrows and dots in Figure 3B and C. The residual perceived color shift expected after learning is the difference between the black dots and the systematic curve of natural stimuli in Figure 3C and is plotted as the pink bars in Figure 3D. On the other hand, if no learning has occurred, and perceptual experience is entirely predicted by cone response (after von Kries adaptation to the mean illumination), one would expect the perceived hue-shifts depicted by the black bars in Figure 3D. Our observers' results (red bars) lie halfway between these two extreme hypotheses. The observed shifts were higher (all p < 0.02, all t[4] > 3.3) than those predicted from learning on everyday surfaces of similar chromaticity (pink bars). They were also smaller (all p < 0.05, t[4] > 2.2) than those predicted by the cone response alone (black bars). Perceived differences varied across participants, but were on average best fit by a difference in density of macular pigment peaking at 0.22, i.e., 64% of the density difference suggested by minimum motion settings in the same participants (peaking at 0.35). 
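The effective-density estimate can be framed as a small optimization: find the peak macular pigment density that, when fed through the same cone model, best reproduces a participant's observed matches. The sketch below assumes a hypothetical function `modeled_hue_shifts(density)` that rescales the Bone et al. (1992) template inside the model and returns the predicted shifts for the four reference stimuli.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_effective_density(observed_shifts, modeled_hue_shifts, full_density=0.35):
    """Peak macular pigment density whose modeled hue shifts best match the
    observed ones (least squares), searched between 0 and the density
    implied by the minimum-motion settings."""
    def loss(density):
        return np.sum((np.asarray(modeled_hue_shifts(density)) - observed_shifts) ** 2)
    best = minimize_scalar(loss, bounds=(0.0, full_density), method="bounded").x
    # Return the effective density and its ratio to the full density
    # (roughly, the proportion of the pigment difference still expressed
    # in the matches rather than compensated).
    return best, best / full_density
```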
Experiment 2: Similar perceived color distortion in every case
For both sets of real surfaces viewed under natural light (fruit juices in transparent pots and Farnsworth-Munsell 100 Hue Test caps) we found that for subjective equality to be observed between near-periphery and center, the stimuli in the periphery need to be physically pinker (Figure 4A, B), just as we found in Experiment 1. In other words, even real purple surfaces in natural viewing conditions appear pinker in the center, consistent with little compensation for the higher density of macular pigment absorbing in the short wavelength range. 
We also replicated the finding with the computer stimuli while varying observation condition: eyes fixed and stimulus changing position (the common way to run lab experiments, Experiment 2C, Figure 4C), or stimulus fixed and the eyes moving (a condition more akin to real life where most objects are stationary and we do not expect them to change color, Experiment 2D, Figure 4D). In both conditions stimuli appeared similarly pinker when viewed centrally. 
The purpose of Experiment 2 was simply to confirm the presence of a perceptual bias in these new conditions. Because Experiment 2 involves a constant stimulus design, we do not have the spectrum of the stimuli corresponding to the points of subjective equality (PSEs) for each participant in each condition. This prevents us from quantifying exactly the amount of observed distortion and comparing it to the predictions from learning and no learning, as we did in Figure 3D. However, the predicted hue-shifts presented in Table 1 allow us to make some predictions for where our observers' PSEs should be in the absence of learning. For Experiments 2A and 2B, the predicted PSEs should all be towards pink, but no larger than one step away (one cap from the reference stimulus for the Munsell caps). This is what we find for all our observers in both conditions. In Experiments 2C and 2D, the same logic would predict PSEs between stimuli 3 and 4 from Table 1 (stimulus 5 being the reference), while most of our observers' PSEs were between stimuli 4 and 5 (with only one participant being between 3 and 4 in both Experiments 2C and 2D, and one being around the reference stimulus 5 in both 2C and 2D). Thus, this tends to confirm that observed distortions are smaller than predicted from bare adaptation to white. 
Discussion
Our results show that the distortion introduced by macular pigment between central and near-peripheral vision systematically affects color perception, even for real surfaces in natural light, and regardless of whether the eyes are fixated or move relative to a stationary object. This is consistent with previous reports showing similar effects for the far periphery and for artificial spectra (McKeefry et al., 2007; Parry et al., 2006; Parry et al., 2012). Our modeling of cone responses to everyday purple surfaces shows that the information is theoretically available to keep colors calibrated through exposure to everyday visual scenes, because the distortion is highly systematic and thus predictable. However, perceptual biases are clearly larger than predicted if perception were best calibrated through lifelong learning. The observed bias is also clearly smaller than predicted from bare normalization to white without further recalibration, confirming the findings of a previous study (Webster et al., 2010). There are two perspectives to take on our observed data being halfway between "no learning" and "achievable learning": emphasizing the "half-full" or emphasizing the "half-empty." Both perspectives lead to interesting and counterintuitive conclusions, as elaborated further below. 
Color bias from infancy to adulthood
In Figure 5A we sketch the time course of the emergence of macular pigment in the retina, which is related to the development of the fovea and to eating vegetables and fruits. One study has reported the density of macular pigment in infant eyes as a function of age (Bone, Landrum, Fernandez, & Tarsis, 1988). Their results suggest density is very low at birth (∼12% of adult level) and reaches adult levels well before the age of 2. 
Figure 5
 
Schematic representation of the evolution in time of the density of macular pigment (A) and perceptual bias predicted by four different scenarios (B). The points on the far right represent the results from Figure 3D: the color bias expected with adaptation but no learning (black), the bias observed in adults (red, point J), and the residual bias on computer stimuli predicted after achievable learning from everyday objects (pink). Because the density of macular pigment at birth is very low, all scenarios would hypothesize a bias close to null shortly after birth (I). With no learning during the lifespan and a hard-wired ability to compare colors in the infant, perceptual bias would track the density of macular pigment (black line). With learning throughout life based on viewing everyday objects, the bias would stay close to zero. A small bias would emerge for each object if the effect of macular pigment differs from the learned average for all objects (pink line represents the computer stimuli from Figure 3D). Both of these scenarios are ruled out by our data because they do not reach point J, the measured bias in adults (from Figure 3D). Plausible scenarios consistent with our data lie between the red dashed and dotted lines (red shaded area). Note that the diagram represents only systematic biases; we do, of course, expect the precision and reliability of color comparisons to increase during infancy and childhood in any scenario, but that is separate from the question of systematic bias, and thus not represented in the diagram.
In Figure 5B we trace our modeled and observed results back over the lifespan to project four potential scenarios for the emergence of, and compensatory calibration for, the perceived color bias due to macular pigment. Because the density of macular pigment is very low at birth, all scenarios would hypothesize a bias close to null shortly after birth, i.e., near point I. The points on the far right represent the results from Figure 3D: the color bias expected with adaptation but no learning (black), the bias observed in adults (red, point J), and the residual bias on computer stimuli predicted after achievable learning from everyday objects (pink). Only scenarios leading to point J in adults are consistent with our data. If there were no learning at all since birth, the perceived color bias would simply grow as macular pigment density increases, as traced by the black line. This scenario is clearly ruled out by our observed data and by that of previous studies (McKeefry et al., 2007; Parry et al., 2006; Parry et al., 2012). If the ability to compare colors across the visual field were fully learnt from experience, and continually updated by it, the perceived color bias would have followed the pink line for our computer stimuli (for any given object, the bias would be the difference between the effect of macular pigment on that object and the outcome of learning based on all experienced objects). This is the scenario predicted, for example, by the sensorimotor account of perception, to which we return later in the Discussion. Suffice it to say here that it, too, is clearly ruled out by our observed data. 
The red lines depict the extremes of what we regard as the plausible ways to reach point J, the observed data in adults, and we expect a reasonable scenario to lie somewhere in the shaded area between them. The two extreme cases arise from different traditional approaches to perception and differ critically in the emphasis they give to compensatory calibration (calibration in response to an existing bias) versus preventive calibration built into an initial learning process (calibration that prevents a bias from occurring in the first place). This critical difference in emphasis translates into whether the perceived bias is implicitly assumed ever to have been greater than that measured in adults (dashed line), or whether infants were never more biased than adults (dotted line). 
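To make the logic of these trajectories concrete, here is a minimal toy sketch in Python. All of its quantities (the pigment growth curve, the learning rate, and the "unnoticeable" threshold) are illustrative assumptions, not values estimated in this study; the sketch only traces how a bias that mirrors pigment density (no learning) differs from one that is pulled back by compensatory calibration once it becomes noticeable.

```python
import numpy as np

# Toy model of the Figure 5B trajectories (illustrative assumptions only).
age = np.linspace(0.0, 20.0, 2001)                       # years
mp = 0.12 + 0.88 / (1.0 + np.exp(-(age - 1.0) / 0.25))   # pigment density, ~12% of adult level at birth
bias_no_learning = mp - mp[0]                            # black line: bias tracks pigment growth

# Red dashed line: compensatory calibration pulls the bias back towards an
# "unnoticeable" level once it exceeds that level (rate and level are assumptions).
unnoticeable, rate = 0.5, 2.0
dt = age[1] - age[0]
bias = np.zeros_like(age)
for i in range(1, age.size):
    drive = mp[i] - mp[i - 1]                            # new bias injected by pigment growth
    relax = rate * max(bias[i - 1] - unnoticeable, 0.0) * dt
    bias[i] = bias[i - 1] + drive - relax

print(f"adult bias, no learning:              {bias_no_learning[-1]:.2f}")
print(f"adult bias, compensatory calibration: {bias[-1]:.2f}")
```

Under these assumptions the no-learning trajectory simply mirrors pigment density, whereas the compensatory trajectory first rises and then relaxes towards the level at which the bias becomes unnoticeable, as sketched by the dashed line in Figure 5B.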
No-learning as baseline
The fact that observed distortions are systematically smaller than predicted from normalization to the average stimulation has traditionally been interpreted as evidence for compensatory calibration and taken to support lifelong plasticity (e.g., Webster et al., 2010). This viewpoint, depicted by the red dashed line in Figure 5B, implicitly takes the "no-learning" case, depicted by the black line, as the important point of comparison. The underlying assumption is that, as the density of macular pigment increases in infancy, a perceptual bias between center and periphery develops and approaches the no-learning state (point X). This perceptual bias is then gradually compensated for through visual experience until the error signal is small enough to be unnoticeable in most circumstances (point J). For the dashed line as depicted, we have assumed that compensatory learning starts as soon as the bias rises above this "unnoticeable" amount, and that once macular pigment density stabilizes in the infant, the bias drops back towards this asymptotic level (the curve is illustrative; we make no assumption about the learning rate relative to actual years of age). 
We believe this scenario would seem plausible to many vision scientists. Clearly, departure from the no-learning baseline does indicate that learning occurred at some stage. But learning does not necessarily equate to compensatory calibration if other forms of learning are also envisaged. Indeed, if perception is not hard-wired and must be learned from visual experience during infancy, the opportunity exists to account for the presence of macular pigment in the same process that enables infants to compare colors at all. With such preventive calibration forming part of the early learning process, we would not necessarily expect a bias to grow as soon as the density of macular pigment increases, nor would we expect the bias to be anywhere near point X (the no-learning reference) at any time in development. In the extreme case, provided the time window of high plasticity in infancy is long enough, compensatory calibration could even become unnecessary to account for our data. This possibility, illustrated by the red dotted line in Figure 5B, is explored below. 
No-bias as baseline
The data also show that adult color bias is systematically greater than it could be if learning was accomplished to the level achievable by visual information. Here, the “achievable learning” state (pink line in Figure 5B) is emphasized as the most important point of comparison. The red dotted line depicts a developmental scenario in which the visual system learns to compare colors across space in the first year or so of life, automatically taking into account the effects of macular pigment, as far as is possible from visual experience, and thus staying mostly bias-free (albeit immature and imprecise) until about age 1 (point Y). The observed bias in adults is then accounted for by the growth of macular pigment outpacing the plasticity of the system in the second year of life. This lack of continued learning might be because other mechanisms took over to make the bias unnoticeable, such as surface extrapolation (“filling in”) and change blindness. The counter-intuitive consequence of this reasoning is that the more plasticity is assumed during infancy, the less biased the infant visual system should be at any stage of its development, and thus the less compensatory calibration is needed to account for our data. 
The most plausible scenario is likely to be a hybrid between the two extremes outlined above, somewhere within the red shaded area. Whether it should lie closer to the dashed or dotted line (Figure 5B) will depend on one's theoretical viewpoint regarding how colors can be compared across space and whether this ability is hard-wired or acquired, as is discussed in the next section. 
How are colors compared across space?
The mechanisms underlying the ability to compare colors across the visual field are still largely unknown (Danilova & Mollon, 2006). One could hypothesize that for a central and a peripheral surface to look the same, it is necessary and sufficient that the neural responses evoked individually by each of these surfaces in visual areas are the same. However, this viewpoint is incomplete, as it simply avoids addressing the very question of how surfaces are actually compared (implicitly leaving the hard job to a homunculus). Furthermore, it relies on the existence of an intrinsic link between the activity of certain neuronal populations and perceptual content. However, without reference to what neuronal activity reflects in the outside world, it is impossible to define what such intrinsic similarity between neural responses actually means. This is particularly obvious in early visual cortex, where signals from central and peripheral visual fields are processed by very different populations of receptors and neurons—how does the system know what corresponds to what? 
Imagine an electrophysiologist recording from two distant electrodes in the visual cortex of an unfamiliar monkey, without any external knowledge of what the monkey is watching. Without previous knowledge of the receptive field and color sensitivity of each neuron, he would at first have no way of knowing whether or not the two neurons are responding to the same stimulus in the outside world. Where he records would make no difference to this argument: There is nothing in the properties or statistics of the individual responses that is intrinsically informative about what is out there, and thus no way to compare things out there on the sole basis of an isolated pattern of neuronal response. Only when the electrophysiologist starts analyzing the correlation between patterns of neuronal response across time can he infer the similarity of what is out there from the comparison of these patterns (see O'Regan & Noe, 2001 for a similar analogy). 
The above arguments, as well as those more specific to the neurophysiology of color pathways developed in the Introduction (section "Early and lifelong learning in color perception"), lead us to an alternative hypothesis—that of a comparison mechanism able to learn when two patterns of neural activity, one central and one peripheral, signal the same surface out in the world. Neurons able to compare colors across large distances have not been found, and evidence that discrimination does not deteriorate as distance across space increases up to 10°, even across visual hemifields, makes their existence very unlikely (Danilova & Mollon, 2006). This suggests that colors are compared at a high level, implying an essential role for learning from visual exposure. This process would be conceptually similar to the agreement between two people speaking different languages that the words "rouge" and "red" mean the same thing, independently of whether the two words actually sound identical. Correspondences between center and periphery could be built, for instance, by comparing a single surface across different positions in the visual field thanks to eye or body movements, as previously suggested (Danilova & Mollon, 2006). 
Such learning, because it is derived from statistical relationships, would automatically take into account any prereceptoral filtering, so long as its effects on everyday objects are systematic. For this reason, the correspondences built during this early stage have no reason to be systematically biased one way or another (hence the left-hand side of the red dotted line is coincident with the pink line). In other words, this early stage of learning from experience can only result in a bias-free system (though it could be very imprecise). 
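A minimal sketch of this idea, under toy assumptions rather than our measured spectra (the shape of the simulated filter effect, the noise level, and the polynomial fit are all placeholders): pairs of central and peripheral responses to the same surfaces accumulate across fixations, a smooth correspondence is fitted to them, and any systematic prereceptoral filtering is absorbed into that correspondence, leaving only a much smaller residual bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "true" effect of a systematic prereceptoral filter on hue angle (degrees);
# the exact shape and amplitude are assumptions made for illustration.
def filter_effect(central_hue):
    return central_hue + 8.0 * np.sin(np.radians(central_hue - 300.0))

# Exposure: many surfaces seen both centrally and peripherally (e.g., across eye
# movements), each view corrupted by a little noise.
central = rng.uniform(0.0, 360.0, 2000)
peripheral = filter_effect(central) + rng.normal(0.0, 1.0, central.size)

# Learn the center-periphery correspondence as a smooth mapping (a low-order
# polynomial on a rescaled axis here; the real system presumably uses something richer).
x = central / 360.0
learned = np.poly1d(np.polyfit(x, peripheral, deg=5))

# After learning, the residual bias for a surface is the gap between its actual
# peripheral response and the learned correspondence, not the raw filter effect.
test = np.linspace(260.0, 340.0, 9)                     # purple-ish hue angles
raw_bias = filter_effect(test) - test                   # what "no learning" would predict
residual = filter_effect(test) - learned(test / 360.0)  # what remains after learning
print("no-learning bias (deg):        ", np.round(raw_bias, 2))
print("residual after learning (deg): ", np.round(residual, 2))
```

The point of the sketch is only that the learned mapping, not the identity line, becomes the system's definition of "same color across space," so a stable systematic filter produces no lasting bias.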
What matters then for predicting the perceptual bias between center and periphery is not similarity between neural responses, but stability across time. Thus, if the distortion remains the same, one would expect the spatial comparison mechanism to remain calibrated, so that colors should look the same in the center and in the periphery. However, if the distortion happened to change after the ability to compare colors across space was acquired and initially calibrated, then the system would become uncalibrated, resulting in perceptual biases. From this point of view, the key message from the data is that a bias has been introduced after initial learning, rather than that compensatory learning has happened since an initially biased state. Since macular pigment reaches adult levels before age 2, it follows that learning mainly occurred earlier than this. 
In summary, if we accept that perception is not fully hard-wired and has to be learnt, at least to some degree, and that plasticity is greater in infancy than in adulthood, the most plausible model will be closer to the dotted red line in Figure 5B than to the dashed line. Furthermore, as soon as any substantial infant plasticity is assumed, we would not at any point of our lives have had a perceptual bias close to the prediction derived for "no learning." This means that the difference between this prediction and the observed data does not necessarily reflect the extent of compensatory calibration—i.e., calibration in response to an existing bias or error signal. 
An explicit prediction arises from this logic. The density of macular pigment in the eye is known to reflect the richness of carotenoids in the diet, which varies considerably between adults and also between infants (there are, for instance, 70 times more carotenoids in breast milk from a British mother than in standard formula milk [Zimmer & Hammond, 2007]). Our hypothesis predicts that the amplitude of perceived color differences between center and periphery should correlate, not so much with the current density of macular pigment in adults, but with how much that density has changed in each individual since the end of the early acquisition period. 
Theoretical implications for the sensorimotor account of perception
Although the conceptual framework we adopt above is consistent with, and indebted to, the influential sensorimotor account of visual perception (O'Regan & Noe, 2001), our results put strong constraints on essential aspects of this theory. The theory builds partly upon change blindness (to which we return below), but its main (and more specific) tenet is an attempt to solve the mind-body problem by proposing that perceptual experiences actually are learnt rules of sensorimotor interaction with the environment. While most modern accounts of vision assume some role of learning in perception, O'Regan and Noe's (2001) theory goes further, proposing that learned laws of sensorimotor contingencies are constitutive of perceptual content. Specifically for color perception, the proposal is formulated as "the visual experience of a red color patch depends on the structure of the changes in sensory input that occur when you move your eyes around relative to the patch" (O'Regan & Noe, 2001, p. 951). This would seem to imply that as the structure of changes evolves (e.g., due to increasing density of macular pigment), visual experience should adjust accordingly. Importantly, to our understanding, this prediction does not require the distortion to be consciously noticeable (awareness being a by-product of exercising sensorimotor contingencies, but in no case a causal factor in this theory), nor to generate a clear "error signal" for visual experience to adjust. Thus, because the perceptual distortions discussed here are unnoticeable most of the time, this example is able to distinguish the prediction of O'Regan and Noe's account from that of more traditional learning theories of perception, where an error signal would be required. Our results are more consistent with accounts in which recalibration occurs only when it is needed. 
O'Regan and Noe (2001) make the following explicit prediction, inspired by Kohler's (1962) seminal studies with goggles: If it were arranged that red surfaces turn green when transferred to peripheral vision by an eye movement, then after enough exposure, observers should perceive red in central vision and green in peripheral vision as the same color, i.e., perceptual stability would be recovered. However, despite lifelong exposure to a very similar (although subtler) condition (replace “red/green” by “pinker/bluer”), our findings indicate that perceptual differences remain. This is equally true when the color comparison is performed through an eye movement or during fixation (Experiments 2C and 2D). To our knowledge, this is the first clear empirical invalidation of an explicit prediction of the sensorimotor account of vision (see previous attempt, Bridgeman et al., 2008; and unconvinced response, O'Regan, 2008). 
Previous work appeared to support the original prediction: Color perception can be adapted, to a small degree, in line with the contingency between training patch chromaticities and the direction of eye movements (Bompas & O'Regan, 2006a, 2006b). The validity of these findings, replicated independently (Richters & Eskew, 2009), is not questioned by the present study, but our results do undermine the conclusion that perceived color is primarily determined by mechanisms continuously tuned to sensorimotor dependencies (Hurley & Noe, 2003; Myin & O'Regan, 2002; O'Regan, 2001, 2011; O'Regan & Noe, 2001) or sensory statistics (Clark, 2003; Clark & Skaff, 2009). In sum, although the calibration of perceived color across visual space requires exposure to the environment, our results suggest that unless a change large enough to justify further compensation occurs (like those introduced in goggle experiments), such calibration is mainly limited to an early critical period (before age 2). This does not reject the sensorimotor theory as a whole, but it suggests that, for some aspects of visual experience, the constitution of perceptual content from sensorimotor contingencies is restricted in time. 
It could perhaps be argued that exposure to purple stimuli throughout life is not necessarily sufficient to support learning. Although it is hard to see how such a hypothesis could be rejected, we note that it proved very easy to gather numerous purple objects around our houses, offices, and local shops, and we can safely assume that similar objects populate our observers' environments and have done so throughout their lives. In any case, according to the sensorimotor theory, insufficient exposure would result not only in an inability to recalibrate but also in an inability to reliably perceive purple stimuli at each eccentricity in the first place. Our observers are perfectly able to reliably perceive colors across space (although in a consistently biased way), suggesting they had enough exposure to satisfy this constraint of the sensorimotor theory. 
Relation to Webster et al., 2010
The results of the present work depart from those reported by Webster et al. (2010), despite the large similarities between the two studies. Webster et al. measured perceptual matches to unique and binary hues between central vision and 8° eccentricity and reported no significant deviation from perfect constancy. Indeed, their perceptual matches were best fit with a macular pigment density difference between 0.02 and 0.04, which is 5 to 10 times less than what we report here. Although reduced distortions are expected for unique hues (Parry et al., 2006), they should be maximal for binary (i.e., intermediate) hues. In any case, participants varied enough in their settings that the chromatic angles used filled the entire color circle, so this factor is unlikely to account for the difference in outcome between the two studies. A possible explanation might be that our focus on purple hues made us more likely to observe perceptual biases. However, in Webster et al., purple hues on an isoluminant gray background (i.e., the condition most similar to ours) did not appear to stand out from the pattern for other hues. One difference between the two studies is that our conclusions are mainly based on the results of Experiment 1, which used a 3-D staircase in which participants adjusted not only the hue but also the saturation and the brightness. It is unclear, though, how this would explain the difference in outcome. 
Relation to other phenomena
It is well known that color appearance accommodates complex information such as shadows, transparency, lighting, and object shape (Maloney, Boyaci, & Doerschner, 2005; Purves et al., 2011) and can adapt to new properties of the environment sometimes over days or months (Belmore & Shevell, 2008; Delahunt et al., 2004; McCollough, 1965; Neitz et al., 2002; Schefrin & Werner, 1990; Tseng et al., 2004). For instance, Schefrin and Werner (1990) showed that unique hues judged in central vision are largely invariant with age despite the increasing density of lens pigment, and recent analysis showed that this invariance is greater than predicted from adaptation to the average illumination (Webster et al., 2010). From this point of view, it is surprising that color perception fails to accommodate the relatively simple correspondence between a permanent filter and retinal position despite almost lifelong exposure. This suggests that keeping a system calibrated is constrained, not only by the information available, but also by the cost (negligible here) of any miscalibration. 
The cost of this bias is low, as most of the time we simply do not notice these small color changes. This is consistent with other examples of "change blindness" (O'Regan & Levy-Schoen, 1983; Rensink et al., 1997; Simons & Levin, 1997), in which we do not consciously notice aspects of a scene changing unless we directly attend to them. However, the mere existence of these color changes has important theoretical consequences, as discussed above. Beyond the specifics of the sensorimotor theory, this perceptual bias constitutes a counterexample to the common assumption, illustrated by a vast literature on illusions (Gregory, 1997), that the perceptual quality associated with a particular stimulus (e.g., what shape or color an object appears to have) reflects rules established by lifelong learning of what is meaningful in the world. 
On the other hand, there are many examples in which vision remains uncompensated for an input problem, such as scotomas (or the blind spot), where the brain can at best "fill in" but not truly compensate for the lost visual information (Magnussen et al., 2004). The critical difference in our case is that the information needed for compensation is available, as we showed by modeling cone responses to everyday spectra. The distortion is ever-present in the healthy eye after infancy, ensuring repeated exposure (note that, although color discrimination is poorer in the periphery, this decrease in sensitivity is unrelated to the reported color bias). 
The perceptual nonhomogeneity of colors across the visual field is closer in essence to the well-documented distortions that affect 3-D space perception, such as "illusory" curvature, compression in depth, or vertical-horizontal illusions (Cuijpers, Kappers, & Koenderink, 2003; Wolfe, Maloney, & Tam, 2005). However, while the well-known mechanisms and anatomy of low-level color processes make our example accessible to precise measurement and modeling, spatial perception is much harder to investigate. Moreover, spatial distortions can actually reflect an optimal combination of available evidence (Hogervorst & Eagle, 1998). The present literature investigating color perception from an ideal observer perspective (Brainard et al., 2006; Brainard, Williams, & Hofer, 2008; Geisler & Kersten, 2002) does not suggest that a parallel argument applies in the case of color constancy across the visual field. 
Natural versus artificial spectra and scenes
One hypothesis at the origin of this study was that natural spectra could be less affected by macular pigment than artificial stimuli, which tend to have narrower spectra. From our modeling, it seems that this difference is actually very minor (compare the black dots to the pink dots in Figure 3C). Since natural and artificial spectra are similarly affected by macular pigment, there is no reason to expect that the visual system would be better able to compensate for natural than for artificial spectra, provided these are presented under the same experimental conditions. It is thus unsurprising that we observe robust color changes for real stimuli, in agreement with previous findings (Parry et al., 2012). 
However, the fact that the evidence for perceptual distortion generalizes from artificial lights to natural spectra does not necessarily mean that it generalizes to natural scenes. Indeed, small colored stimuli on a homogeneous neutral background are arguably not representative of the spatial-frequency distribution of natural scenes, and this distribution has been shown to affect color adaptation (Webster, Mizokami, Svec, & Elliott, 2006). We have no evidence that distortions would be measurable in more cluttered scenes. We expect that significant biases would be more difficult to report in such conditions, as surrounding surfaces are likely to be distracting and to induce local contrast effects that would introduce noise into the perceptual judgments. 
The effect of other center-periphery differences
Macular pigment is not the only thing that differs between the center and the periphery of the retina. Most notably, cones become shorter and wider as their density drops dramatically; there are relatively more S cones; there are many more rods; and ganglion cells draw their receptive fields from a larger number of receptors (Curcio et al., 1991; Curcio, Sloan, Kalina, & Hendrickson, 1990; Diller et al., 2004; Marcos, Tornow, Elsner, & Navarro, 1997; Martin, Lee, White, Soloman, & Ruttiger, 2001; Pokorny & Smith, 1976; Solomon, Lee, White, Ruttiger, & Martin, 2005). For the reasons explained below, we are confident that these factors would not alter our conclusions. 
The main effect of rod activity is to dilute the differential activity of the different cone classes, decreasing the chromatic signal and so potentially decreasing perceived saturation, though it could also have minor effects on hue (Buck, Knight, Fowler, & Hunt, 1998; Stabell & Stabell, 1976). This might have been a factor in the computer experiments, which had lower overall luminance than the natural ones. Second, as the number of cones projecting to each midget ganglion cell increases, the proportions of L and M cones contributing to the center and surround may equalize (Crook et al., 2011; Diller et al., 2004; Lennie, Haake, & Williams, 1991; Mullen, Sakurai, & Chu, 2005; Paulus & Kroger-Paulus, 1983; Solomon et al., 2005), which would also decrease the chromatic signal and thus might decrease saturation. Although midget ganglion cells may maintain their single-cone center up to about 10°, larger surrounds could potentially have an impact within this region. Potentially acting in the opposite direction, shorter outer segments have less self-screening, so the effective cone sensitivity functions should be slightly narrower in the periphery, slightly benefiting the chromatic signal. Importantly, however, changes in the chromatic signal should not be expected to translate directly into perceptual saturation differences. There is little indication that perceived saturation actually changes in the region relevant for our study (up to 5°) (Abramov, Gordon, & Chan, 1992). Consistent with this, Experiment 1 revealed no clear pattern of perceived saturation differences between center and periphery: Of our six observers, two reported saturation to be slightly increased, two showed the opposite pattern, and two did not perceive any consistent saturation difference. 
If S-cone wiring remains more specific than that for L and M cones (Solomon et al., 2005; Sun, Smithson, Zaidi, & Lee, 2006), sensitivity along the L-M axis of color space would be reduced more than along the S axis (Mullen & Kingdom, 2002; Mullen et al., 2005). If this sensitivity loss has a direct effect on chromaticity (which depends on the gain functions of the different pathways), it would move the chromaticity values of peripheral stimuli horizontally towards the vertical line intersecting gray in Figure 3B (McKeefry et al., 2007; Parry et al., 2006). This would mean that the lengths of the arrows in Figure 3B have been overestimated to the left of gray and underestimated to the right of it, which would change the exact shape of the curve in Figure 3C. This would not change any of our conclusions, which do not rely on the shape of the curve, but on its being systematic and a good approximation for most everyday objects. That is, the effectiveness of learning relies on the scatter of the points around the curve, i.e., on how well the curve predicts the way each surface will be affected. In sum, although several other factors differ between the fovea and the near-periphery, macular pigment is the main cause of hue shifts, and the other factors would not alter our conclusions. 
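This last point can be stated compactly (a schematic restatement of the Figure 3 caption, with $A_c$ and $A_p$ the central and peripheral hue angles of a stimulus and $\bar{f}$ the mean central-to-peripheral mapping fitted to the learning set):

\[
\Delta^{\text{no-learn}} = A_p - A_c, \qquad \Delta^{\text{learn}} = A_p - \bar{f}(A_c),
\]

and the residual bias achievable through learning is bounded by the scatter of the learning set around $\bar{f}$, for example its root-mean-square deviation $\sqrt{\tfrac{1}{N}\sum_{i}\bigl(A_p(i) - \bar{f}(A_c(i))\bigr)^2}$.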
Conclusion
In conclusion, we found that the effect of macular pigment on the chromaticity of everyday objects is highly systematic and thus predictable. The degree of perceived color change across the visual field is halfway between that expected if such systematicity were learnt by the visual system and that expected from adaptation without learning. Traditionally, the difference between a theoretical "no-learning" case and observed data is taken as evidence for compensatory calibration. However, this makes the implicit assumption that a perceptual bias approximating the no-learning case existed in children, which minimizes the role of plasticity and learning in infants. On the other hand, in order to account for our data while assuming that perception is learned from experience with the world, and thus adapts to macular pigment as it develops, we must draw the counterintuitive conclusion that learning mostly stopped, or dramatically slowed, at about age 1; otherwise, the growing density of macular pigment would never have produced any systematic perceptual bias. From this perspective, our data challenge the widespread assumption that all aspects of perception are continuously learned and updated from infancy through adulthood so as to represent properties of the world rather than distortions of the retinal image. 
Acknowledgments
This research was funded by the ESRC. AB was funded by the Brace charity and the Wellcome Trust while writing this article. We are grateful to Alison Binns for providing the Farnsworth-Munsell 100-hue test and to Tom Freeman, Simon Rushton, Robert Snowden, and Chris Miles for their comments on the manuscript. 
Commercial relationships: none. 
Corresponding author: Aline Bompas. 
Address: CUBRIC – School of Psychology, Cardiff University, Cardiff, Wales, UK. 
References
Abramov I. Gordon J. Chan H. (1992). Color appearance across the retina: Effects of a white surround. Journal of the Optical Society of America A, 9(2), 195–202.
Anstis S. M. Cavanagh P. (1983). A minimum motion technique for judging equiluminance. In Mollon J. D. Sharpe L. T. (Eds.), Colour vision: Physiology and psychophysics (pp. 155–166). London: Academic Press.
Belmore S. C. Shevell S. K. (2008). Very-long-term chromatic adaptation: Test of gain theory and a new method. Vision Neuroscience, 25(3), 411–414.
Bompas A. O'Regan J. K. (2006a). Evidence for a role of action in colour perception. Perception, 35(1), 65–78.
Bompas A. O'Regan J. K. (2006b). More evidence for sensorimotor adaptation in color perception. Journal of Vision, 6(2):5, 145–153, http://www.journalofvision.org/content/6/2/5, doi:10.1167/6.2.5.
Bone R. A. Landrum J. T. Cains A. (1992). Optical-density spectra of the macular pigment in vivo and in vitro. Vision Research, 32(1), 105–110.
Bone R. A. Landrum J. T. Fernandez L. Tarsis S. L. (1988). Analysis of the macular pigment by HPLC: Retinal distribution and age study. Investigative Ophthalmology & Visual Science, 29(6), 843–849, http://www.iovs.org/content/29/6/843.
Brainard D. H. Longere P. Delahunt P. B. Freeman W. T. Kraft J. M. Xiao B. (2006). Bayesian model of human color constancy. Journal of Vision, 6(11):10, 1267–1281, http://www.journalofvision.org/content/6/11/10, doi:10.1167/6.11.10.
Brainard D. H. Williams D. R. Hofer H. (2008). Trichromatic reconstruction from the interleaved cone mosaic: Bayesian model and the color appearance of small spots. Journal of Vision, 8(5):1, 1–23, http://www.journalofvision.org/content/8/5/15, doi:10.1167/8.5.15.
Brenner E. Cornelissen F. W. (2005). A way of selectively degrading color constancy demonstrates the experience dependence of colour vision. Current Biology, 15(21), R864–866.
Bridgeman B. Gaunt J. Plumb E. Quan J. Chiu E. Woods C. (2008). A test of the sensorimotor account of vision and visual perception. Perception, 37(6), 811–814.
Buck S. L. Knight R. Fowler G. Hunt B. (1998). Rod influence on hue-scaling functions. Vision Research, 38(21), 3259–3263.
Byrne A. Hilbert D. R. (2003). Color realism and color science. Behavioral & Brain Sciences, 26(1), 3–63.
Clark J. J. (2003). Ecological considerations support color physicalism. Behavioral & Brain Sciences, 26(1), 24–25.
Clark J. J. Skaff S. (2009). A spectral theory of color perception. Journal of the Optical Society of America A: Optics, Image Science, & Vision, 26(12), 2488–2502.
Crook J. D. Manookin M. B. Packer O. S. Dacey D. M. (2011). Horizontal cell feedback without cone type-selective inhibition mediates "red-green" color opponency in midget ganglion cells of the primate retina. Journal of Neuroscience, 31(5), 1762–1772.
Cuijpers R. H. Kappers A. M. L. Koenderink J. J. (2003). The metrics of visual and haptic space based on parallelity judgements. Journal of Mathematical Psychology, 47, 278–291.
Curcio C. A. Allen K. A. Sloan K. R. Lerea C. L. Hurley J. B. Klock I. B. (1991). Distribution and morphology of human cone photoreceptors stained with anti-blue opsin. Journal of Comparative Neurology, 312(4), 610–624.
Curcio C. A. Sloan K. R. Kalina R. E. Hendrickson A. E. (1990). Human photoreceptor topography. Journal of Comparative Neurology, 292(4), 497–523.
Danilova M. V. Mollon J. D. (2006). The comparison of spatially separated colours. Vision Research, 46(6–7), 823–836.
De Valois R. L. Cottaris N. P. Elfar S. D. Mahon L. E. Wilson J. A. (2000). Some transformations of color information from lateral geniculate nucleus to striate cortex. Proceedings of the National Academy of Sciences of the United States of America, 97(9), 4997–5002.
Delahunt P. B. Brainard D. H. (2004). Does human color constancy incorporate the statistical regularity of natural daylight? Journal of Vision, 4(2):1, 57–81, http://www.journalofvision.org/content/4/2/1, doi:10.1167/4.2.1.
Delahunt P. B. Webster M. A. Ma L. Werner J. S. (2004). Long-term renormalization of chromatic mechanisms following cataract surgery. Vision Neuroscience, 21(3), 301–307.
Diller L. Packer O. S. Verweij J. McMahon M. J. Williams D. R. Dacey D. M. (2004). L and M cone contributions to the midget and parasol ganglion cell receptive fields of macaque monkey retina. Journal of Neuroscience, 24(5), 1079–1088.
Farnsworth D. (1943). The Farnsworth-Munsell 100-hue and dichotomous tests for color vision. Journal of the Optical Society of America, 33(10), 568–578.
Fineman M. B. (1981). Complexity of context and orientation of figure in the corridor illusion. Perceptual & Motor Skills, 53(1), 11–14.
Foster D. H. (2011). Color constancy. Vision Research, 51(7), 674–700.
Geisler W. S. (2011). Contributions of ideal observer theory to vision research. Vision Research, 51, 771–781.
Geisler W. S. Kersten D. (2002). Illusions, perception and Bayes. Nature Neuroscience, 5(6), 508–510.
Gregory R. L. (1997). Knowledge in perception and illusion. Philosophical Transactions of the Royal Society London B: Biological Science, 352(1358), 1121–1127.
Hardin C. L. (1993). Color for philosophers: Unweaving the rainbow, expanded edition. Indianapolis: Hackett Publishing.
Hogervorst M. A. Eagle R. A. (1998). Biases in three-dimensional structure from motion arise from noise in the early visual system. Proceedings of the Royal Society London B, 265, 1587–1593.
Hurley S. Noe A. (2003). Neural plasticity and consciousness. Biology & Philosophy, 18(1), 131–168.
Kohler I. (1962). Experiments with goggles. Scientific American, 206, 62–72.
Landisman C. E. Ts'o D. Y. (2002). Color processing in macaque striate cortex: Electrophysiological properties. Journal of Neurophysiology, 87(6), 3138–3151.
Lennie P. Haake P. W. Williams D. R. (1991). The design of chromatically opponent receptive fields. In Landy M. S. Movshon J. A. (Eds.), Computational models of visual processing (pp. 71–82). Cambridge, MA: MIT Press.
Macleod D. I. A. Boynton R. M. (1979). Chromaticity diagram showing cone excitation by stimuli of equal luminance. Journal of the Optical Society of America, 69(8), 1183–1186.
Magnussen S. Spillmann L. Sturzel F. Werner J. S. (2004). Unveiling the foveal blue scotoma through an afterimage. Vision Research, 44(4), 377–383.
Maloney L. T. Boyaci H. Doerschner K. (2005). Surface color perception as an inverse problem in biological vision. Computational Imaging III, 5674, 15–26.
Marcos S. Tornow R. P. Elsner A. E. Navarro R. (1997). Foveal cone spacing and cone photopigment density difference: Objective measurements in the same subjects. Vision Research, 37(14), 1909–1915.
Martin P. R. Lee B. B. White A. J. R. Soloman S. G. Ruttiger L. (2001). Chromatic sensitivity of ganglion cells in the peripheral primate retina. Nature, 410(6831), 933–936.
Maxwell J. C. (1860). On the theory of compound colours, and the relations of the colours of the spectrum. Philosophical Transactions of the Royal Society, 150, 57–84.
McCollough C. (1965). Color adaptation of edge-detectors in the human visual system. Science, 149(3688), 1115–1116.
McKeefry D. J. Murray I. J. Parry N. R. (2007). Perceived shifts in saturation and hue of chromatic stimuli in the near peripheral retina. Journal of the Optical Society of America A: Optics, Image Science, & Vision, 24(10), 3168–3179.
Mullen K. T. Dumoulin S. O. McMahon K. L. de Zubicaray G. I. Hess R. F. (2007). Selectivity of human retinotopic visual cortex to S-cone-opponent, L/M-cone-opponent and achromatic stimulation. European Journal of Neuroscience, 25(2), 491–502.
Mullen K. T. Kingdom F. A. (2002). Differential distributions of red-green and blue-yellow cone opponency across the visual field. Visual Neuroscience, 19(1), 109–118.
Mullen K. T. Sakurai M. Chu W. (2005). Does L/M cone opponency disappear in human periphery? Perception, 34(8), 951–959.
Myin E. O'Regan J. K. (2002). Perceptual consciousness, access to modality and skill theories: A way to naturalize phenomenology? Journal of Consciousness Studies, 9(1), 27–45.
Neisser U. (1967). Cognitive psychology. New York: Appleton-Century-Crofts.
Neitz J. Carroll J. Yamauchi Y. Neitz M. Williams D. R. (2002). Color perception is mediated by a plastic neural mechanism that is adjustable in adults. Neuron, 35(4), 783–792.
O'Regan J. K. (2001). The 'feel' of seeing: An interview with J. Kevin O'Regan. Trends in Cognitive Sciences, 5(6), 278–279.
O'Regan J. K. (2008). What is 'up'? A reply to Bridgeman et al. Perception, 37, 815.
O'Regan J. K. (2011). Why red doesn't sound like a bell. New York: Oxford University Press.
O'Regan J. K. Levy-Schoen A. (1983). Integrating visual information from successive fixations: Does trans-saccadic fusion exist? Vision Research, 23(8), 765–768.
O'Regan J. K. Noe A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral & Brain Sciences, 24(5), 939–1031.
Palmer S. E. (1999). Color, consciousness, and the isomorphism constraint. Behavioral & Brain Sciences, 22(6), 923–989.
Parry N. R. A. McKeefry D. J. Murray I. J. (2006). Variant and invariant color perception in the near peripheral retina. Journal of the Optical Society of America A: Optics, Image Science, & Vision, 23(7), 1586–1597.
Parry N. R. A. Panorgias A. McKeefry D. J. Murray I. J. (2012). Real-world stimuli show perceived hue shifts in the peripheral visual field. Journal of the Optical Society of America A: Optics, Image Science, & Vision, 29(2), A96–101.
Pascual-Leone A. Amedi A. Fregni F. Merabet L. B. (2005). The plastic human brain cortex. Annual Review of Neuroscience, 28, 377–401.
Paulus W. Kroger-Paulus A. (1983). A new concept of retinal colour coding. Vision Research, 23(5), 529–540.
Pokorny J. Smith V. C. (1976). Effect of field size on red-green colour mixing equations. Journal of the Optical Society of America, 66, 705–708.
Purves D. Wojtach W. T. Lotto R. B. (2011). Understanding vision in wholly empirical terms. Proceedings of the National Academies of Science USA, 108(Suppl. 3), 15588–15595.
Rensink R. A. O'Regan J. K. Clark J. J. (1997). To see or not to see: The need for attention to perceive changes in visual scenes. Psychological Science, 8, 368–373.
Richters D. P. Eskew R. T. Jr. (2009). Quantifying the effect of natural and arbitrary sensorimotor contingencies on chromatic judgments. Journal of Vision, 9(4):27, 1–11, http://www.journalofvision.org/content/9/4/27, doi:10.1167/9.4.27.
Sagi D. (2011). Perceptual learning in Vision Research. Vision Research, 51(13), 1552–1566.
Schefrin B. E. Werner J. S. (1990). Loci of spectral unique hues throughout the life span. Journal of the Optical Society of America A, 7(2), 305–311.
Simons D. J. Levin D. T. (1997). Change blindness. Trends in Cognitive Science, 1(7), 261–267.
Snodderly D. M. Auran J. D. Delori F. C. (1984). The macular pigment. II. Spatial distribution in primate retinas. Investigative Ophthalmology & Visual Science, 25(6), 674–685, http://www.iovs.org/content/25/6/674.
Solomon S. G. Lee B. B. White A. J. Ruttiger L. Martin P. R. (2005). Chromatic organization of ganglion cell receptive fields in the peripheral retina. Journal of Neuroscience, 25(18), 4527–4539.
Stabell B. Stabell U. (1976). Rod and cone contribution to peripheral colour vision. Vision Research, 16(10), 1099–1104.
Stockman A. Sharpe L. T. (2000). The spectral sensitivities of the middle- and long-wavelength-sensitive cones derived from measurements in observers of known genotype. Vision Research, 40(13), 1711–1737.
Sugita Y. (2004). Experience in early infancy is indispensable for color perception. Current Biology, 14(14), 1267–1271.
Sun H. Smithson H. E. Zaidi Q. Lee B. B. (2006). Specificity of cone inputs to macaque retinal ganglion cells. Journal of Neurophysiology, 95(2), 837–849.
Thompson E. Palacios A. Varela F. J. (1992). Ways of coloring: Comparative color vision as a case study for cognitive science. Behavioral & Brain Sciences, 15(1), 1–74.
Tseng C. H. Gobell J. L. Sperling G. (2004). Long-lasting sensitization to a given colour after visual search. Nature, 428(6983), 657–660.
von Helmholtz H. (1866). Concerning the perceptions in general. In Treatise on physiological optics (Vol. III, 3rd ed.) (J. P. C. Southall, Trans.). New York: Optical Society of America.
Webster M. A. (2011). Adaptation and visual coding. Journal of Vision, 11(5):3, 1–23, http://www.journalofvision.org/content/11/5/3, doi:10.1167/11.5.3.
Webster M. A. Halen K. Meyers A. J. Winkler P. Werner J. S. (2010). Colour appearance and compensation in the near periphery. Proceedings of the Royal Society B: Biological Sciences, 277(1689), 1817–1825.
Webster M. A. Leonard D. (2008). Adaptation and perceptual norms in color vision. Journal of the Optical Society of America A: Optics, Image Science, & Vision, 25(11), 2817–2825.
Webster M. A. Mizokami Y. Svec L. A. Elliott S. L. (2006). Neural adjustments to chromatic blur. Spatial Vision, 19(2–4), 111–132.
Wolfe U. Maloney L. T. Tam M. (2005). Distortions of perceived length in the frontoparallel plane: Tests of perspective theories. Perception & Psychophysics, 67(6), 967–979.
Worthey J. A. Brill M. H. (1986). Heuristic analysis of von Kries color constancy. Journal of the Optical Society of America A: Optics, Image Science, & Vision, 3(10), 1708–1712.
Zimmer J. P. Hammond B. R. Jr. (2007). Possible influences of lutein and zeaxanthin on the developing retina. Clinical Ophthalmology, 1(1), 25–35.
Figure 1
 
The metamer problem. (A) The same color can be produced by very different spectra, for example natural surfaces and computer monitors. Natural spectra tend to be much smoother (these examples come from our everyday objects and stimuli in the computer experiments described below). The effect of a filter, such as macular pigment (B) (taken from Bone et al., 1992), on a stimulus depends on its spectrum, and so spectra that produce identical colors in the periphery of vision would no longer produce identical colors in the center (C).
Figure 2
 
Calculating chromaticity shifts between central and peripheral vision. Chromaticity is calculated by convolving an object's spectrum with cone sensitivity functions to obtain the relative activation of each cone class, and then calculating suitable cone activation ratios; follow the solid or dashed lines from spectrum (A) to chromaticity values (D) for central and peripheral vision, respectively. (A) The spectrum of light that would enter the eye from one of our everyday stimuli, a purple plastic object (pink line), and also from a white china plate (gray line), under the same northern daylight. (B) The effective spectral sensitivity functions of our three classes of cone receptor—short-wave (blue), medium-wave (green), and long-wave (red). These functions already include an adjustment for macular pigment (and lens) appropriate for central vision (as can be seen by the slight kink in the blue line). (C) Adjusted spectral sensitivity functions for cones in the periphery; we have removed the filtering effect of macular pigment and then adjusted the relative sensitivity of each cone class so that the white point (here given by the white reference plate) maintains the same chromaticity across the visual field. (D) The chromaticity coordinates of a stimulus are calculated from the relative activation values (S, M, and L) for each cone class (short-, medium-, and long-wave). MB space uses S / (L + M) and L / (L + M) to represent the "yellow-blue" dimension and the "green-red" dimension, respectively. Distance from the white point (gray diamond) represents saturation, while hue is represented by the angle around a circle with white at the center. The white stimulus has the same chromaticity coordinates for central and peripheral vision because we adapted the cones to ensure that this was the case. The chromaticity of the purple stimulus shifts between central and peripheral vision, as marked by the pink arrow (Figure 3B shows equivalent arrows for all the everyday stimuli). For each stimulus, we take the hue angles for central (Ac) and peripheral (Ap) vision as our main measure. These are plotted against each other in Figure 3C.
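To make this pipeline concrete, here is a minimal, self-contained sketch in Python. The cone fundamentals, macular pigment density profile, and object spectra below are toy Gaussian stand-ins (assumptions for illustration only, not the measured spectra or the fundamentals used in the study), and the white normalization is a simplified stand-in for the cone adaptation step described in panel C.

```python
import numpy as np

wl = np.arange(400.0, 701.0, 1.0)                  # wavelength grid (nm)

def gaussian(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Toy cone fundamentals (stand-ins for the measured S, M, L sensitivities).
S_f, M_f, L_f = gaussian(440, 25), gaussian(540, 40), gaussian(570, 45)

# Toy macular pigment: absorbs short wavelengths, present in central vision only.
mp_transmission = 10.0 ** (-0.35 * gaussian(460, 40))

def mb_coordinates(spectrum, fundamentals):
    """Relative cone excitations -> MacLeod-Boynton-style coordinates."""
    S, M, L = (np.sum(spectrum * f) for f in fundamentals)
    return S / (L + M), L / (L + M)

def hue_angle(spectrum, white, fundamentals):
    """Hue angle around the white point, after a simple von Kries-like
    normalization to the white (a stand-in for the adaptation step in panel C)."""
    sb, lr = mb_coordinates(spectrum, fundamentals)
    wb, wr = mb_coordinates(white, fundamentals)
    return np.degrees(np.arctan2(sb / wb - 1.0, lr / wr - 1.0)) % 360.0

# A toy "purple" object and a flat white reference seen under the same light.
purple = gaussian(440, 30) + 0.8 * gaussian(630, 40)
white = np.ones_like(wl)

central = [f * mp_transmission for f in (S_f, M_f, L_f)]   # pigment filters central vision
peripheral = [S_f, M_f, L_f]                                # no macular pigment in the periphery

print("central hue angle:    %.1f deg" % hue_angle(purple, white, central))
print("peripheral hue angle: %.1f deg" % hue_angle(purple, white, peripheral))
```

Because the white reference is filtered in the same way as the object, the normalization removes the overall shift; the residual difference between the two printed hue angles is the kind of center-periphery chromaticity shift marked by the pink arrow in panel D.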
Figure 3
 
Learning is achievable but unachieved. (A) The photograph shows the everyday objects whose spectra were measured, including both natural (fruits, flowers, and vegetables bought in local shops) and man-made objects (paint, plastic, fabric, paper, crayons, and makeup, all unselectively gathered around our houses and offices); we also included samples of the standard Windows color palette (top right) displayed on an LCD monitor. (B) Pink arrows show, in MB color space, the calculated chromaticity shifts due to macular pigment for each object between center and periphery of the visual field (see Figure 2 for method). Black arrows show the calculated chromaticity shift for the four reference stimuli used for the 3-D color-matching task (Experiment 1). The gray diamond marks the background gray/white point, which has the same chromaticity in center and periphery due to independent cone adaptation to the background in center and periphery (see Methods and Figure 2). (C) The relationship between central and peripheral hue (chromaticity angle; see Figure 2D) is highly systematic across everyday spectra (pink dots), as shown by the very small jitter around the smooth average (black curve), which represents the average calibration level we would expect the visual system to achieve through everyday experience. The straight black line represents actual equality between central and peripheral chromaticity angles, and where the pink dots approach it, only saturation and not hue is affected by macular pigment (see lower right of B, which represents pink-reds). Black dots show the four reference hues from Experiment 1. (D) Predicted and observed hue shifts for each of the four peripheral reference stimuli in Experiment 1, expressed as angular deviation in MB space. Black bars depict the hue shifts predicted assuming perfect cone adaptation but no learning; this is the horizontal distance between the black dots and the straight black line in (C). Pink bars depict the hue shift predicted by learning from the everyday spectra shown in (A) and (B); this is the distance between the black dots and the curved line in (C) (the average of the learning set), with 95% confidence intervals estimated by bootstrapping on the learning set. Red bars show the observed hue shifts between the reference stimuli and their perceptual matches in central vision (average of five observers, with standard errors). The observed shifts are halfway between those predicted for adaptation alone and those predicted for learning.
Figure 4
 
Purple objects look bluer in the near-periphery than in the center of the visual field. For each experiment, the observers' psychometric functions are plotted for a constant-stimulus experiment comparing colors in the center and in the periphery. If colors appeared identical in the center and periphery of the visual field, the curves would go through the intersection of the gray lines marking "same" and "50%." That they are biased to the left indicates that the stimuli are perceived to be pinker in the center / bluer in the periphery. Note that in Experiments 2A and 2B, the central stimulus remained constant and the peripheral stimuli varied across trials, while in Experiments 2C and 2D, the peripheral stimulus was constant and the central one varied (hence the change in abscissa; see Methods). (A) Experiment 2A. Natural stimuli were produced by mixing cranberry and red grape juice with milk in various proportions. The seven ticks on the abscissa correspond to the seven test pots presented at the top of the panel. Observers were presented with the reference sample centrally (far right on the picture) followed by one of the seven test pots peripherally and judged whether the peripheral pot was pinker or bluer than the reference. (B) Experiment 2B. Stimuli were the pink-to-blue range of the Farnsworth-Munsell 100 Hue Test caps. The nine ticks correspond to the nine caps presented at the top of the panel. The same procedure was employed as in (A). (C–D) Experiments 2C and 2D. Computer-generated stimuli allowed us to compare fixation (C) and eye movement (D) conditions. A reference hue was briefly presented in the periphery, followed 1 s later by a central stimulus of varying hue. In the fixation condition, the change from center to periphery was obtained by changing the position of the stimulus on the screen (C, top panel), while in the eye movement condition, the stimuli appeared in the same location and the eyes moved (D, top panel). The stimuli on the abscissa range from stimulus 3 on the left ("bluer") to 7 on the right ("pinker"), as described in Table 1.
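For readers unfamiliar with this analysis, here is a minimal sketch of how such a psychometric function yields a bias estimate (the response proportions below are made up for illustration, not our data): a cumulative Gaussian is fitted to the proportion of "pinker" responses and the point of subjective equality (PSE) is read off; a PSE displaced from the physically identical stimulus indicates the perceptual bias.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Made-up example data: stimulus position along the pink-blue axis (0 = physically
# identical to the reference) and the proportion of "pinker" responses.
stimulus = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
p_pinker = np.array([0.05, 0.10, 0.30, 0.65, 0.85, 0.95, 1.00])

def psychometric(x, pse, sigma):
    """Cumulative Gaussian giving the probability of responding 'pinker'."""
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, stimulus, p_pinker, p0=(0.0, 1.0))
print(f"PSE = {pse:.2f} stimulus steps, slope parameter = {sigma:.2f}")
# A PSE shifted away from 0 means that the physically identical pair is not the
# perceptual match, i.e., the curve does not pass through the ("same", 50%) point.
```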
Figure 5
 
Schematic representation of the evolution over time of macular pigment density (A) and of the perceptual bias predicted by four different scenarios (B). The points on the far right represent the results from Figure 3D: the color bias expected with adaptation but no learning (black), the bias observed in adults (red, point J), and the residual bias on computer stimuli predicted after achievable learning from everyday objects (pink). Because the density of macular pigment at birth is very low, all scenarios predict a bias close to zero shortly after birth (point I). With no learning during the lifespan and a hard-wired ability in the infant to compare colors, the perceptual bias would track the density of macular pigment (black line). With learning throughout life based on viewing everyday objects, the bias would stay close to zero; a small bias would emerge for any given object whose filtering by macular pigment differs from the learned average across objects (the pink line represents the computer stimuli from Figure 3D). Both of these scenarios are ruled out by our data because they do not reach point J, the measured bias in adults (from Figure 3D). Plausible scenarios consistent with our data lie between the red dashed and dotted lines (red shaded area). Note that the diagram represents only systematic biases; we do, of course, expect the precision and reliability of color comparisons to increase during infancy and childhood in any scenario, but that is separate from the question of systematic bias and is therefore not represented in the diagram.
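Purely as an illustration of the logic in panel B, the sketch below generates the two extreme predictions under simplified, hypothetical assumptions: in the no-learning scenario the predicted bias simply tracks the current macular pigment density, whereas under continuous learning only a small residual bias remains for stimuli whose spectra differ from the learned average. The growth curve for pigment density, the adult bias level, and the residual value are all placeholders, not fitted values.

```python
# Illustrative sketch only: the two extreme scenarios of Figure 5B under
# hypothetical, simplified assumptions (all numbers are placeholders).
import numpy as np

age = np.linspace(0.0, 30.0, 301)              # age in years

# Placeholder macular pigment density: near zero at birth, adult level by ~2 years.
mp_density = 1.0 / (1.0 + np.exp(-4.0 * (age - 1.0)))

adult_bias = 1.0    # bias predicted for adults with no learning (normalized units)
residual   = 0.1    # residual bias left after full learning (placeholder)

# Scenario 1 - no learning: perceived bias tracks pigment density.
bias_no_learning = adult_bias * mp_density

# Scenario 2 - continuous learning: only the small residual remains,
# appearing as the pigment develops.
bias_full_learning = residual * mp_density

# The bias actually measured in adults (point J) falls between these two
# curves, which is the pattern the red shaded region in Figure 5B depicts.
```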
Table 1
 
MacLeod and Boynton (MLB) coordinates for the natural and artificial stimuli used in each experiment, and shifts in chromaticity angle in degrees (hue shifts) between the central and peripheral stimulus in each condition, as estimated using the modeling described in Figure 2 (after normalization to white and without learning). Notes: Red figures indicate the stimuli used as the reference, either in the center (Experiments 2A and 2B) or in the periphery (Experiments 1, 2C, and 2D). For Experiments 2C and 2D, the figures given are for the first session, which are the only ones that all participants had in common; in sessions two and three, the point of subjective equality (PSE, i.e., the angle in color space at which the foveal patch perceptually matches the peripheral patch) was calculated for each observer from the previous session, and the foveal patches were varied in smaller increments about this value. FM: Farnsworth-Munsell. (An illustrative sketch of how MLB coordinates and hue angles can be computed is given below the table.)
Experiment           Measure          1 (bluer)  2      3      4      5      6      7      8      9 (pinker)
1 (computer)         L/(L+M)          0.680      0.693  0.705  0.716
                     S/(L+M)          0.063      0.064  0.063  0.056
                     Hue-shift (°)    13         18     21     18
2A (smoothies)       L/(L+M)          0.693      0.698  0.703  0.706  0.711  0.716  0.722
                     S/(L+M)          0.026      0.025  0.024  0.024  0.023  0.022  0.021
                     Hue-shift (°)    43         7.6    1.9    0.4    −1.4   −2.8   −4.3
2B (FM caps)         L/(L+M)          0.695      0.696  0.699  0.702  0.705  0.707  0.709  0.713  0.714
                     S/(L+M)          0.028      0.028  0.027  0.026  0.025  0.025  0.024  0.023  0.023
                     Hue-shift (°)    49         31     16     8.3    1.1    −2.5   −4.9   −7.5   −8.9
2C, 2D (computer)    L/(L+M)          0.678      0.684  0.690  0.696  0.702  0.707  0.712  0.717  0.721
                     S/(L+M)          0.060      0.062  0.062  0.062  0.061  0.059  0.056  0.053  0.048
                     Hue-shift (°)    −22        −11    −0.5   8.9    18     26     34     42     50
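To make the quantities in Table 1 concrete, the following sketch shows one way to compute MacLeod-Boynton coordinates from cone excitations and to express a stimulus as a chromaticity angle around the white point, so that a center-periphery difference can be summarized as a hue shift in degrees. The cone values and the white point used here are placeholders; the estimates in Table 1 come from the full model of Figure 2 (cone responses to the stimulus spectra, normalized to white), which this sketch does not reproduce.

```python
# Minimal sketch (placeholder values, not the full model of Figure 2):
# MacLeod-Boynton chromaticities and a hue shift expressed as an angle
# around the white point.
import numpy as np

def macleod_boynton(L, M, S):
    """Return (L/(L+M), S/(L+M)) from cone excitations L, M, S."""
    lum = L + M
    return L / lum, S / lum

def hue_angle(lm, s, white_lm, white_s):
    """Chromaticity angle (degrees) of a stimulus relative to the white point.
    Note that the S axis scaling in MacLeod-Boynton space is conventional,
    so angles depend on the normalization chosen."""
    return np.degrees(np.arctan2(s - white_s, lm - white_lm))

# Hypothetical cone excitations for the same surface seen centrally (through
# macular pigment) and peripherally (little pigment), plus a white reference.
centre = macleod_boynton(L=0.62, M=0.25, S=0.05)
periph = macleod_boynton(L=0.60, M=0.26, S=0.06)
white  = macleod_boynton(L=0.66, M=0.34, S=0.02)

shift = hue_angle(*periph, *white) - hue_angle(*centre, *white)
print(f"Hue shift between periphery and center: {shift:.1f} deg")
```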