**Abstract**
When light is reflected off a surface, there is a linear relation between the three human photoreceptor responses to the incoming light and the three photoreceptor responses to the reflected light. Different colored surfaces have different linear relations. Recently, Philipona and O'Regan (2006) showed that when this relation is singular in a mathematical sense, then the surface is perceived as having a highly nameable color. Furthermore, white light reflected by that surface is perceived as corresponding precisely to one of the four psychophysically measured unique hues. However, Philipona and O'Regan's approach seems unrelated to classical psychophysical models of color constancy. In this paper we make this link. We begin by transforming cone sensors to spectrally sharpened counterparts. In sharp color space, illumination change can be modeled by simple von Kries type scalings of response values within each of the spectrally sharpened response channels. In this space, Philipona and O'Regan's linear relation is captured by a simple Land-type color designator defined by dividing reflected light by incident light. This link between Philipona and O'Regan's theory and Land's notion of color designator gives the model biological plausibility. We then show that Philipona and O'Regan's singular surfaces are surfaces which are very close to activating only one or only two of such newly defined spectrally sharpened sensors, instead of the usual three. Closeness to zero is quantified in a new simplified measure of singularity which is also shown to relate to the chromaticness of colors. As in Philipona and O'Regan's original work, our new theory accounts for a large variety of psychophysical color data.

We write wavelength *λ* as a scalar, surface reflectance *s*(*λ*) as an attenuation between 0 and 1, and write a simple linear relation linking incident light energy *e*(*λ*) at wavelength *λ* to reflected light energy *p*(*λ*) at that wavelength:

*p*(*λ*) = *s*(*λ*) *e*(*λ*),   (Equation 1)

so that the physical reflectance of a surface is simply the ratio of reflected to incident light at each wavelength.

Given an illuminant with spectral power distribution *e*(*λ*), the accessible information about that illuminant is the vector of responses of the three cone types:

( ∫*_ψ* *e*(*λ*) *Q*_1(*λ*) d*λ*, ∫*_ψ* *e*(*λ*) *Q*_2(*λ*) d*λ*, ∫*_ψ* *e*(*λ*) *Q*_3(*λ*) d*λ* )*^t*,

where *^t* denotes the transpose of the vector, *Q_i*(*λ*) for *i* = 1, 2, 3 define the absorption of the three human cone types at each wavelength *λ*, and we integrate over the visible spectrum *ψ*.
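As a minimal numerical sketch of this integral, with Gaussian stand-ins for the measured cone absorptions *Q_i*(*λ*) and a flat illuminant (the sampling grid and all spectra are illustrative assumptions, not the data used in the paper):

```python
import numpy as np

wl = np.arange(400, 701, 10.0)       # sample the visible spectrum psi in nm
dwl = 10.0                           # step size for the rectangular rule

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Gaussian stand-ins for the cone absorption curves Q_i(lambda); real use
# would substitute measured fundamentals (e.g., Smith-Pokorny).
Q = np.stack([gaussian(570, 50),     # L-cone-like
              gaussian(545, 45),     # M-cone-like
              gaussian(445, 30)])    # S-cone-like
e = np.ones_like(wl)                 # a flat (equal-energy) illuminant e(lambda)

# e_i = integral over psi of e(lambda) Q_i(lambda) d(lambda), i = 1, 2, 3
e_vec = (e * Q).sum(axis=1) * dwl    # the 3-vector of cone responses
```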

Philipona and O'Regan (2006) showed that for any surface reflectance *s*(*λ*) there exists a *3×3* matrix *A^s* which is independent of the illuminant *e* and very accurately describes the way the surface transforms the accessible information about any incident light into the accessible information about reflected light:

*p^{s,e}* = *A^s* *w^e*,   (Equation 4)

where *w^e* denotes the cone responses to the incident light and *A^s* is the *3×3* matrix best taking *w^e* (for any illuminant *e*) to *p^{s,e}* in a least-squares sense. Philipona and O'Regan studied the validity of such an equation for a very large number of natural and artificial illuminants, and for a very large number of colored surfaces. In fact, the result is analytically true if the incoming illumination is of dimensionality 3, that is, if it can be described as a weighted sum of three basis functions (Philipona & O'Regan, 2006). Since this is known to be true to a good approximation for daylights (Judd et al., 1964), the equation is very accurate.
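The least-squares construction of *A^s* can be sketched as follows; the Gaussian spectra, the three-dimensional illuminant basis, and the surface reflectance are all synthetic stand-ins for the measured data:

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.arange(400, 701, 10.0)                 # visible spectrum, 10 nm steps

def gaussian(c, w):
    return np.exp(-0.5 * ((wl - c) / w) ** 2)

# Gaussian stand-ins for the three cone absorptions Q_i(lambda).
Q = np.stack([gaussian(570, 50), gaussian(545, 45), gaussian(445, 30)])

# 3-dimensional illuminant model (as holds approximately for daylights):
# each light is a weighted sum of three fixed basis functions.
basis = np.stack([np.ones_like(wl), gaussian(450, 80), gaussian(620, 80)])
lights = rng.uniform(0.2, 1.0, size=(20, 3)) @ basis        # 20 illuminants
s = 0.8 * gaussian(600, 40)                                 # a surface reflectance

# Cone responses to incident light (w^e) and reflected light (p^{s,e});
# integrals approximated by rectangular sums.
W = (lights[:, None, :] * Q[None, :, :]).sum(axis=2)        # (20, 3)
P = ((lights * s)[:, None, :] * Q[None, :, :]).sum(axis=2)  # (20, 3)

# Least-squares fit of the 3x3 matrix A^s with p = A^s w for every light:
# solve min ||W X - P|| and set A^s = X^t.
X, *_ = np.linalg.lstsq(W, P, rcond=None)
A_s = X.T

residual = np.linalg.norm(W @ A_s.T - P) / np.linalg.norm(P)
# residual is ~0 here because the lights are exactly 3-dimensional
```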

Unlike in Equation 1, we cannot simply divide the vector *p^{s,e}* by the vector *w^e* to obtain the biological equivalent of the physicist's reflectance. Philipona and O'Regan were able to do something similar, however, by first diagonalizing the matrix *A^s*, that is, writing it as the product (*T^s*)^{−1} *D^s* *T^s*, where *D^s* is a diagonal matrix and *T^s* is a transformation matrix. In that case Equation 4 becomes

*T^s* *p^{s,e}* = *D^s* *T^s* *w^e*,

so that *T^s* operating on *p^{s,e}* and *w^e* maps these vectors into a basis where the accessible information matrix is diagonal. Because of the linearity of the integrals, the same effect can be achieved if, instead of using the usual L, M, and S cones, we used a set of “virtual” sensors obtained precisely by taking this linear combination *T^s* of the cone responses. The biological reflectance of the surface is then given by the components *r_i^s*, each being the ratio of reflected to incident light within one of the three virtual wavelength bands defined for *i* = 1, 2, 3.
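The diagonalization step can be illustrated with a hypothetical *A^s* (the matrix values below are invented for illustration):

```python
import numpy as np

# Hypothetical 3x3 accessible-information matrix A^s for one surface
# (in practice, fitted by least squares as described in the text).
A_s = np.array([[0.60, 0.05, 0.01],
                [0.08, 0.45, 0.03],
                [0.01, 0.02, 0.20]])

# Diagonalize: A^s = (T^s)^(-1) D^s T^s, with T^s the inverse of the
# eigenvector matrix and D^s the diagonal matrix of eigenvalues.
eigvals, eigvecs = np.linalg.eig(A_s)
T_s = np.linalg.inv(eigvecs)          # rows of T^s define the virtual sensors
D_s = np.diag(eigvals)

# For any incident-light response w^e, the reflected response is p = A^s w^e,
# and in the virtual-sensor basis the relation is diagonal, so dividing
# sharpened reflected by sharpened incident responses recovers D^s:
w_e = np.array([0.9, 0.8, 0.7])
p = A_s @ w_e
r = (T_s @ p) / (T_s @ w_e)           # biological reflectance r_i^s, i = 1, 2, 3
```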

Dividing reflected light by incident light in this way yields what Land called a *color designator*. The difference in Land's approach is that he used LMS responses, hoping that color designators would be approximately independent of illumination. Philipona and O'Regan, on the other hand, used responses of the recomposed virtual sensors defined for each surface by *T^s*.

The transformations *T^s* found by Philipona and O'Regan will typically map the cone sensor functions into virtual sensors which have more concentrated support in certain wavelength regions: they are LMS type sensors but appear spectrally *sharper* than the cones. Because of this property, their associated color designators will more nearly be independent of illumination.

Whereas Philipona and O'Regan compute a different transformation *T^s* for each surface, spectral sharpening seeks a single transformation for all surfaces and lights. One of the main contributions of this paper is to show that we can use a single, carefully chosen, transformation *T* and predict unique hue and color naming data equally well as the Philipona and O'Regan approach, which used a per-surface transformation *T^s*. Thus, and this is a significant improvement over the original work, we need not know the surface we are looking at in order to apply the theory.

The Philipona and O'Regan singularity index *S^{PO}* is large when one or more of the Philipona and O'Regan biological reflectance components are relatively very small. Philipona and O'Regan's hypothesis was that large singularity would correspond to colors that would be likely to be given a focal name in a given culture. Indeed, Philipona and O'Regan showed that this was the case: a strong correlation was found between the *S^{PO}* of Equation 12 and the frequency with which colors in the WCS dataset are considered prototypical in different cultures. Philipona and O'Regan also extended their analysis to the question of unique hues and demonstrated that the singularity index could predict the position of the wavelengths for unique hues found classically in color psychophysics.

In this paper we use a *color designator* defined similarly to Philipona and O'Regan's notion of biological reflectance: the LMS triplet for an unknown surface under unknown light is divided by the response of a white surface (under the same light). In so doing the intent (or hope) is that the light should “cancel” and the color designator should be illuminant independent. However, designators calculated for the original cone sensors are not optimally illuminant independent. Thus the technique of spectral sharpening is used to find a single transform of cone responses with respect to which color designators are as independent of the illuminant as possible. Such sensors have sensitivities that are more narrowly concentrated and less overlapping in the visible spectrum than those of the original cones. Spectrally sharpened color designators are similar to Philipona and O'Regan's notion of biological reflectance, except that a unique transformation is used to create virtual responses, instead of a different transform for each surface.
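The designator computation itself is a one-line von Kries-style division; in this sketch both the sharpening matrix *T* and the LMS triplets are hypothetical placeholders:

```python
import numpy as np

# Hypothetical 3x3 sharpening transform T (a real T would come from the
# optimization over surfaces and illuminants described in the text).
T = np.array([[ 1.8, -1.0,  0.1],
              [-0.7,  1.9, -0.1],
              [ 0.0, -0.1,  1.1]])

lms_surface = np.array([0.32, 0.28, 0.10])   # cone response to an unknown surface
lms_white   = np.array([0.95, 0.90, 0.85])   # cone response to a white surface
                                             # under the same (unknown) light

# Sharpened color designator: sharpen both responses, then divide
# channel by channel (a von Kries-style scaling).
designator = (T @ lms_surface) / (T @ lms_white)
```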

Formally, we seek a single transform *T* such that over all surfaces *s*:

*T* *A^s* *T*^{−1} ≈ *D^s*,   (Equation 13)

which implies

*T* *p^{s,e}* ≈ *D^s* *T* *w^e*.   (Equation 14)

Note that, in contradistinction to Philipona and O'Regan, all surfaces share the same sharpening transform (no dependency of *T* on *s*); only the diagonal matrix *D^s* varies with the surface.

There are several routes to finding such a *T*. In Finlayson et al. (1994a) the starting point for sharpening was exactly Equation 14. There it was shown that if reflectance and illumination are respectively modeled by 2- and 3-dimensional linear models (or the converse), then Equation 14 holds exactly. This is a remarkable result in two respects. First, using the statistical analysis provided by Marimont and Wandell (1992) (who modeled light and reflectance by how they projected to form sensor responses), a 2-dimensional model for illumination and a 3-dimensional model for surface provides a tolerable model of real response data. Second, this result provides a strong theoretical argument for believing that a single sharp transform can be used for all surfaces. Other optimization methods exist for deriving sharp sensors from Equation 14, including data-based sharpening (Finlayson et al., 1994b), tensor-based sharpening (Chong et al., 2007), and sensor-based sharpening (Finlayson et al., 1994b). Figure 1 gives sharp sensors derived using these last three methods together with the Smith-Pokorny cone fundamentals (Smith & Pokorny, 1975).

Consider *n* reflectances viewed under a D65 illuminant, where we map cone responses to sharp counterparts using the *3×3* sharpening matrix *T*. For the *s*th surface we calculate the sharpened response to the reflected light, *T* *p*^{s,D65} (Equation 15), and the sharpened response to the light itself, *T* *w*^{D65} (Equation 16). Dividing Equation 15 by Equation 16 gives the color designator *r^s*, the components of which are:

*r_i^{s,D65}* = (*T* *p*^{s,D65})*_i* / (*T* *w*^{D65})*_i*, for *i* = 1, 2, 3.   (Equation 17)

In Equation 17 the color designator has D65 in the superscript. This is because, although we seek color designators which are illuminant independent, we will not achieve perfect invariance. Rather, as the illuminant varies, so too will the computed designators. To select the sensors giving the best illuminant independence, we will work with each sensor separately; that is, we will minimize each row of the matrix *T* individually (we denote each row as *T_i*).

Let *v_i^{D65}* = [*r_i^{s_1,D65}*, …, *r_i^{s_n,D65}*]*^t* be a vector containing the designators defined in Equation 17 for one of the sensors and a set of surface reflectances under the D65 illuminant, and let *v_i^e* = [*r_i^{s_1,e}*, …, *r_i^{s_n,e}*]*^t* be a vector containing the designators for the same surfaces and the same sensor under another illuminant *e*. The individual terms of both these vectors are the responses of a single sharp sensor divided by the responses of the light. As the illuminant changes, we expect, for the best sharpening transform, that these vectors of designators will be similar to one another. Assuming *m* illuminants *e_1*, …, *e_m*, we seek the transform *T* which minimizes the total difference between the designator vectors under each illuminant and those under D65:

Σ*_{j=1..m}* ‖ *v_i^{e_j}* − *v_i^{D65}* ‖².   (Equation 18)

To find *T* we shall use the Spherical Sampling technique proposed by Finlayson and Susstrunk (2001). This method treats the sharpening problem combinatorially, defining all possible reasonable sharpening transforms. Without recapitulating the detail, their key insight was that only if two sensors are sufficiently different (by a criterion amount) will this difference impact strongly on color computations. Indeed, they argued that for spectral sharpening it suffices to consider only linear combinations of the cones resulting in sensors that are one or more degrees apart. Using this insight, we find there is a discrete number of possible sensors and a discrete number of triplets of sensors. We simply take each of a finite set of sharp sensors and find the red, green, and blue sharp sensors that minimize Equation 18. The minimization was carried out using the WCS reflectances (a subset of 320 Munsell reflectances) and the same set of illuminants as in Philipona and O'Regan's paper (Chiao, Cronin, & Osorio, 2000; Judd et al., 1964; Romero, Garcia-Beltran, & Hernandez-Andres, 1997).
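The candidate-scoring step can be sketched as follows. The random triplets stand in for measured WCS responses, the random matrices stand in for the spherical-sampling candidates, and the sum-of-squared-differences objective is one concrete reading of Equation 18:

```python
import numpy as np

rng = np.random.default_rng(1)
n_surf, n_ill = 320, 10

# Toy stand-ins for cone responses: WCS-like surfaces under D65 and under
# n_ill other lights (real data would come from measured spectra).
surf_d65  = rng.uniform(0.1, 1.0, (n_surf, 3))
white_d65 = rng.uniform(0.8, 1.0, 3)
scales    = rng.uniform(0.5, 1.5, (n_ill, 3))          # illuminant changes
surf_e    = surf_d65[None] * scales[:, None, :]        # (n_ill, n_surf, 3)
surf_e   += 0.01 * rng.standard_normal(surf_e.shape)   # model imperfection
white_e   = white_d65[None] * scales                   # (n_ill, 3)

def designators(T, p, w):
    """Per-channel designators (T p)_i / (T w)_i for every surface."""
    return (p @ T.T) / (w @ T.T)

def score(T):
    """Equation-18-style objective: sum over the m illuminants of the squared
    difference between designator vectors under e_j and under D65."""
    ref = designators(T, surf_d65, white_d65)
    return sum(np.sum((designators(T, surf_e[k], white_e[k]) - ref) ** 2)
               for k in range(n_ill))

# Stand-ins for the finite set of spherical-sampling candidate transforms.
candidates = [np.eye(3) + 0.2 * rng.standard_normal((3, 3)) for _ in range(50)]
best_T = min(candidates, key=score)
```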

We also computed the singularity index *S^{PO}* on the Philipona and O'Regan biological reflectances and on the sharp color designators. These too are correlated (0.9251). While not identical, these high correlations provide prima facie evidence that color designators calculated with respect to a single sharpening transform can be used instead of the per-surface biological reflectance functions proposed by Philipona and O'Regan (which are based on a per-surface sharpening transform).

Let us use *r*, *g*, and *b* to denote the color designators calculated with respect to our sharp sensitivities (rather than *r*_1, *r*_2, and *r*_3). Further, let us begin by considering singularity in each color channel separately. By substituting test values into Equation 20 through Equation 22 we see that each individual equation implements, correctly, a per-channel idea of singularity. As an example, we can see that when *r ≈ 0* while *g* and *b* are *≫ 0*, then *I_2* and *I_3* will be very large. We simply add these three terms together to define our new Compact Singularity Index *S^C* (Equation 23).

*S^C* computes a single measure which is large when the rgb designator has one or two values close to 0. Further, the function is symmetric, with each of *r*, *g*, and *b* playing the same role. That is, unlike the Philipona and O'Regan definition of singularity (see Equations 11 and 12), we need not sort our sensor responses or apply a maximum function.

For an achromatic designator, where *r = g = b*, the numerator will be 0 (note that, since we are dealing with designators, illumination effects have been canceled out). In contrast, for any chromatic surface the numerator will be positive, becoming bigger as we move away from the achromatic axis. Significantly, unlike traditional measures of saturation, our chromaticness measure is unbounded: as the rgb triplet becomes more and more saturated and the individual channel values go toward zero, our measure becomes unboundedly large.
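Equations 20 through 23 are not reproduced in this excerpt; the stand-in below uses squared log-ratios, which is one function with the properties just described (per-channel terms, symmetry in *r*, *g*, *b*, zero for achromatic designators, unbounded growth as a channel approaches zero). The paper's actual equations may differ in detail:

```python
import numpy as np

def compact_singularity(r, g, b):
    """Illustrative stand-in for a Compact Singularity Index S^C:
    per-channel terms I_1, I_2, I_3 built from squared log-ratios.
    Each term is large when one of its two channels is near zero, the
    sum is symmetric in r, g, b, and no sorting or max is needed.
    (Assumed form; not the paper's actual Equations 20-23.)"""
    I1 = np.log(g / b) ** 2
    I2 = np.log(b / r) ** 2
    I3 = np.log(r / g) ** 2
    return I1 + I2 + I3

# Achromatic designator (r = g = b): the measure is exactly 0.
achromatic = compact_singularity(0.5, 0.5, 0.5)     # -> 0.0

# Near-singular red designator (r ~ 0, g and b >> 0): I2 and I3 dominate.
singular_red = compact_singularity(0.001, 0.6, 0.5)
```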

| Dataset | Subjects | Unique yellow, mean (nm) | Unique yellow, range (nm) | Unique green, mean (nm) | Unique green, range (nm) |
| --- | --- | --- | --- | --- | --- |
| Schefrin | 50 | 577 | 568–589 | 509 | 488–536 |
| Jordan-Mollon | 97 | — | — | 512 | 487–557 |
| Volbrecht | 100 | — | — | 522 | 498–555 |
| Webster (a) | 51 | 576 | 572–580 | 544 | 491–565 |
| Webster (b) | 175 | 580 | 575–583 | 540 | 497–566 |
| Webster (c) | 105 | 576 | 571–581 | 539 | 493–567 |
| Philipona and O'Regan's SI prediction | — | 575 | 570–580 | 540 | 510–560 |
| Our model (reflectances) | — | 580 | 570–585 | 555 | 540–565 |
| Our model (sharp sensors) | — | 588 | 585–595 | 536 | 515–545 |

| Dataset | Subjects | Unique blue, mean (nm) | Unique blue, range (nm) | Unique red, mean (nm) | Unique red, range (nm) |
| --- | --- | --- | --- | --- | --- |
| Schefrin | 50 | 480 | 465–495 | — | — |
| Jordan-Mollon | 97 | — | — | — | — |
| Volbrecht | 100 | — | — | — | — |
| Webster (a) | 51 | 477 | 467–485 | EOS | — |
| Webster (b) | 175 | 479 | 474–485 | 605 | 596–700 |
| Webster (c) | 105 | 472 | 431–486 | EOS | — |
| Philipona and O'Regan's SI prediction | — | 465 | 450–480 | 625 | 590–EOS |
| Our model (reflectances) | — | 470 | 460–480 | 615 | 600–EOS |
| Our model (sharp sensors) | — | 464 | 454–470 | 607 | 600–640 |

In the *x–y* projection of the figure we have circled the four local maxima of the plot. We have connected these maxima to the neutral point and extrapolated out to the monochromatic locus, where we predict the unique hues should be. As seen in Table 1, our predictions are very close to Philipona and O'Regan's, and very close to the empirical data. The range of expected variation of the unique hues can be estimated in our approach by taking the range over which our compact singularity index exceeds some threshold. The range shown in the table is obtained using a threshold of 15% of the maximum of each different mountain; it corresponds accurately to the range of unique hues found in the empirical data. However, we should note the existence of the Abney effect: there is some curvature in the lines of perceived hue in the chromaticity diagram. Therefore, our table shows an approximation of the hues.
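The peak-and-range computation can be sketched generically; the singularity profile below is synthetic (four Gaussian bumps at invented wavelengths), and only the 15%-of-peak thresholding logic mirrors the text:

```python
import numpy as np

# Synthetic singularity profile along the monochromatic locus
# (a stand-in for the compact singularity index of the text).
wl = np.arange(400, 701)                      # wavelength in nm
peaks_nm = [470, 520, 575, 620]               # invented, for illustration only
index = sum(np.exp(-0.5 * ((wl - p) / 10.0) ** 2) for p in peaks_nm)

def peak_ranges(wl, v, frac=0.15):
    """Local maxima of v, plus the contiguous interval around each
    where v stays above frac * (that peak's height)."""
    out = []
    for i in range(1, len(v) - 1):
        if v[i - 1] < v[i] >= v[i + 1]:       # local maximum
            thr = frac * v[i]
            lo = i
            while lo > 0 and v[lo - 1] > thr:
                lo -= 1
            hi = i
            while hi < len(v) - 1 and v[hi + 1] > thr:
                hi += 1
            out.append((wl[i], wl[lo], wl[hi]))
    return out

ranges = peak_ranges(wl, index)               # (peak, low, high) per maximum
```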

First, we fit the red-green equilibrium, minimizing *αR − βG*. The optimal values we obtain are *α = 0.56* and *β = 0.77*. Second, with these values of *α* and *β* fixed, we move to the blue-yellow equilibrium, minimizing *δ*(*αR + βG*) − (2*δ*)*γB*. The −(2*δ*)*γ* term is defined in this way so that *δ* governs the opponency and *γ* the amplitude of the blue sensor. In this way, *δ* allows us to adapt the blue-yellow opponency away from the more usual *δ = 1*. Following this approach we obtain *γ = 0.4860* and *δ = 0.6477*; that is, the blue-yellow opponency is defined as 0.6477(*R_c* + *G_c*) − 1.2954 *B_c* (already with the amplitude-corrected sensors *R_c*, *G_c*, *B_c*). The two cancellation curves show, on the one hand, the intensity of a monochromatic yellow light that must be added to a bluish light so that the corresponding stimulus is on the locus defining a unique hue different from yellow or blue, and, on the other hand, the same thing for red and green lights.
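The two-stage fit can be sketched as a pair of least-squares problems; the equilibrium stimuli below are synthetic, generated to be consistent with the reported values, and real fitting would use the measured unique-hue loci:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40

# Stage 1: synthetic red-green equilibrium stimuli, i.e. sharp-sensor
# triplets satisfying 0.56 R ~ 0.77 G (values reported in the text).
true_ab = np.array([0.56, 0.77])
G_eq = rng.uniform(0.2, 1.0, n)
R_eq = (true_ab[1] / true_ab[0]) * G_eq + 0.01 * rng.standard_normal(n)

# Find (alpha, beta), up to scale, with alpha*R - beta*G ~ 0 on the
# equilibria: the smallest right singular vector of the n x 2 matrix [R, -G].
M = np.column_stack([R_eq, -G_eq])
_, _, Vt = np.linalg.svd(M)
alpha, beta = np.abs(Vt[-1])          # sign and overall scale are conventions

# Stage 2: on blue-yellow equilibria, delta*(alpha*R + beta*G) - 2*delta*gamma*B ~ 0.
# Delta cancels at the zero crossings, so gamma follows from a scalar least-squares
# fit of the yellow signal (alpha*R + beta*G, here generated directly) against 2*B;
# delta must then be set from the amplitude of the cancellation data (not shown).
true_gamma = 0.4860
yellow = rng.uniform(0.2, 1.0, n)
B_by = yellow / (2 * true_gamma) + 0.01 * rng.standard_normal(n)
gamma = (2 * B_by) @ yellow / ((2 * B_by) @ (2 * B_by))
```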

The solid lines represent the predictions using the estimations of unique hues shown in Table 1. We can see that our predictions are about as close to the experimental data as those obtained from Philipona and O'Regan's approach. Finally, predictions using the unique hues found by the sharp sensors are shown in Figure 9c.

*Basic color terms: Their universality and evolution*. Berkeley, CA: University of California Press.

*Vision Research*, 24(5), 479–489.

*Vision Research*, 15(10), 1125–1135.

*Journal of the Optical Society of America A: Optics, Image Science, and Vision*, 17(2), 218–224.

*Vision Research*, 39(20), 3444–3458.

*2007 IEEE 11th International Conference on Computer Vision*, 1–6, 2143–2150.

*Journal of the Optical Society of America A: Optics, Image Science, and Vision*, 11(11), 3011–3019.

*Journal of the Optical Society of America A: Optics, Image Science, and Vision*, 11(5), 1553–1563.

*Sechs Mittheilungen an die Kaiserliche Akademie der Wissenschaften in Wien*.

*Journal of the Optical Society of America*, 45(7), 546–552.

*Journal of the Optical Society of America*, 54(8), 1031–1036.

*Cross-Cultural Research*, 39(1), 39–55.

*Proceedings of the National Academy of Sciences of the United States of America*, 100(15), 9085–9089.

*Color Research and Application*, 29(2), 158–162.

*American Scientist*, 52, 247–264.

*Journal of the Optical Society of America A: Optics, Image Science, and Vision*, 9(11), 1905–1913.

*John Dalton's Colour Vision Legacy*, 54, 391–403.

*Current Biology*, 12(6), 483–487.

*Visual Neuroscience*, 23(3–4), 331–339.

*Journal of the Optical Society of America A: Optics, Image Science, and Vision*, 14(5), 1007–1014.

*Vision Research*, 15(2), 161–171.

*Vision Research*, 41(13), 1645–1657.

*Journal of the Optical Society of America A: Optics, Image Science, and Vision*, 17(9), 1545–1555.

*Vision Research*, 45(25–26), 3210–3223.

*Journal of Imaging Science and Technology*, 45(5), 409–417.