In F. Faul and V. Ekroll (2002), we proposed a filter model of perceptual transparency that describes typical color changes caused by optical filters and accurately predicts perceived transparency. Here, we provide a more elaborate analysis of this model: (A) We address the question of how the model parameters can be estimated in a robust way. (B) We show that the parameters of the original model, which are closely related to physical properties, can be transformed into the alternative parameters hue *H*, saturation *S*, transmittance *V*, and clarity *C* that better reflect perceptual dimensions of perceived transparency. (C) We investigate the relation of *H*, *S*, *V*, and *C* to the physical parameters of optical filters and show that *C* is closely related to the refractive index of the filter, whereas *V* and *S* are closely related to its thickness. We also demonstrate that the latter relationship can be used to estimate relative filter thickness from *S* and *V*. (D) We investigate restrictions on *S* that result from properties of color space and determine its distribution under realistic choices of physical parameters. (E) We experimentally determine iso-saturation curves that yield nominal saturation values for filters of different hue such that they appear equally saturated.

*m*(*λ*), 0 ≤ *m*(*λ*) ≤ 1, the filter thickness *x* > 0, and the refractive index *n*(*λ*). The latter is assumed to be a constant function of wavelength, with *n* ≥ 1. The Bouguer–Beer law, *I*_{1}/*I*_{0} = *θ*(*λ*) = exp[−*m*(*λ*)*x*], describes how the inner transmittance *θ*(*λ*) (the ratio of the amount *I*_{1} of light reaching the bottom of the filter to the amount *I*_{0} entering at the top) depends on absorption and thickness. Fresnel's equations describe how the relative amount *k* of light that is specularly reflected at an air–filter interface depends on the angle of the incoming light and the refractive index. For normal incidence as assumed here, *k* = [(*n* − 1) / (*n* + 1)]^{2}.

The total reflectance *r*(*λ*) and total transmittance *t*(*λ*) (i.e., the relative amounts of light leaving the filter after multiple inner reflections at the illuminated and opposite sides, respectively) can be given in closed form: *r*(*λ*) = *k* + *k*(1 − *k*)^{2}*θ*^{2}(*λ*) / [1 − *k*^{2}*θ*^{2}(*λ*)] and *t*(*λ*) = (1 − *k*)^{2}*θ*(*λ*) / [1 − *k*^{2}*θ*^{2}(*λ*)]. If the filter is placed in front of a background with reflectance *a*(*λ*), then the virtual reflectance *p*(*λ*) of the filter surface (i.e., the relative amount of incident light that is reflected from the filter area) can be written in closed form as a function of *a*(*λ*), *r*(*λ*), and *t*(*λ*).

From *p*(*λ*), the cone excitation *P*_{i}, *i* = *L*, *M*, *S*, can be computed in the usual way, that is, *P*_{i} = ∫_{λ} *p*(*λ*)*I*(*λ*)*R*_{i}(*λ*) *dλ*, where *I*(*λ*) is the illumination spectrum and *R*_{i}(*λ*) is the sensitivity spectrum of cone class *i*.
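The quantities defined above can be sketched numerically as follows. This is an illustrative Python translation (the paper itself provides no code for this part); the absorption spectrum is a toy placeholder, not one of the paper's stimuli:

```python
import numpy as np

def inner_transmittance(m, x):
    """Bouguer-Beer law: theta(lambda) = exp(-m(lambda) * x)."""
    return np.exp(-np.asarray(m) * x)

def fresnel_k(n):
    """Direct reflection at normal incidence: k = [(n - 1) / (n + 1)]^2."""
    return ((n - 1.0) / (n + 1.0)) ** 2

def total_reflectance(theta, k):
    """r(lambda) = k + k (1 - k)^2 theta^2 / [1 - k^2 theta^2]."""
    return k + k * (1 - k) ** 2 * theta ** 2 / (1 - k ** 2 * theta ** 2)

def total_transmittance(theta, k):
    """t(lambda) = (1 - k)^2 theta / [1 - k^2 theta^2]."""
    return (1 - k) ** 2 * theta / (1 - k ** 2 * theta ** 2)

wl = np.arange(400, 705, 5)            # wavelengths in nm, steps of 5 nm
m = 0.5 + 0.4 * np.sin(wl / 50.0)      # toy absorption spectrum in [0.1, 0.9]
theta = inner_transmittance(m, x=1.0)  # inner transmittance for thickness 1
k = fresnel_k(1.5)                     # n = 1.5 gives k = 0.04
r = total_reflectance(theta, k)
t = total_transmittance(theta, k)
```

Note that for *n* = 1 (so *k* = 0), the total transmittance reduces to the inner transmittance, as expected from the formulas.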

*A* and *B* denote the color codes of the bipartite background region, and *P* and *Q* denote the color codes of the same regions viewed through the filter (see Equations 17 and 18 in Faul & Ekroll, 2002). In the following, we will always assume that the color codes are cone excitation values, where the index indicates one of *L*, *M*, *S*.

*A*, *B*, *P*, *Q* are the observables that provide the basis for inferring transparency. The remaining variables are parameters of the model that are not *directly* given in the input: *I* is the color of the illumination, *τ* is the vector of transmittance factors, and *δ* is a factor related to the amount of direct reflection. A comparison of the model equations with a simplified version of the image generation model (see Faul & Ekroll, 2002, p. 1086) suggests that *τ* roughly represents the squared total transmittance *t*^{2}(*λ*) and that *δ* is related to the direct reflection factor *k*. This motivates the parameter restrictions 0 ≤ *τ*_{i} ≤ 1 and *δ* ≥ 0. The remaining parameter *μ* controls the ratio of first-order directly reflected light, which is reflected from the top surface of the filter, to higher order contributions that traveled through the filter and are, thus, affected by its transmissive properties. Here, it mainly has a technical meaning and is used to distinguish between different submodels. The model in which *μ* = 1 will, in the following, be called the **full model** (see Equations 17 and 18 in Faul & Ekroll, 2002).

*I* and/or by setting different values for *μ*. For all models of this class, it holds that *τ*_{i} = (*P*_{i} − *Q*_{i}) / (*A*_{i} − *B*_{i}). Computing *δ*, in contrast, requires knowledge about the illumination. For the general model, we have *δ* = (*P*_{i} − *τ*_{i}*A*_{i}) / [(*τ*_{i} + *μ*)*I*_{i}]. For *μ* = 0, that is, for the **reduced model**, the equation for *δ* simplifies to *δ* = (*P*_{i} − *τ*_{i}*A*_{i}) / (*τ*_{i}*I*_{i}).

First, the formulas for *τ*_{i} and *δ* have singularities at *A*_{i} = *B*_{i} and *P*_{i} = *Q*_{i}, respectively. Thus, even if the model describes the stimulus exactly, the computation may be undefined or may at least become rather unstable if the background or the filtered colors are similar within one or more color channels. Second, the computations do not consider the presence of noise that must be assumed under realistic viewing conditions. Third, the computation uses only local information between four colors (for example, at *X*-junctions) and ignores more global information that may improve the accuracy of the computation in more complex stimuli. This local computation also requires the exact localization and assignment of the four related colors, which may be difficult to achieve if the background has a fine or complex texture. The latter aspect is especially problematic because, due to refraction effects, exact alignment of contours on both sides of the border of an optical filter cannot usually be expected.

*Case 1: N = 2 and I = mean(background)*. We first consider the special case of two background colors and *I* = (*A* + *B*) / 2, because it is especially easy and highlights some important regularities: Adding Equations 6 and 7 for the two filtered colors *P* and *Q* yields *P*_{i} + *Q*_{i} = *τ*_{i}(*A*_{i} + *B*_{i} + *δ*(*A*_{i} + *B*_{i})). A simple transformation of this equation leads to *τ*_{i}(1 + *δ*) = (*P*_{i} + *Q*_{i}) / (*A*_{i} + *B*_{i}). We define *R*_{i} := (*P*_{i} + *Q*_{i}) / (*A*_{i} + *B*_{i}) and observe that this quantity—a ratio of means—can be computed in a robust way: The only (practically irrelevant) singularity occurs at *A*_{i} = *B*_{i} = 0, that is, if the background is completely black. We further define *γ* := 1 / (1 + *δ*). This provides a convenient reparameterization that maps the infinite parameter range [0, ∞] of the “direct reflection factor” *δ* onto the finite interval [0, 1], where *γ* = 1 for *δ* = 0 and where *γ* approaches zero for *δ* → ∞. As will be discussed in the Remapping the model parameters to a phenomenological space section, *γ* bears a close relationship to the perceived clarity of the filter.

With these definitions, it follows that *γR*_{i} = *τ*_{i}. To estimate *τ* in a robust way, we first compute *τ*_{i} = (*P*_{i} − *Q*_{i}) / (*A*_{i} − *B*_{i}) for the channel *i* in which the contrast ∣*A*_{i} − *B*_{i}∣ is maximal, then compute *γ* = *τ*_{i}/*R*_{i}, and finally set *τ* = *γR*. Figure 3 gives a visual representation of the solution using the above definitions. Note that *γ* and *R*_{i} are not independent: *R*_{i} > 1 (a brightening effect) implies *γ* < 1.
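In code, the Case 1 computation might look as follows. This is a sketch with made-up numbers; the colors are generated from known parameters of the reduced model (our reading of Equations 6 and 7 as *P* = *τ*(*A* + *δI*)) so that the estimate can be checked:

```python
import numpy as np

def estimate_case1(A, B, P, Q):
    """Robust Case 1 estimate from two background colors A, B and the
    corresponding filtered colors P, Q (length-3 vectors for L, M, S)."""
    A, B, P, Q = map(np.asarray, (A, B, P, Q))
    R = (P + Q) / (A + B)                 # robust ratio of means
    i = np.argmax(np.abs(A - B))          # channel with maximal contrast
    tau_i = (P[i] - Q[i]) / (A[i] - B[i])
    gamma = tau_i / R[i]                  # gamma = 1 / (1 + delta)
    tau = gamma * R
    delta = 1.0 / gamma - 1.0
    return tau, gamma, delta

# Illustrative colors generated from known parameters
tau_true = np.array([0.8, 0.5, 0.3])
delta_true = 0.25
A = np.array([0.6, 0.4, 0.2])
B = np.array([0.2, 0.3, 0.5])
I = (A + B) / 2                           # illumination estimate of Case 1
P = tau_true * (A + delta_true * I)       # filtered colors of the model
Q = tau_true * (B + delta_true * I)
tau, gamma, delta = estimate_case1(A, B, P, Q)
```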

*Case 2: N ≥ 2 colors and I = mean(background)*. If *N* ≥ 2 background/filter color pairs (*A*^{j}, *P*^{j}) are given, then under the assumption *I*_{i} = mean(*A*_{i}) = (1/*N*) ∑_{j} *A*_{i}^{j}, we get, analogous to Case 1, *R*_{i} := mean(*P*_{i})/mean(*A*_{i}), so again *γR* = *τ*. To compute *τ* directly, we use *τ*_{i} = sd(*P*_{i})/sd(*A*_{i}), where sd(*X*_{i}) denotes the standard deviation of the *N* colors *X*^{j} in channel *i*. It is easy to see that this is actually a solution for *τ*: Since *P*_{i}^{j} = *τ*_{i}*A*_{i}^{j} + *τ*_{i}*δI*_{i} with *C*_{i} := *τ*_{i}*δI*_{i} constant, we have var(*P*_{i}) = var(*τ*_{i}*A*_{i} + *C*_{i}) = *τ*_{i}^{2} var(*A*_{i}) and, thus, *τ*_{i} = sd(*P*_{i})/sd(*A*_{i}). This method uses information from all available colors and, thus, provides a more robust solution than computations based on (random) subsets of these colors.
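The standard-deviation argument above translates directly into code. A sketch under the same assumptions (reduced model, *I* = mean of the background; helper names are ours):

```python
import numpy as np

def estimate_case2(A, P):
    """Case 2: A and P are N x 3 arrays of background and filtered cone
    excitations; assumes I = mean(background) and the reduced model."""
    A, P = np.asarray(A), np.asarray(P)
    tau = P.std(axis=0) / A.std(axis=0)    # from var(P) = tau^2 var(A)
    R = P.mean(axis=0) / A.mean(axis=0)    # ratio of means, gamma * R = tau
    i = np.argmax(A.std(axis=0))           # channel with maximal variation
    gamma = tau[i] / R[i]
    delta = 1.0 / gamma - 1.0
    return tau, delta

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 0.9, size=(10, 3))    # N = 10 random background colors
tau_true = np.array([0.7, 0.5, 0.2])
delta_true = 0.4
I = A.mean(axis=0)
P = tau_true * (A + delta_true * I)        # reduced-model filtered colors
tau, delta = estimate_case2(A, P)
```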

This direct computation is undefined if sd(*A*_{i}) = 0, that is, *τ*_{i} cannot be computed if the background does not vary in color channel *i*. To compute *τ* in a robust way, we may use a similar strategy as above, that is, we first compute *τ*_{i} = sd(*P*_{i})/sd(*A*_{i}) for the channel *i* in which sd(*A*_{i}) is maximal and then proceed in exactly the same way as before.

*Case 3: The general case*. We now consider the case of *N* ≥ 2 background/filter color pairs (*A*^{j}, *P*^{j}) and the general model given in Equations 2 and 3, with no special assumptions about how the illumination *I* is estimated from the input. A direct method to compute *τ* is again *τ*_{i} = sd(*P*_{i})/sd(*A*_{i}). A different, indirect way is *τ*_{i} = (mean(*P*_{i}) − *μδI*_{i}) / (mean(*A*_{i}) + *δI*_{i}). The latter method obviously requires that *δ* and *I* are known, but it has the advantage that the computation is more robust. To compute *τ* using this method, we again start by computing *τ*_{i} = sd(*P*_{i})/sd(*A*_{i}) for the channel *i* in which sd(*A*_{i}) is maximal. Using this value, we can compute *δ* = *u*_{i}/*v*_{i}, where *u*_{i} := mean(*P*_{i}) − *τ*_{i} mean(*A*_{i}) and *v*_{i} := (*τ*_{i} + *μ*)*I*_{i}. With *δ* known, the indirect method is applied to compute *τ* for the other color channels.

A more robust alternative is to compute estimates *δ*_{i} in *all* channels where *τ*_{i} is defined (i.e., where sd(*A*_{i}) > 0) and to integrate them into a single estimate *δ*. In principle, any integration function *f* may be used, because each choice gives the correct solution if the model describes the stimulus exactly. Thus, the choice of *f* only matters if the stimulus deviates from the model, and in this case, it influences the weighting of the model deviations across channels. Two integration functions have been found to work especially well, namely, *f*_{M} := mean(*δ*_{i}) and *f*_{L} := (∑_{i} *v*_{i}*u*_{i})/(∑_{i} *v*_{i}^{2}), with *u*_{i} and *v*_{i} as defined above. Simulations have shown that the model parameters estimated using these integration functions are very similar to those obtained with a least squares fit, as described in the next paragraph.
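A sketch of the general Case 3 procedure with the mean integration function *f*_{M} (the illumination *I* is assumed to be known here; the helper names are illustrative, not the authors' code):

```python
import numpy as np

def estimate_case3(A, P, I, mu=1.0):
    """General Case 3 with the integration function f_M := mean(delta_i).
    A, P are N x 3 arrays; I is the (assumed known) illumination color."""
    A, P, I = np.asarray(A), np.asarray(P), np.asarray(I)
    tau_direct = P.std(axis=0) / A.std(axis=0)          # direct estimates
    u = P.mean(axis=0) - tau_direct * A.mean(axis=0)    # u_i
    v = (tau_direct + mu) * I                           # v_i
    delta = np.mean(u / v)                              # f_M over all channels
    # indirect (more robust) tau using the integrated delta
    tau = (P.mean(axis=0) - mu * delta * I) / (A.mean(axis=0) + delta * I)
    return tau, delta

rng = np.random.default_rng(2)
A = rng.uniform(0.1, 0.9, size=(10, 3))
tau_true = np.array([0.6, 0.4, 0.3])
delta_true, mu = 0.3, 1.0
I = np.array([0.5, 0.5, 0.5])
P = tau_true * A + (tau_true + mu) * delta_true * I     # general-model colors
tau, delta = estimate_case3(A, P, I, mu)
```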

*Iterative procedures*. Iterative (fit) procedures may also be used to estimate the parameters of a complex stimulus in a robust way. A simple method is to find the parameters *τ* and *δ* that minimize the loss function *E* = ∑_{j=1}^{N} ∑_{i=1}^{3} (*P̂*_{i}^{j} − *P*_{i}^{j})^{2}/*w*_{i} under the constraints 0 ≤ *τ*_{i} ≤ 1 and *δ* ≥ 0. Here, *P̂*^{j} is the model prediction for the observed *P*^{j}, and *w*_{i} is a factor that compensates for different scales in the color channels (*w* may, for example, be chosen to be the cone excitations elicited by a constant light spectrum).
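A minimal, NumPy-only sketch of such a constrained fit for the reduced model (*μ* = 0): for any fixed *δ*, the loss is quadratic in each *τ*_{i}, so the optimal *τ*_{i} has a closed form and a one-dimensional scan over *δ* suffices. The grid and helper names are our own choices, not the authors' implementation:

```python
import numpy as np

def fit_filter_params(A, P, I, w, deltas=np.linspace(0.0, 5.0, 2001)):
    """Minimize E = sum_j sum_i (P_hat - P)^2 / w_i subject to
    0 <= tau_i <= 1 and delta >= 0, for the reduced model
    P_hat = tau * (A + delta * I)."""
    A, P = np.asarray(A), np.asarray(P)
    best_E, best_tau, best_delta = np.inf, None, None
    for delta in deltas:
        X = A + delta * I                              # model predictor
        tau = np.clip((X * P).sum(0) / (X * X).sum(0), 0.0, 1.0)
        E = np.sum((tau * X - P) ** 2 / w)             # weighted loss
        if E < best_E:
            best_E, best_tau, best_delta = E, tau, delta
    return best_tau, best_delta

rng = np.random.default_rng(3)
A = rng.uniform(0.1, 0.9, size=(10, 3))
I = A.mean(axis=0)
tau_true = np.array([0.7, 0.4, 0.2])
delta_true = 0.3
P = tau_true * (A + delta_true * I)                    # noise-free colors
w = np.ones(3)                                         # equal channel weights
tau, delta = fit_filter_params(A, P, I, w)
```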

Local estimation procedures use only the information available in the immediate neighborhood of an *X*-junction, whereas global procedures may also use information from more distant parts of the retinal image. The statistical estimation that underlies “global estimates” can be done in a spatial or a temporal manner. In the first case, the integration would be across responses of neighboring receptors getting input from different areas of the background and the filter region. In the latter case, the integration is across the responses of single receptors, for example, during a random walk across the stimulus caused by ocular tremor while fixating the stimulus.

A global procedure of this kind yields a single estimate of *I*, *τ*, and *δ*. Applying instead Case 1 locally at all *m* different *X*-junctions of the stimulus would yield *m* estimates of *I*, *τ*, and *δ*, and an additional mechanism would be required to integrate these local estimates.

Such local estimates may be mutually inconsistent, except for *τ* if computed with the “direct procedure,” because this computation is illumination invariant. This inconsistency problem does not occur, however, if we estimate the illumination globally and then use the procedure described above as “general case” for the two background/filter color pairs at each *X*-junction.

The global method uses only statistics of *A*_{i} and *P*_{i}. Thus, it also works if there are nonmatching pairs of colors (for example, those belonging to regions that are completely covered by the filter or are only visible in plain view) or if it is difficult to assign corresponding *A*_{i}, *P*_{i} pairs (for example, in complex backgrounds with a fine structure). The global method is, in general, also more robust, because the parameter estimation is based on more input. However, the advantages of the global approach only apply if the actual situation does not change. That is, the filter properties, the illumination, and the background texture must be approximately constant within the integration region. It seems, therefore, advantageous to use adaptive areas of integration and/or—depending on the structure of the stimulus—different estimation strategies. Such adaptive estimation strategies might, for example, explain why contrast polarity at an *X*-junction, which has been found to be a very strong constraint in simple stimuli with large areas and few *X*-junctions, is considerably less important and may even be ignored if the structure of the background gets more complex (see Figure 19 in Singh & Anderson, 2002).

If (*A*_{i}^{c}, *P*_{i}^{c}) denote pairs of cone excitations outside and inside the optical filter at the same position along a putative filter contour, then the criterion corr(*A*^{c}, *P*^{c}) > *t*, where 0 ≪ *t* < 1 is a threshold value, may be used to decide whether the contour is actually the border of a transparent overlay or not.
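This criterion is easy to state in code. In the sketch below, the threshold *t* = 0.9 and the sample values are our own illustrative choices. Along a true filter border, the filtered excitations are an increasing affine function of the background excitations, so the correlation is close to 1, whereas a contrast-polarity-reversing edge yields a negative correlation and is rejected:

```python
import numpy as np

def is_transparent_border(A_c, P_c, t=0.9):
    """Criterion corr(A_c, P_c) > t along a putative filter contour."""
    A_c, P_c = np.asarray(A_c, float), np.asarray(P_c, float)
    return np.corrcoef(A_c, P_c)[0, 1] > t

rng = np.random.default_rng(4)
A_c = rng.uniform(0.1, 0.9, size=50)   # excitations just outside the contour
P_c = 0.6 * A_c + 0.05                 # filtered side: affine in A_c
reversed_c = 1.0 - A_c                 # polarity-reversed edge (not a filter)
```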

*x* = 1, refractive index *n* = 1.5, and random choices of absorption and reflectance spectra (frequency limited with *ω* = 1/150 cycles/nm and *ω* = 1/75 cycles/nm, respectively; see 1). The absorption spectrum was normalized to the range [0.1, 0.9] to obtain saturated filters, and the reflectance spectra were scaled with random numbers between 0 and 1 to increase the variance in albedo. All spectra were defined in the wavelength range from 400 to 700 nm in steps of 5 nm. The relative errors in region *j* = 1…10 of each stimulus are computed as *ɛ*_{i}^{j} = ∣*P̂*_{i}^{j} − *P*_{i}^{j}∣ / *P*_{i}^{j}, where *P*_{i}^{j} and *P̂*_{i}^{j} denote the *i*th color coordinate in filter region *j* resulting from the simulation and from using the model with the estimated parameters, respectively.

*S*_{sim}). The procedure used was essentially the same as in the simulation described in the previous section, with the exception that the thickness *x* and the refractive index *n* of the optical filter were no longer fixed but chosen randomly from the intervals [0.8, 1.8] and [1, 2], respectively. We then applied the procedure described in the Estimation procedures section [Case 3, with *I* = mean(background) and *f* = *f*_{M}] to estimate the parameters of the full and reduced filter models. These parameters were then used to calculate adjusted filtered colors in two similar stimuli *S*_{F} and *S*_{R}, conforming exactly to the full and reduced models, respectively. To avoid a large number of indistinguishable stimulus pairs, we selected 150 cases using the criterion that max(∣*X*_{i} − *X̂*_{i}∣/*X*_{i}) ≥ 0.25, where *X*_{i} and *X̂*_{i} are corresponding coordinates of the filtered colors in *S*_{sim} and *S*_{R}, respectively. The pairs (*S*_{sim}, *S*_{R}) and (*S*_{sim}, *S*_{F}) were presented in separate trials. Thus, the subjects had to judge a total of 300 pairs.

*S*_{sim} was displayed in the top or bottom row with equal probability. The homogeneous background was set to the mean of the background colors in the stimuli. The square background of the stimuli had a side length of 8.7 cm (6.2°), and the diameter of the circular filter region was 5 cm (3.6°). The horizontal center-to-center distance of the background rectangles was 10.5 cm (7.5°), and that of the circular filter regions was 9.5 cm (6.8°). The subjects viewed the stereo pairs from a distance of 80 cm through a mirror stereoscope (SA200 Screenscope Pro).

(*χ*^{2} = 70.58, *p* ≪ 0.01). Second, if the two stimuli were perceived as different, then the model-conforming stimuli were clearly preferred over the simulated stimuli. According to a binomial test, this preference was significantly different from chance (*p* ≪ 0.01) in both the “reduced” and “full” conditions. Third, the preference for model-conforming stimuli was much higher in the “reduced” than in the “full” condition. A *χ*^{2}-test indicates that this difference between conditions was also significant (*χ*^{2} = 7.16, *p* = 0.007).

*τ* corresponds to the squared transmittance of the filter, whereas *δ* correlates with the amount of direct reflection of incoming light at air–filter interfaces, which in the physical model is described by Fresnel's equations.

*τ* represents the hue, saturation, and overall degree of transmittance of the filter. A filter with *τ* = (1, 0, 0), for example, could be called a “saturated red” filter in the sense that it transmits predominantly “long-wavelength light” and that a white surface seen through this filter would appear in a saturated red color. A filter with *τ* = (1, 0.8, 0.8) could analogously be called a “desaturated red” filter because a white surface seen through this filter would appear in a desaturated red. By the same logic, a filter with *τ* = (1, 1, 1) would be called white, but in this case, it is more common (and more appropriate) to say that it is completely clear. These examples illustrate that there is a close relationship between *τ* and color codes attributed to surfaces, but it should also be obvious that *τ*, although it describes chromatic attributes of the filter, is itself *not* a color code.

*τ* parameter should represent the above-mentioned perceptual dimensions in a more intuitive manner. The structural similarity to color codes, on the one hand, and the parameter restrictions on *τ*_{i}, on the other hand, suggest using a transformation of a normalized RGB color space into a perceptually based space, where *R*, *G*, *B* are replaced with *τ*_{L}, *τ*_{M}, *τ*_{S}. For this purpose, we decided to use the HSV space proposed by Smith (1978). The transformation from *τ* to HSV (see 2 for MATLAB code) maps the transmittance vector *τ* to *H*, *S*, *V* values, which all lie in the range [0, 1]. Here, “hue,” “saturation,” and “value” are understood as mental representations of filter properties and must not be confused with descriptions of the perceived color in the filter region, which also depends on other factors such as the background seen through the filter and the prevailing illumination. The single case where these two concepts virtually coincide is a filter in front of a white background. Obviously, the concept of hue, saturation, and value of a filter is rather complex, and an important aim of the present work was to investigate whether it is useful at all.
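As an illustration of this remapping, the same transformation can be obtained with Python's standard `colorsys` module, which implements Smith's hexcone HSV model with all values in [0, 1]; substituting *τ*_{L}, *τ*_{M}, *τ*_{S} for *R*, *G*, *B* is our reading of the text, not the paper's MATLAB code:

```python
import colorsys

def tau_to_hsv(tau):
    """Map a transmittance vector (tau_L, tau_M, tau_S) to (H, S, V)."""
    tau_L, tau_M, tau_S = tau
    return colorsys.rgb_to_hsv(tau_L, tau_M, tau_S)

# "Saturated red" filter: transmits predominantly long-wavelength light
H1, S1, V1 = tau_to_hsv((1.0, 0.0, 0.0))   # -> (0.0, 1.0, 1.0)
# "Desaturated red" filter
H2, S2, V2 = tau_to_hsv((1.0, 0.8, 0.8))
# Completely clear filter: zero saturation, full transmittance
H3, S3, V3 = tau_to_hsv((1.0, 1.0, 1.0))
```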

*H* represents the “hue of the filter.” A rough physical correlate is the dominant wavelength of the light transmitted by an optical filter. The value *H* = 0 is taken to be red. With increasing *H*, the hues take on the colors of the light spectrum from red, over yellow, green, cyan, blue, and magenta, to red again. Thus, *H* = 1 denotes the same hue as *H* = 0. Just as in most color spaces, the hue value has no metrical meaning, that is, the numerical distance between two hue values says nothing about the perceptual difference of these hues. Figure 7 shows a simulated filter, which changes hue in equal steps of *H*.

*S* describes the “saturation” of the filter. Its meaning can best be understood with reference to the physical correlate: A saturation of zero means a completely achromatic filter, which transmits light of all wavelengths equally, and increasing saturation means increasing wavelength selectivity of transmittance. A “saturated red filter” thus means a filter with high selectivity for long-wavelength light. Conceptually, this meaning must be distinguished from the usual use of the term saturation in a color space, where *S* = 0 indicates a gray color and where increasing the saturation value means decreasing the “gray content” of the color.

If *H* and *V* are kept constant (and *V* is nonzero), then the *S* dimension is at least an ordinal scale, that is, perceived filter saturation increases monotonically with *S*. Preliminary observations suggest that this scale may even be approximately linear in *S*. Comparisons of *S* values across different hues are more problematic. This problem is discussed in more detail in the Restrictions on layer saturation and Iso-saturation curves sections. Figure 8 shows a simulated red filter that changes its saturation in equal *S* steps.

The transmittance value *V* lies between *V* = 0 (“no transmittance”) and *V* = 1 (“full transmittance”). In the HSV model, *V* = max(*τ*_{L}, *τ*_{M}, *τ*_{S}). If the additive component due to direct reflection is zero (i.e., *δ* = 0), then a filter with *V* = 1 is actually invisible if the filter is also achromatic (*S* = 0), and the color of filters with *S* > 0 is of high purity. If *V* decreases below 1, then the filter appears increasingly darker and, in the limit, completely black. Figure 9 shows a simulated achromatic filter with varying transmittance value *V*.

We have so far not specified how *δ* influences the perceived properties of the filter. This is not without reason, because the perceptual effect depends both on *δ* and on the nature of the illumination *I*. In the present context, we restrict ourselves to the special case of diffuse uniform illuminations. Under natural viewing conditions, this would be approximately correct if the filter is indirectly illuminated by a clear sky or by a uniformly colored room. In this case, the light that is directly reflected from the filter's top surface is uniform and adds to the spatially structured light reflected from the background. With increasing *δ*, the filter looks less clear and instead appears increasingly hazy (see Figure 10). This motivates using the term “filter clarity” to describe the corresponding impressions. With nonuniform illuminations, the direct reflection is, in general, a distorted image of the surround, and the filter surface appears more or less glossy (thus, with respect to this case, it would be more appropriate to use the term “filter glossiness”). These perceptual effects can be appreciated in Figure 2.

Regarding the clarity parameter *C* proposed here, it is interesting to note that it is closely related to a measure found by Singh and Anderson (2002) to determine the perceived properties of the transparent layer in achromatic stimuli. They simulated two transparent layers in front of identical sinusoidal gratings by changing the mean and contrast of the sinusoidal grating inside a central “filter region.” In the standard stimulus, the mean and contrast of the “filter” were fixed. The mean of the comparison “filter” was systematically varied, and for each specific value, the task of the subjects was to match the perceived transmittance of standard and comparison filters by adjusting the contrast of the latter. The results revealed that the subjects always chose a setting that made the ratio of the Michelson contrast in the filter region to the Michelson contrast in the background equal in both stimuli. This finding is rather difficult to explain in the framework of the episcotister model to which the authors refer and was even presented by them as an argument against physically inspired models. In the present framework, however, it can easily be understood by assuming that their subjects matched the perceived clarity of the filter, because the clarity parameter is closely related to the ratio of the Michelson contrast in the filtered region to the Michelson contrast in the background. In the reduced model, it is even identical to this measure. This can be seen if the formula for *τ*_{i} is inserted in Equation 10 to yield *C* = 1 / (1 + *δ*) = (*A*_{i} + *B*_{i})(*P*_{i} − *Q*_{i}) / [(*A*_{i} − *B*_{i})(*P*_{i} + *Q*_{i})]. That the subjects actually matched the clarity of the perceived transparent layer instead of its “transmittance,” as requested, is also suggested by the fact that the means of the two “filters” (which determine their perceived darkness) were different.
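This identity is easy to verify numerically for the reduced model (the one-channel values below are made up for illustration):

```python
# Numerical check that, in the reduced model, the clarity C = 1 / (1 + delta)
# equals the ratio of the Michelson contrast in the filter region to the
# Michelson contrast in the background.
def michelson(a, b):
    return (a - b) / (a + b)

A_i, B_i = 0.7, 0.3                 # background excitations in one channel
tau_i, delta = 0.6, 0.5
I_i = (A_i + B_i) / 2
P_i = tau_i * (A_i + delta * I_i)   # reduced-model filtered colors
Q_i = tau_i * (B_i + delta * I_i)
C = 1.0 / (1.0 + delta)
contrast_ratio = michelson(P_i, Q_i) / michelson(A_i, B_i)
```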

*stable* under varying viewing conditions and contexts.

*N* = 10 background/filter color pairs (*A*^{j}, *P*^{j}) and then applied the global estimation procedure described as Case 3 in the Estimation procedures section to calculate the model parameters for each simulated stimulus. The fixed scene properties in a sample were the absorption spectrum of the filter (a randomly generated frequency-limited spectrum with limiting frequency *ω* = 1/150 cycles/nm), the illumination spectrum (CIE daylight spectrum D65), and the refractive index *n*, which was set to a fixed value ≥ 1. To avoid a bias in favor of low saturation filters (one may also say: to induce a bias toward high saturation filters), the absorption spectrum was shifted and scaled to set its minimum *a*_{min} and range *a*_{range} to specific values (0.1 and 0.8, respectively, if not otherwise specified). The scene properties that were chosen differently for each stimulus of the sample were the reflection spectra of the *N* background regions (randomly generated frequency-limited spectra with *ω* = 1/75 cycles/nm) and the filter thickness, which was randomly chosen between 0.1 and 3. To increase the luminance range of the corresponding surface colors, the reflectance spectra were multiplied with a uniformly distributed random number between 0 and 1.

For each sample, the *SD* of the estimated hues was calculated. To avoid problems with the circular nature of hue values (*H* = 0 is identical to *H* = 1), we first determined the mean hue from the angle of the centroid of the distribution of *H*/*S* pairs in a polar plot in the *H*/*S* plane and then determined the *SD* of the estimated hues remapped to the interval [−0.5, 0.5] around this mean hue. The results in Table 1 show that the mean *SD* of estimated hues for saturation values *S* > 0.2 was less than 0.02 with the full model. Using the reduced model, the *SD*s were even smaller. Given that *H* values determine the perceived hue of the filter, these results suggest that the effect of changes in filter thickness on perceived hue is, in most cases, negligible.
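The circular averaging step described above can be sketched as follows (our own illustrative implementation, not the authors' code):

```python
import numpy as np

def circular_hue_sd(H, S):
    """Mean hue from the centroid angle of the H/S pairs in the polar
    H/S plane, and the SD of hues remapped to [-0.5, 0.5] around it."""
    H, S = np.asarray(H, float), np.asarray(S, float)
    ang = 2.0 * np.pi * H
    x, y = (S * np.cos(ang)).mean(), (S * np.sin(ang)).mean()
    mean_hue = (np.arctan2(y, x) / (2.0 * np.pi)) % 1.0
    dev = (H - mean_hue + 0.5) % 1.0 - 0.5    # deviations in [-0.5, 0.5]
    return mean_hue, dev.std()

# Hues straddling the 0/1 wrap-around: the naive SD would be misleadingly large
H = np.array([0.98, 0.99, 0.01, 0.02])
S = np.ones(4)
mean_hue, sd = circular_hue_sd(H, S)
```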

Table 1. *SD* of estimated hue values under the six simulation conditions.

| n | a_{min} | a_{range} | Mean SD (S > 0.2) | Max SD (S > 0.2) | Mean SD (all S) | Max SD (all S) |
|---|---|---|---|---|---|---|
| 1.0 | 0.1 | 0.4 | 0.0023 | 0.0165 | 0.0046 | 0.0997 |
| 1.0 | 0.1 | 0.8 | 0.0055 | 0.0472 | 0.0081 | 0.0816 |
| 1.0 | 0.5 | 0.4 | 0.0023 | 0.0181 | 0.0043 | 0.0321 |
| 1.5 | 0.1 | 0.4 | 0.0023 | 0.0167 | 0.0167 | 0.1224 |
| 1.5 | 0.1 | 0.8 | 0.0071 | 0.1070 | 0.0144 | 0.1230 |
| 1.5 | 0.5 | 0.4 | 0.0167 | 0.2206 | 0.0287 | 0.2073 |

The transmittance parameter *V* decreases exponentially with filter thickness. This holds for both the full and reduced models. More specifically, we found that for a given filter the function *V*(*x*) = *a* exp(−*bx*) + *c*, with *a*, *b* > 0, described the dependence of the transmittance parameter on filter thickness *x* almost perfectly. We performed least squares fits of this model to 200 samples of 200 stimuli each, under the same six conditions described above. The fit was very good throughout, with *R*^{2} always larger than 0.999. The mean and *SD* of the parameter estimates are summarized in Table 2. The value of parameter *a* was always between 0.81 and 1, and that of *c* was close to 0. The parameter *b*, which represents the speed of the decay of *V* with increasing thickness, exhibited a systematic dependence on the minimum and the range of the absorption spectrum: The larger the overall absorption (i.e., the lower the overall inner transmittance of the optical filter), the larger the value of *b* and the faster the decay. Together, these results confirm that the transmittance parameter of the model indeed bears a close relationship to the inner transmittance of optical filters: Both decrease in essentially the same way with increasing thickness of the optical filter.

Table 2. Mean estimate ± *SD* of the parameters of *V*(*x*) = *a* exp(−*bx*) + *c*.

| n | a_{min} | a_{range} | a | b | c |
|---|---|---|---|---|---|
| 1.0 | 0.1 | 0.4 | 0.94 ± 0.04 | 0.47 ± 0.16 | 0.06 ± 0.04 |
| 1.0 | 0.1 | 0.8 | 0.91 ± 0.06 | 0.69 ± 0.31 | 0.08 ± 0.05 |
| 1.0 | 0.5 | 0.4 | 1.00 ± 0.00 | 1.25 ± 0.18 | 0.00 ± 0.00 |
| **Restricted model** | | | | | |
| 1.5 | 0.1 | 0.4 | 0.91 ± 0.06 | 0.47 ± 0.18 | −0.01 ± 0.06 |
| 1.5 | 0.1 | 0.8 | 0.87 ± 0.05 | 0.74 ± 0.29 | 0.02 ± 0.04 |
| 1.5 | 0.5 | 0.4 | 0.90 ± 0.02 | 1.28 ± 0.15 | −0.00 ± 0.00 |
| **Full model** | | | | | |
| 1.5 | 0.1 | 0.4 | 0.83 ± 0.04 | 0.49 ± 0.17 | 0.07 ± 0.04 |
| 1.5 | 0.1 | 0.8 | 0.81 ± 0.05 | 0.75 ± 0.36 | 0.07 ± 0.05 |
| 1.5 | 0.5 | 0.4 | 0.89 ± 0.01 | 1.27 ± 0.18 | 0.01 ± 0.01 |
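Assuming *c* ≈ 0, the decay parameters of such a fit can be recovered with an ordinary line fit, because −log(*V*) = *bx* − log(*a*) is then linear in *x* (the values below are illustrative, not the paper's data):

```python
import numpy as np

# V(x) = a * exp(-b * x), with the small offset c assumed to be zero
a_true, b_true = 0.9, 0.7
x = np.linspace(0.1, 3.0, 30)            # filter thicknesses
V = a_true * np.exp(-b_true * x)

# -log(V) = b * x - log(a) is linear in x, so a degree-1 polyfit suffices
b_est, intercept = np.polyfit(x, -np.log(V), 1)
a_est = np.exp(-intercept)               # intercept = -log(a)
```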

*a*(*λ*). This is a direct consequence of the fact that the exponential exp(−*a*(*λ*)*x*) in the Bouguer–Beer law tends to 1 if the thickness *x* tends to zero. With increasing thickness, the filter transmits less light [unless *a*(*λ*) = 0] and is—due to “spectral sharpening”—increasingly wavelength selective [unless *a*(*λ*) = const] and, thus, looks increasingly darker and more saturated. For refractive indices *n* > 1, there is also an additive component due to reflection of the illumination at filter–air interfaces. The dominant contribution to this component is the reflection at the top surface of the filter, and this part does not depend on filter thickness. If the illumination is approximately neutral, then this component has an additional desaturating effect.

The dependence of saturation on filter thickness *x* can often be well described by the equation *S*(*x*) := *u*[1 − exp(−*vx*)], with *v* > 0 and 0 < *u* ≤ 1. The latter condition is motivated by the fact that *S* is restricted to the interval [0, 1]. To test the appropriateness of this description, we again performed least squares fits to 200 samples of 200 stimuli each, under the six conditions described above. The mean *R*^{2} of the fit and the mean and *SD* of the parameter estimates are summarized in Table 3. Under conditions with refractive index *n* = 1, the mean *R*^{2} values were always larger than 0.996, indicating rather good fits. The parameter *v*, which controls how fast the saturation increases with thickness, has a maximum at the largest range of the absorption spectrum; it is clearly lower for the two narrower ranges, irrespective of whether the range is at the low end (highly transmissive filters) or the high end (dark filters). For conditions with refractive index *n* = 1.5, essentially the same pattern is found for the full model: The distributions of the parameter estimates are almost identical, and the goodness of fit is only slightly reduced. This is clearly not the case for the reduced model. On average, the fits of the reduced model are clearly worse (lower *R*^{2} values) and lead to lower overall saturation values (low values of parameter *u*) and faster increases of the saturation with thickness (high values of parameter *v*). With increasing thickness, the relative contribution of direct reflection to the total light emanating from the filter region increases, because the amount of direct reflection is almost independent of thickness. Thus, all colors in the filter region are shifted in the direction of the illumination color. In the reduced model, this effect is “misrepresented” as a change in the filter properties, partly as a decrease in filter saturation and partly as a decrease in filter clarity. Especially for low overall saturation, this may even result in a saturation distribution with a single peak at mean thickness, where both lower and larger thicknesses lead to decreasing saturation.

Table 3. Mean parameter estimates (±*SD*) and mean *R*^{2} for each condition.

| *n* | *a*_{min} | *a*_{range} | *u* | *v* | *R*^{2} |
|---|---|---|---|---|---|
| 1.0 | 0.1 | 0.4 | 0.90 ± 0.17 | 0.37 ± 0.16 | 0.998 |
| 1.0 | 0.1 | 0.8 | 0.89 ± 0.17 | 0.72 ± 0.28 | 0.996 |
| 1.0 | 0.5 | 0.4 | 0.87 ± 0.21 | 0.37 ± 0.15 | 0.998 |
| *Restricted model* | | | | | |
| 1.5 | 0.1 | 0.4 | 0.46 ± 0.18 | 0.67 ± 0.22 | 0.985 |
| 1.5 | 0.1 | 0.8 | 0.48 ± 0.17 | 2.19 ± 10.2 | 0.940 |
| 1.5 | 0.5 | 0.4 | 0.19 ± 0.17 | 2.72 ± 1.11 | 0.576 |
| *Full model* | | | | | |
| 1.5 | 0.1 | 0.4 | 0.86 ± 0.22 | 0.38 ± 0.18 | 0.983 |
| 1.5 | 0.1 | 0.8 | 0.87 ± 0.18 | 0.81 ± 0.31 | 0.980 |
| 1.5 | 0.5 | 0.4 | 0.86 ± 0.25 | 0.58 ± 0.41 | 0.872 |

*n* = 1, which implies zero direct reflection, we always found estimates of the clarity parameter near the maximum value of 1 (see top row in Figure 12). This is exactly what one would expect in this case. For refractive indices *n* > 1, the estimated clarity values should be less than 1 and approximately constant with changes in thickness, because the first-order reflection at the top surface, which constitutes the dominant part of the total direct reflection, does not depend on filter thickness. The typical results for *n* = 1.5 obtained with the full model conform closely to this expectation (see middle row in Figure 12). This is not true for the results obtained with the reduced model, shown in the bottom row of Figure 12. Here, the clarity values decrease with thickness. This again indicates that the parameters of the full model reflect the properties of the physical parameters more accurately than those of the reduced model.

*V* and *S*. −log(*V*) = *bx* − log(*a*) is a linear function of filter thickness *x*, if the small contribution of parameter *c* is ignored. With respect to saturation, we find that −log(*u* − *S*) = *vx* + log(*u*) is a linear function of *x*. A problematic point in this case is that *u* is normally not known. However, the range of *S* is usually relatively small, and thus a potential solution may be either to use *u* = 1 or to take the maximum saturation in a spatial region as an estimate for *u*. These results suggest that the spatial distributions of −log(*V*) and of −log(*u* − *S*) provide an independent basis for an estimate of the changes in filter thickness and thus, indirectly, of filter form. This information may, for instance, be exploited in spatial correlation algorithms (which are invariant to linear transforms of the input) that can be used to detect an optical filter based on form information.
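The linearity of −log(*V*) in thickness can be checked with a short simulation (a sketch of our own, not code from the study; the constants *a* and *b* and the noise level are made-up values):

```python
import numpy as np

# -log(V) = b*x - log(a) is linear in thickness x, so -log(V) recovers the
# thickness profile up to an (unknown) linear transform. A correlation with
# the true profile verifies this; correlation is invariant to linear maps.
rng = np.random.default_rng(0)
a, b = 0.9, 0.8                                  # hypothetical filter constants
x_true = 2.0 + np.sin(np.linspace(0, 3 * np.pi, 512))   # thickness along contour
V = a * np.exp(-b * x_true)                      # transmittance: V = a * exp(-b x)
V_noisy = V * (1 + 0.01 * rng.standard_normal(V.size))  # 1% measurement noise

x_rel = -np.log(V_noisy)                         # relative thickness estimate
r = np.corrcoef(x_rel, x_true)[0, 1]
assert r > 0.99                                  # profile recovered up to a linear map
```

The invariance to *a* and *b* is exactly what makes correlation-based form detection feasible without knowing the filter constants.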

*ω* = 1/75 cycles/nm) and the illumination was fixed (CIE D65). These “spectral images” were generated from RGB bitmaps using the method described in 1. The physical filter model was then used to compute the colors inside the filter region using the reflection spectra of the image pixels, the constant absorption spectrum of the simulated filter, a fixed refractive index, and thickness values that varied sinusoidally along the filter contour.

*f* = *f*_{M}] were then used to compute local estimates of transmittance *V*(*c*) and saturation *S*(*c*) at point *c* along the contour of the simulated optical filter. The inputs to this algorithm were the mean and standard deviation of the background and filtered colors inside a window centered around *c* (see Figure 13). To avoid problems at the image borders, the images were first padded with mirror reflections of the image. Finally, to compensate for noise in the local parameter estimates, they were smoothed by applying a box filter, i.e., by computing the moving average of the *V* and *S* estimates along the contour. The filter length was 81 pixels and the unpadded length of the filter contour was 512 pixels.
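The mirror padding and box-filter smoothing can be sketched as follows (our own minimal illustration; the helper name and the synthetic profile are assumptions, only the window length of 81 and the contour length of 512 come from the text):

```python
import numpy as np

def box_smooth(values, width=81):
    """Moving average ("box filter") with mirror-reflected padding at both ends."""
    half = width // 2
    padded = np.concatenate([values[half:0:-1],          # mirrored left edge
                             values,
                             values[-2:-half - 2:-1]])   # mirrored right edge
    return np.convolve(padded, np.ones(width) / width, mode="valid")

profile = np.sin(np.linspace(0, 3 * np.pi, 512))  # stand-in for noisy V(c) or S(c)
smoothed = box_smooth(profile)
assert smoothed.shape == profile.shape            # padding preserves the length
```

The padding ensures the output has the same length as the input, so the smoothed estimates stay aligned with the contour positions *c*.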

*n* = 1.3 and a sinusoidal thickness function with 3/2 cycles along the contour. In these examples, the correlation of −log(*V*) and −log(1 − *S*) with the true thickness is rather high throughout. We performed more extensive tests with different refractive indices (*n* = 1 and *n* = 1.5), different filter absorption spectra (frequency limited with *ω* = 1/150 cycles/nm), and different thickness functions (varying in mean = 1, 2, 4; frequency = 1/2, 1, 2 cycles; and amplitude = 0.25, 0.5, 0.75 of the mean). The main results were:

- The estimation based on saturation works almost perfectly (*r* > 0.9) for *n* = 1 but is much less reliable for *n* = 1.5, especially if the reduced model is assumed. In the latter case, we often found strong negative correlations between estimated and true thicknesses.
- The estimation based on transmittance proved more robust. In most cases, the estimate was rather good. The only exceptions were filters with both low spatial frequency and low mean absorption.
- The most reliable estimates were found with the vertical stripes image shown at the top right position in Figure 14.

*N* = 10 background colors where all properties were fixed, except for the background colors and the parameter of interest (here the refractive index), which were randomly chosen. The global estimation procedure described as Case 3 in the Estimation procedures section was then used to estimate the parameters of the filter model for each simulated stimulus. All conditions were identical to those described in the Model parameter and filter thickness section, except for the choice of saturation instead of thickness as the randomly varied variable.

*n* = 1 and *n* = 1.7 in 50 samples of 200 stimuli each. In each subplot, a different thickness of the filter was used. The difference between estimates at *n* = 1.0 and *n* = 1.7 under otherwise identical conditions is a good indicator of the strength of the effect, because (as is illustrated in Figure 16) all curves either are approximately constant or show a monotonic decline with increasing refractive index.

*τ* to the squared transmittance of optical filters motivated the restriction 0 ≤ *τ*_{i} ≤ 1 (Faul & Ekroll, 2002, p. 1076). Here, we consider a second kind of restriction on *τ* that results from the fact that the model describes relationships between *color codes* in the input. To gain insight into the nature of this restriction, it is helpful to consider the reduced model *P*_{i} = *τ*_{i}(*A*_{i} + *δI*_{i}), which can be written as *P*_{i} = *τ*_{i}*X*_{i}, where *X* := *A* + *δI* is a weighted sum of the color codes of background *A* and illumination *I* and thus itself a valid color inside the color cone.

*P* is a stimulus color and thus by necessity also a valid color. This implies that *τ* is restricted to values that map *X* to colors inside the color cone. The parameter values *τ* = (*x*, *x*, *x*), *x* ≥ 0 (corresponding to neutral filters), are always possible, because they map *X* onto a scaled version of itself in color space, whereas *τ* = (0, 1, 1) is an example of an impossible parameter value: It would result in a zero *L* and a nonzero *M* cone excitation in *P*, which is not realizable due to correlations between the *L* and *M* channels. In this way, the parameter vector *τ* “inherits” restrictions on color codes.

*τ* in terms of “hue” *H*, “saturation” *S*, and “transmittance” *V* proposed above. In this parameter space, the “physical restrictions” on *τ* translate into the restrictions 0 ≤ *H*, *S*, *V* ≤ 1, and the “color space restrictions” translate into limits on the maximum value of *S*. The maximum *S* varies with *H* and can best be visualized in a polar plot of *H* and *S*, where *H* is mapped to the angle and *S* to the radius. In such plots, the set of maximal *S* encloses a subregion of the unit disk (see Figure 18). Only points inside this region correspond to valid (*H*, *S*) pairs, which, combined with admissible transmittance values *V* ∈ [0, 1], correspond to valid *τ*. Points on the border of the region correspond to values of *τ* that map *X* to filtered colors *P* on the boundary surface of the color cone, and the point *S* = 0, which is always contained in the region, corresponds to neutral filters.

*X*, which in turn depends on the color codes of background and illumination. An important special case is that of achromatic *X*. In this case, the boundary of the region (shown in gray in Figure 18) is numerically identical to a transformation of the boundary of the color cone to the HS space. To compute the transformed values, the cone sensitivities *l*(*λ*), *m*(*λ*), and *s*(*λ*) are first scaled in such a way that they have equal area and values ≤ 1, and then the normal RGB to HSV transform is applied to the spectral colors. For chromatic *X*, the degree to which the “valid region” deviates from that of the achromatic case (the “achromatic region”) increases with the saturation of *X* (compare the red and blue regions in Figure 18).
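The last step of this construction, the standard RGB-to-HSV transform, behaves as sketched below (our own illustration using Python's `colorsys` module; the cone-sensitivity scaling described above is omitted, and the input triples are made-up examples):

```python
import colorsys

# Standard RGB-to-HSV transform, as used in the boundary computation
# described above; a triple is treated directly as an RGB-like input.
def to_hsv(triple):
    return colorsys.rgb_to_hsv(*triple)

neutral = to_hsv((0.5, 0.5, 0.5))    # equal values in all three channels
chromatic = to_hsv((0.2, 0.8, 0.4))  # an arbitrary chromatic triple

assert neutral[1] == 0.0             # neutral triples have zero saturation
assert chromatic[1] > 0.0            # chromatic triples have S > 0
```

Note that this transform by itself knows nothing about the color cone; the valid-region computation in the text additionally requires the scaled cone sensitivities.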

*τ* must be simultaneously compatible with all corresponding *X*. The solution space for *H* and *S* is, therefore, the intersection of the valid regions for all *X*. It can be seen in the rightmost panel of Figure 18 that even the combination of only two (saturated) colors may restrict the solution space considerably. The intersection region usually deviates much less from the “achromatic region” than the source regions (in the sense that the nonoverlapping area is smaller) and may even lie completely inside it.

*X*. Thus, it is interesting to consider factors that may limit the saturation of *X* = *A* + *δI*. A first factor is related to the concept of optimal color stimuli (Wyszecki & Stiles, 1982) for a given illumination and leads to limits on the saturation of background colors *A*. The key observations are (a) that surfaces with a unit spectrum are the brightest possible and always reflect light with the chromaticity of the illumination, and (b) that monochromatic reflection spectra are the most saturated ones for a given hue but also the darkest possible. If one determines for each luminance level between these two extremes the range of possible chromaticities (the MacAdam limits; see p. 179 ff. in Wyszecki & Stiles, 1982), one gets the Rösch color solid, which encloses (up to a factor related to the intensity of the illuminant) all possible surface colors under the given illumination. Figure 19 shows the MacAdam limits for CIE standard illuminants A and D65 in the CIE *xy* chromaticity plane. The restrictions on possible chromaticities are even more severe if we assume frequency-limited reflection spectra that putatively resemble “natural” spectra. Figure 19 shows examples of the color solids for different frequency limits. It illustrates that the restrictions get tighter with decreasing frequency limit. A second factor that limits the saturation of *X* is due to the term *δI*: With increasing values of *δ*, that is, with increasing amounts of direct reflection, the relative contribution of the illumination color to *X* increases, and the chromaticity of *X* is shifted nearer to that of *I*. As this term is identical for all background colors, the gamut of possible chromaticities shrinks around the chromaticity of *I*.

*τ*. It is, however, not easy to envision how these different factors combine in typical scenarios. Therefore, we simulated optical filters using realistic choices of the physical parameters to determine the distribution of the parameters of the filter model, in particular the distribution of filter saturation.

*N* background spectra. The procedure described as Case 3 in the Estimation procedures section (with *f* = *f*_{M}) was then used to compute, from the color codes resulting from the simulation, the parameters *τ* and clarity of the filter model. The parameter *τ* was then transformed to *H*, *S*, and *V* values. In each case, the *N* reflection spectra of the background were chosen randomly from the set of frequency-limited spectra with *ω* = 1/75 cycles/nm, and the absorption spectrum of the filter was a randomly chosen frequency-limited spectrum with *ω* = 1/150 cycles/nm. The remaining parameters, that is, filter thickness, refractive index, *N*, the illumination, and the type of model (full vs. reduced), were varied systematically in separate simulations.

*N* = 10.

*H* and *S* lie inside a relatively small, roughly elliptical region around the zero saturation point *S* = 0. The frequency plot in the leftmost panel shows that there is, in this case, a strong bias toward achromatic filters. The estimated value of the clarity parameter is always clearly less than 1, as is to be expected with a refractive index greater than 1, and lies inside a small interval. The clarity estimate is also independent of the estimated transmittance value of the filter.

*reduced* instead of the full model was estimated. In this case, the general form of the hue and saturation distribution is retained, but the saturation values are clearly lower than those found with the full model. The estimated clarity values are also much lower and are no longer independent of the estimated transmittance. In row (e), the difference is that CIE standard illuminant A is used instead of standard illuminant D65. Although this manipulation has a noticeable effect on the distribution of hue and saturation, the general form of the distribution is very similar to that shown in row (b). The distribution of the clarity and transmittance parameters is virtually unaffected.

*N* = 2 background colors. The conditions in rows (a) and (b) are otherwise identical to those realized in rows (b) and (c) in Figure 20, respectively. Although the general form of the distributions found with *N* = 2 is similar to those obtained with *N* = 10, there are also marked differences. The most obvious effect is that the precision of the clarity estimate is strongly reduced with *N* = 2 background colors. Another striking effect is that the border of the hue and saturation distribution is fuzzier: There are many estimates with large saturation values well outside the elliptical region. Row (c) shows the results of a simulation in which we tried to maximize the saturation of the estimated filter. The filter thickness was set to 3, the refractive index was set to 1, and the absorption spectrum was scaled and shifted to obtain values in the interval [0.001, 0.999]. This introduces a bias toward relatively dark and strongly saturated filters. However, even in this case, the saturation of most filters was within the “achromatic region” outlined by the red curve.

*N* = 2 background colors that are shown in Figure 21.

In the *H*/*S* polar plot, the hue and saturation values lie inside an elongated elliptical region, where the angle between the major axis and the *x*-axis is roughly 45 deg. The general form and orientation of this elliptical region are rather robust against changes in the refractive index (Figure 20c), the model (Figure 20d), and the illumination (Figure 20e). As will be shown in the Iso-saturation curves section, this characteristic elliptical form is also found in iso-saturation curves.

A comparison of the results with *N* = 10 background colors shown in Figures 20b and 20c with the corresponding ones with *N* = 2 shown in Figures 21a and 21b clearly demonstrates the advantages of including more background and filtered colors when estimating the filter parameters. The increase in precision is especially obvious with respect to the clarity parameter.

*S*_{m} ≤ 1 that depends on hue. This maximum defines the boundary of a valid region in the *H*/*S* diagram, and we have discussed various factors that influence the form of this boundary.

*perceived saturation* depends on hue. More specifically, we determined iso-saturation curves in the *H*/*S* diagram. By definition, all filters lying on these iso-saturation curves appear equally saturated irrespective of filter hue.

*x* = 0.302, *y* = 0.308) equidistant in luminance between 17.4 and 36.8 cd/m^{2}. To minimize any influence of background configuration, the assignment of luminance to background area was random in each trial. The standard stimuli in rows 1 and 3 constituted a series of 12 filters with fixed hue value (*H* = 0.345, greenish) in which the saturation increased from *m*/12 to *m* in equidistant steps, where *m* is the maximally realizable saturation at the standard hue. The hue of the standard stimuli was the one at which the maximum nominal saturation value realizable on our monitor was minimal. The stimuli in rows 2 and 4 contained a series of comparison filters with a different hue, selected from a set of 67 hue values that ranged from *H* = 0 to *H* = 0.99 in equidistant steps of 0.015. The subjects' task was to adjust, in the comparison filters, the endpoint *u* of an equidistant saturation scale between *u*/12 and *u* in such a way that the perceived saturation of the comparison filters was as similar as possible to that of the standard filters over the whole saturation scale. Thus, in an ideal case, the perceived saturation of each comparison stimulus along the scale should be identical to that of the standard stimulus depicted immediately above it.

*V* and “clarity” *C* were varied in three combinations: (*V* = 1.0, *C* = 1.0), (*V* = 0.5, *C* = 1.0), and (*V* = 1.0, *C* = 0.5). These three conditions were applied in separate sessions, and in each session, the settings for all 67 hues were repeated 3 times. This resulted in a total of 201 trials for each subject, which were completed in random order.

*V* = 1.0, *C* = 1.0).

*a* is mainly determined by the saturation value at the standard hue. The variation of transmittance and clarity had only a relatively small effect on the size and orientation of the fitted ellipses. The most noticeable change is a slight shift of the ellipse center.

| *V* | *C* | *x* | *y* | *a* | *b* | *ɛ* | *α* (deg) |
|---|---|---|---|---|---|---|---|
| 1.0 | 1.0 | 0.045 | 0.082 | 0.386 | 0.159 | 0.912 | 58.4 |
| 0.5 | 1.0 | 0.029 | 0.048 | 0.368 | 0.157 | 0.904 | 56.7 |
| 1.0 | 0.5 | 0.013 | 0.034 | 0.373 | 0.151 | 0.914 | 55.2 |
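The tabulated parameters can be evaluated numerically (our own sketch; reading *ɛ* as the eccentricity computed from the semi-axes *a* and *b* is an assumption, checked against the first row):

```python
import numpy as np

# Ellipse parameters from the V = 1.0, C = 1.0 row: center (x0, y0),
# semi-axes a and b, orientation alpha in degrees.
x0, y0, a, b = 0.045, 0.082, 0.386, 0.159
alpha = np.deg2rad(58.4)

# Points on the fitted iso-saturation ellipse in the H/S plane
# (standard parametric form of a rotated, translated ellipse):
t = np.linspace(0, 2 * np.pi, 200)
px = x0 + a * np.cos(t) * np.cos(alpha) - b * np.sin(t) * np.sin(alpha)
py = y0 + a * np.cos(t) * np.sin(alpha) + b * np.sin(t) * np.cos(alpha)

# Eccentricity derived from the semi-axes:
ecc = np.sqrt(1 - (b / a) ** 2)
```

The derived eccentricity (≈0.911) agrees with the tabulated *ɛ* = 0.912 up to rounding of *a* and *b*, which supports this reading of the table.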

*H*/*S* space is approximately elliptical. This ellipse closely resembles the form and orientation of the distribution of hue and saturation values found in the simulations reported in the Distribution of filter parameters section.

*V* and clarity *C*. We may also conclude that this also holds with respect to changes in absolute saturation, because otherwise it would not have been possible for the subjects to match saturation at different hues simultaneously along the whole saturation scale.

*S*, then the center of the ellipses would mark the point of zero perceived saturation and should coincide with the zero saturation point *S* = 0. However, we instead found that the centers of the ellipses deviate slightly from the zero saturation point *S* = 0, especially in the condition *V* = 1, *C* = 1. These deviations are not artifacts of averaging, because they were also observed in all individual data sets. A possible explanation would be that the invariance of the shape of iso-saturation curves under changes of *S* holds only approximately. This could, in principle, be tested by conducting experiments similar to the one described here but using low and high saturation subscales in the matching. If invariance holds, then the location and form of the fitted ellipses should not be affected by this change in the matching task.

*X*-junction is not necessary, or that isolated colors, like those that are completely covered by the filter, can be integrated into the estimation without any difficulty. However, these advantages can only be brought to bear if the integration is done over sets of colors that belong to regions with (from the perspective of the model) identical meaning and roughly identical parameter values. It is, at present, an open question how image-based criteria can be used to properly determine such integration regions. It may be interesting to note that Singh and Anderson (2002) used an alternative way to estimate the model parameters from stimuli with many different gray levels (sinusoidal gratings). To reduce the number of gray levels to the four values used in the episcotister model, they selected the minimal and maximal gray levels in the background and the transparent region. However, this approach has two potential drawbacks. The first is that it is unclear how the selection of maxima and minima can be generalized to three-dimensional color codes, and the second is that this method does *not* improve the robustness of the parameter estimation.

*less* accurately. Thus, computational criteria related to the distal side speak in favor of the “full” model, whereas phenomenal criteria conform better with the “reduced” model. These findings exclude the possibility that the visual system always uses the “full” model in transparency perception. From the perspective of the current approach, which emphasizes the performance aspect of vision, this is somewhat surprising, because the “full” model seems to provide a better basis for transparency detection and is of similar complexity to the “reduced” model. We discussed several possible interpretations of these findings and experimental strategies to test them. The currently available evidence does not allow us to decide between the “full” and “restricted” models, and we therefore compared both models in further investigations.

*τ* of the original model in terms of “hue,” “saturation,” and (overall) “transmittance,” and of the direct reflection parameter *δ* in terms of “clarity.” These alternative parameters are understood as properties of an internally represented transparent layer. Thus, the alternative parameters hue, saturation, and (overall) transmittance, just like the transmittance *τ* from which they are computed, are not dimensions of a color code and must not be confused with the corresponding dimensions in color space.

*τ*to hue, saturation, and (overall) transmittance is by necessity somewhat ad hoc and can only be justified by showing that it yields a good description of empirical observations. A first hint that this is actually the case is provided by the demonstrations in Figures 7–10, which suggest that the alternative parameters actually describe different and intuitively plausible dimensions of the perceived transparent layer. A supporting result with respect to the clarity parameter is that it provides a simple explanation of Singh and Anderson's (2002, 2006) finding that the ratio of the Michelson contrast in the filter and background regions determined an aspect of perceived transmittance in a matching experiment: As we have shown in the Filter transparency in simple stimuli section, the clarity parameter stands in a close relationship to this measure, and the results of Singh and Anderson (2002) thus suggest the interpretation that the subjects actually matched filter clarity.

*τ*, the saturation parameter turned out to be more complex than the two other dimensions. The main reason for this is that the maximally attainable saturation value is constrained in complex ways, whereas any value between 0 and 1 may be chosen for the hue and transmittance parameters. In the Restrictions on layer saturation section, we investigated restrictions on possible saturation values resulting from properties of color space. Basically, the question we addressed was: Given a set of background colors and fixed values for the hue, transmittance, and clarity parameters, what is the maximum value possible for the saturation parameter, such that all transformed background colors are still valid, that is, have coordinates inside the color cone? With respect to this question, the case of an achromatic background color is especially interesting, because possible hue/saturation pairs are then isomorphic to the chromaticities of the filtered color. The region of valid hue/saturation pairs for this case is, therefore, numerically identical to the image of a chromaticity diagram in the hue/saturation space. For other cases, however, this isomorphism does not hold. In the context of the filter model, this is not problematic, because it is obvious that the transmittance *τ* and the derived alternative dimensions hue, saturation, and (overall) transmittance should not be interpreted as a color code. In the episcotister model, in contrast, the model parameter that determines the color of the perceived transparent layer is understood as a color code (in the prototypical case of a rotating sector disk, it corresponds to the color of the disk surface). This leads to paradoxes, because it was found that stimuli may appear transparent even if the corresponding parameter of the episcotister model is an impossible “color code” outside the color cone. Richards et al. (2009) dubbed such invalid parameters “imaginary colors.”

*construct* stimuli conforming to the model and having specific parameter values, because a saturation value suitable for *N* background colors and a given hue may no longer be possible if an additional background color is included or if the hue value is changed. At first sight, this may appear surprising, because one may ask why similar problems do not occur with “real optical filters,” which in a way also “construct stimuli.” The reason why a real filter always produces valid colors, whereas computing filtered colors from the model may not, is that in the former case we manipulate the light impinging on the cones, but in the latter, we manipulate dimensions of color codes (i.e., the activation of the cones) in an independent way that ignores correlations between color channels that are due to an overlap in their sensitivity curves. It is important to note, however, that these problems are irrelevant from the perspective of the visual system, because the colors in the proximal stimulus are, by definition, always “valid,” and the goal is not to construct stimuli but to interpret them. Seen from this point of view, our results merely illustrate that the range of saturation values that may potentially be estimated from a given stimulus can be considerably narrowed if the background colors are highly saturated.

*ρ*(*λ*) = [1 + Σ_{i} *μ*_{i}*β*_{i}(*λ*)] / 2, with |*μ*_{i}| ≤ 0.5. The essential parameter is the limiting frequency *ω* that controls the smoothness of the resulting spectrum: The smaller *ω* is, the smoother the spectrum. The limiting frequency of natural reflectance spectra has been found to lie inside the range of 1/100 to 1/50 cycles/nm (Maloney, 1986). The parameters *λ*_{0} and *l* are of minor interest and were, in our simulations, set to *λ*_{0} = 350 nm and *l* = 50 throughout.
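A random spectrum of this kind can be sketched as follows (our own illustration: the sinusoidal form and number of the basis functions *β*_{i} and the final normalization step are assumptions; only the range of *μ*_{i}, the value *λ*_{0} = 350 nm, and the limiting frequency come from the text):

```python
import numpy as np

# Frequency-limited reflectance spectrum: a sum of low-frequency basis
# functions with random weights mu_i in [-0.5, 0.5], mapped into [0, 1]
# via rho = (1 + mix) / 2.
rng = np.random.default_rng(1)
lam = np.arange(400, 701)            # wavelengths in nm
omega = 1 / 75                       # limiting frequency in cycles/nm
n_basis = 8                          # hypothetical number of basis functions
freqs = np.linspace(0, omega, n_basis)
mu = rng.uniform(-0.5, 0.5, n_basis)

basis = np.cos(2 * np.pi * freqs[:, None] * (lam - 350))  # shape (n_basis, n_lam)
mix = basis.T @ mu
mix /= max(1.0, np.abs(mix).max())   # keep the weighted sum within [-1, 1]
rho = (1 + mix) / 2                  # reflectance spectrum, values in [0, 1]
```

Lowering `omega` removes high-frequency basis components and thus yields smoother spectra, matching the role of the limiting frequency described above.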

*μ*_{i} to random values from the interval [−0.5, 0.5]. It is also possible to compute reflectance spectra where the reflected light under a given illumination *I*(*λ*) has a specified color code *C*: Let *R*_{j}(*λ*) denote the sensitivity spectrum of cone class *j*, let *b*_{ij} be the excitation of cone class *j* by basis function *i* under the given illumination, and let *ϕ*_{j} = Σ_{i} *μ*_{i}*b*_{ij} be the corresponding weighted sum of these contributions. Linear programming is then used to determine the *μ*_{i} under the restrictions −0.5 ≤ *μ*_{i} ≤ 0.5 such that (*ϕ*_{L}, *ϕ*_{M}, *ϕ*_{S}) = *X*, with *X* := 2*C* − *I*. Here, *I* denotes the color code of the illuminant. The frequency-limited spectrum constructed with these *μ*_{i} has the required property.
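The linear program can be set up directly, for example with SciPy (a sketch under assumptions: the matrix of cone excitations *b*_{ij} and the target *X* below are random stand-ins, not spectra or illuminants from the study):

```python
import numpy as np
from scipy.optimize import linprog

# Feasibility problem: find weights mu with -0.5 <= mu_i <= 0.5 such that
# the weighted cone excitations match a target X, i.e. B.T @ mu = X, where
# B[i, j] is the excitation of cone class j by basis function i.
rng = np.random.default_rng(2)
n_basis = 8
B = rng.uniform(0, 1, (n_basis, 3))          # hypothetical b_ij matrix
X = B.T @ rng.uniform(-0.3, 0.3, n_basis)    # target chosen to be feasible

res = linprog(c=np.zeros(n_basis),           # zero objective: pure feasibility
              A_eq=B.T, b_eq=X,
              bounds=[(-0.5, 0.5)] * n_basis)
assert res.success                           # a valid mu vector was found
mu = res.x
```

With three equality constraints and many basis weights, the system is underdetermined, so the solver typically has room to satisfy the box constraints whenever the target lies in the attainable gamut.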

*Journal of Vision*, 10(5):26, 1–16, http://www.journalofvision.org/content/10/5/26, doi:10.1167/10.5.26.

*Vision Research*, 46, 1982–1995.

*Perception & Psychophysics*, 23, 265–267.

*Perception & Psychophysics*, 35, 407–422.

*Perception & Psychophysics*, 30, 407–410.

*Computer Vision, Graphics, and Image Processing*, 28, 356–362.

*Perception*, 27, 595–608.

*Trasparenze*. Padua, Italy: Icone.

*Proceedings of SIGGRAPH*, 1998, 189–198.

*Poggendorfs Annalen*, 83, 169–183.

*Perception*, 26, 471–492.

*Perception*, 29, 911–926.

*Perception*, 25, 105.

*Journal of the Optical Society of America A*, 19, 1084–1095.

*Journal of Vision*, 3(5):3, 347–368, http://www.journalofvision.org/content/3/5/3, doi:10.1167/3.5.3.

*Journal of Vision*, 6(8):1, 760–776, http://www.journalofvision.org/content/6/8/1, doi:10.1167/6.8.1.

*Journal of Experimental Psychology: Human Perception and Performance*, 16, 3–20.

*Journal of the Optical Society of America A*, 18, 1–11.

*Journal of Vision*, 2(6):3, 451–466, http://www.journalofvision.org/content/2/6/3, doi:10.1167/2.6.3.

*Journal of the Optical Society of America A*, 25, 190–202.

*Perception*, 39, 872–883.

*Proceedings of the National Academy of Sciences of the United States of America*, 45, 115–129.

*Journal of the Optical Society of America A*, 3, 1673–1683.

*Color vision: From genes to perception* (pp. 387–413). Cambridge, UK: Cambridge University Press.

*Journal of the Optical Society of America A*, 3, 29–33.

*Journal of Vision*, 3(8):5, 573–585, http://www.journalofvision.org/content/3/8/5, doi:10.1167/3.8.5.

*Perception beyond inference: The information content of visual processes* (pp. 159–200). Cambridge, MA: MIT Press.

*Ergonomics*, 13, 59–66.

*Journal of the Optical Society of America A*, 16, 2612–2624.

*Journal of the Optical Society of America A*, 17, 225–231.

*Journal of the Optical Society of America A*, 26, 1119–1128.

*Journal of Vision*, 2(5):3, 388–403, http://www.journalofvision.org/content/2/5/3, doi:10.1167/2.5.3.

*Journal of Vision*, 4(3):5, 183–195, http://www.journalofvision.org/content/4/3/5, doi:10.1167/4.3.5.

*Psychological Review*, 109, 492–519.

*Vision Research*, 46, 879–894.

*Philosophical Transactions of the Royal Society B*, 360, 1329–1346.

*Journal of the Optical Society of America*, 67, 779–784.

*Journal of Vision*, 10(9):7, 1–17, http://www.journalofvision.org/content/10/9/7, doi:10.1167/10.9.7.

*Journal of the Optical Society of America A*, 17, 255–264.

*Current Biology*, 19, 430–435.

*Color science: Concepts and methods, quantitative data and formulae* (2nd ed.). New York: John Wiley and Sons.