**Abstract**
The watercolor effect is a long-range, assimilative, filling-in phenomenon induced by a pair of distant, wavy contours of different chromaticities. Here, we measured the joint influences of the contour frequency, the contour amplitude, and the luminance of the interior contour on the strength of the effect. Contour pairs, each enclosing a circular region, were presented with two of the dimensions varying independently across trials (luminance/frequency, luminance/amplitude, frequency/amplitude) in a conjoint measurement paradigm (Luce & Tukey, 1964). In each trial, observers judged which of the stimuli evoked the strongest fill-in color. Control stimuli were identical except that the contours were intertwined and generated little filling-in. Perceptual scales were estimated by a maximum likelihood method (Ho, Landy, & Maloney, 2008). An additive model accounted for the joint contributions of any pair of dimensions. As shown previously using difference scaling (Devinck & Knoblauch, 2012), the strength of the effect increases with the luminance of the interior contour. The strength of the phenomenon was nearly independent of the amplitude of modulation of the contour but increased with its frequency up to an asymptotic level. On average, the strength of the effect was similar along a given dimension regardless of the other dimension with which it was paired, demonstrating the consistency of the underlying estimated perceptual scales.

*SD*: 33 ± 8 years). Three participated in all three conditions, two in only one condition, and one in two of the conditions. All observers but one (author PG) were naive, and all had normal color vision as assessed by a Farnsworth Panel D15. Observers who required optical corrections wore their glasses while performing the experiments.

^{2}, CIE xy = 0.29, 0.30). The contour pairs that defined the stimuli were each of total width 16 min, that is, 8 min each for the interior and exterior contours. The outer contour was purple (CIE xy = 0.32, 0.19) and the inner orange (CIE xy = 0.48, 0.34). The stimuli were also specified in the DKL color space (Derrington, Krauskopf, & Lennie, 1984), with the purple and orange contours at azimuths of 320° and 45°, respectively. Control stimuli were identical except that the contours were interlaced and generated little filling-in (Figure 1c).

The radius of each contour was modulated sinusoidally,

*R*(*θ*) = *r* + *A* sin(*fθ*),

where *R* is the stimulus radius at angle *θ*, *r* the average radius of the stimulus, *A* the modulation amplitude, and *f* the frequency in cycles per revolution (cpr). The stimuli are then plotted by transforming the polar coordinates (*R*, *θ*) to rectangular coordinates (*x*, *y*) with the equations

*x* = *R* cos(*θ*), *y* = *R* sin(*θ*).
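The construction can be sketched in a few lines of code. The sketch below assumes the sinusoidal radial modulation *R*(*θ*) = *r* + *A* sin(*fθ*) described above; the function name and sampling density are illustrative, not taken from the original stimulus-generation software.

```python
import math

def contour_points(r, A, f, n=360):
    """Sample one wavy contour: a sinusoidally modulated radius in polar
    coordinates, converted to rectangular coordinates.

    r: average radius; A: modulation amplitude; f: frequency in cycles
    per revolution (cpr); n: number of samples around the circle.
    """
    pts = []
    for k in range(n):
        theta = 2 * math.pi * k / n
        R = r + A * math.sin(f * theta)   # modulated radius at angle theta
        x = R * math.cos(theta)           # polar -> rectangular
        y = R * math.sin(theta)
        pts.append((x, y))
    return pts

# With A = 0 the modulation vanishes and the contour is a circle of radius r.
circle = contour_points(r=2.0, A=0.0, f=8)
```

Setting *A* = 0 provides a quick sanity check, since the contour then reduces to a circle of the average radius.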

We first present the *additive model* and then describe the simpler and more complex models with respect to it.

Consider stimuli, *ϕ*_{ij}, *i*, *j* = 1, …, 5, where the indices refer to two of the dimensions in the set (luminance, frequency, amplitude), for example, a row and column, respectively, of Figure 1a. In the additive model, we suppose that each of the two dimensions contributes to the filling-in, so that the response, *ψ*_{ij}, to a stimulus, *ϕ*_{ij}, is the sum of the component responses,

*ψ*_{ij} = *ψ*^{1}_{i} + *ψ*^{2}_{j}.

In choosing between two stimuli, the observer thus, in effect, compares response *intervals* across dimensions.

The difference of responses to the two stimuli, *ψ*_{ij} – *ψ*_{kl}, is contaminated by internal noise so that the observer chooses the first stimulus exactly when

*ψ*_{ij} – *ψ*_{kl} + *ϵ*_{ijkl} > 0,

where each *ϵ*_{ijkl} is a draw from a distribution of independent and identically distributed normal variables with *μ* = 0 and variance = 4*σ*^{2}. This is an equal-variance, Gaussian signal-detection model. The coefficient of four on the variance parameterizes the estimated scale values so that the variance of the response along each dimension is equal to *σ*^{2}. As a result, the estimated response values are distributed as normal variables with *σ*^{2} = 1 and are, thus, on the same scale as the sensitivity measure *d*′ from signal detection theory (Green & Swets, 1966^{1}).
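The decision rule can be simulated directly. The sketch below uses hypothetical scale values, not the experimental estimates: with the noise term drawn from a normal distribution with variance 4*σ*^{2}, the predicted probability of choosing the first stimulus is Φ((*ψ*_{ij} – *ψ*_{kl})/2*σ*), and a simulated observer's choice rate should approximate it.

```python
import math
import random

def p_choose_first(psi_first, psi_second, sigma=1.0):
    """Predicted probability of choosing the first stimulus under the
    equal-variance Gaussian model: Phi(delta / (2 * sigma)), because the
    noise term epsilon has standard deviation 2 * sigma."""
    delta = psi_first - psi_second
    return 0.5 * (1 + math.erf(delta / (2 * sigma * math.sqrt(2))))

def simulate_trials(psi_first, psi_second, n=20000, sigma=1.0, seed=1):
    """Simulate n paired comparisons with additive Gaussian noise and
    return the observed rate of choosing the first stimulus."""
    rng = random.Random(seed)
    delta = psi_first - psi_second
    chosen = sum(delta + rng.gauss(0, 2 * sigma) > 0 for _ in range(n))
    return chosen / n

# Hypothetical scale values differing by one unit: the choice rate should
# approximate Phi(1/2), i.e., roughly 0.69.
predicted = p_choose_first(2.0, 1.0)
observed = simulate_trials(2.0, 1.0)
```

When the two responses are equal, the predicted probability is 0.5, as required of an unbiased observer.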

With *p* intensity levels sampled along each dimension and the estimate of the variance of *ϵ*, there are 2*p* + 1 parameters in the model. The parameterization of the scale described above implies that we may multiply the estimated values by any constant (with a corresponding rescaling of *σ*) without changing the model predictions of the observer's responses; similarly, we may add any constant to the scales. To fix the estimated scales, we set the lowest value of each scale to zero, eliminating two of the parameters. By additionally fixing *σ*^{2} = 1, we eliminate one more, so that the fitting process requires estimating 2*p* – 2 parameters. The parameterization that we propose is slightly different from that used by Ho et al. (2008), who estimated *σ* but normalized the estimated functions so that the maximum scale value was unity. Both parameterizations yield identical predictions of performance.
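Because only differences of scale values enter the decision variable, translating a scale leaves the predicted choice probabilities unchanged. The anchoring step can be sketched as follows, using hypothetical raw estimates rather than fitted values:

```python
def anchor(scale):
    """Translate an estimated perceptual scale so its lowest value is zero.
    Only differences of scale values enter the decision variable, so a
    translation leaves the model's predicted choice probabilities unchanged."""
    m = min(scale)
    return [v - m for v in scale]

# Hypothetical raw estimates (arbitrary units) for one dimension:
raw = [0.3, 0.9, 1.4, 2.2, 2.5]
anchored = anchor(raw)   # lowest value is now exactly zero
```

The differences between successive scale values, which are what the model's predictions depend on, are identical before and after anchoring.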

Figure 2c illustrates two component response functions, *ψ*^{1} and *ψ*^{2}, each of which, for simplicity, is a linear function of the stimulus level. Stimulus level here and elsewhere is indicated by an index, not physical units. This convention, as used elsewhere (Ho et al., 2008; Knoblauch & Maloney, 2012b), allows the scales for both dimensions to be plotted together. The response to any stimulus with levels *i* and *j*, respectively, along the two stimulus dimensions is represented by the sum of their component responses. As an example, consider a stimulus of level three along the first dimension and level four along the second. The component responses to the level along each dimension are indicated by the two black points. Figure 2d shows the set of summed responses for all pairings of the two dimensions, with the base dimension corresponding to *ψ*^{2} and the parameter indicated along each curve corresponding to *ψ*^{1}. Each curve has the shape of the component curve for *ψ*^{2} but is displaced vertically by the value of *ψ*^{1}, resulting in a set of parallel contours. Analogous to a linear model, each dimension shows a main effect, but there is no interaction. The summed response to the stimulus with the levels indicated in Figure 2c is shown by the black point.
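The parallel-curve signature of the additive model can be checked numerically. The sketch below uses hypothetical linear component scales (indexed levels, not physical units) and verifies that any two summed-response curves differ by a constant offset.

```python
def additive_response(psi1, psi2):
    """Matrix of summed responses psi[i][j] = psi1[i] + psi2[j] under the
    additive model."""
    return [[a + b for b in psi2] for a in psi1]

# Hypothetical linear component scales for five levels per dimension:
psi1 = [0.0, 0.5, 1.0, 1.5, 2.0]
psi2 = [0.0, 0.8, 1.6, 2.4, 3.2]
resp = additive_response(psi1, psi2)

# The stimulus at level 3 on dimension 1 and level 4 on dimension 2
# (0-based indices 2 and 3): its response is psi1[2] + psi2[3].
summed = resp[2][3]

# Parallelism: curves indexed by psi1 differ by a constant at every
# level of psi2, i.e., there is no interaction.
gaps = [resp[1][j] - resp[0][j] for j in range(5)]
```

Non-parallel curves, in contrast, would require the saturated model described below, since no pair of component scales could reproduce them by simple addition.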

If only one of the dimensions contributes to the judgments, the additive model reduces to an *independence* model. The independence model requires estimating only *p* – 1 parameters. In Figure 2a, *ψ*^{1} increases with stimulus level along the first dimension, but *ψ*^{2} is flat, that is, independent of the level of the second dimension. The summed responses are indicated in Figure 2b by a set of lines of zero slope that are vertically displaced by the responses along the *ψ*^{1} curve. In analogy to a linear model, there is a significant main effect of the first dimension but not of the second.

The most general model requires *p*^{2} – 1 parameters, one less than the number of stimuli tested, and will be referred to as the *saturated* model. An example of response curves that might result from a saturated model is shown in Figure 2e. The important feature is that the curves are not parallel and, thus, cannot be explained by a simple additive combination of component curves as in the previous two cases.
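The parameter counts of the three nested models follow directly from the anchoring constraints. A small sketch, with *p* = 5 levels per dimension as in the experiments:

```python
def n_parameters(p, model):
    """Free parameters estimated for each nested model with p levels per
    dimension (scales anchored at zero, sigma fixed at 1)."""
    if model == "independence":
        return p - 1        # one scale, lowest value fixed at zero
    if model == "additive":
        return 2 * p - 2    # two scales, each anchored at zero
    if model == "saturated":
        return p * p - 1    # one response per stimulus, less one anchor
    raise ValueError(model)

# For p = 5: 4, 8, and 24 parameters, the last one less than the
# 25 stimuli tested.
counts = {m: n_parameters(5, m)
          for m in ("independence", "additive", "saturated")}
```

Because the models are nested, their fits can be compared with likelihood-ratio tests, with degrees of freedom given by the differences of these counts.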

The results are displayed in a format termed a *conjoint proportion plot* (CPP) by Knoblauch and Maloney (2012a). Each CPP summarizes the proportion of times that the ordinate stimulus, *S*_{kl}, was judged to show a greater fill-in than the abscissa stimulus, *S*_{ij}, coded according to the grey levels shown in the color bar at the right of each set of graphs, for every stimulus pair presented. The levels along the two dimensions are represented along each axis using a scheme in which a 5 × 5 outer grid demarcates the stimulus levels along one dimension (e.g., amplitude in Figure 3a), and the levels of the second dimension are represented as an inner subgrid nested within each square of the outer grid (e.g., frequency in the same figure). As the responses are combined across random left/right orders of presentation, only the upper left triangle of a CPP is unique and displayed.

Testing all three dimensions in a single experiment would have required 5^{3} = 125 different stimuli and 125 × 124/2 = 7,750 paired comparisons, which could reasonably be argued to be excessive. Simulation results for two-way experiments suggest that it is the total number of judgments, and not the number of conditions, that determines the precision of the estimates (Knoblauch & Maloney, 2012b). Similar results have been obtained for the MLDS technique (Maloney & Yang, 2003). This raises the possibility of obtaining good scale estimates for a three-way experiment by subsampling from the full set of stimulus pairs, although it will be necessary to verify such a conjecture via simulation.
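The combinatorial burden cited above is easy to verify. A short Python sketch:

```python
from math import comb

def n_stimuli(p, ndim):
    """Number of distinct stimuli with p levels along each of ndim dimensions."""
    return p ** ndim

def n_pairs(n):
    """Number of unordered paired comparisons among n stimuli."""
    return comb(n, 2)

# A full three-way design with 5 levels per dimension:
stimuli = n_stimuli(5, 3)   # 5**3 = 125 stimuli
pairs = n_pairs(stimuli)    # 125 * 124 / 2 = 7750 paired comparisons
```

Subsampling, say, a fixed fraction of these 7,750 pairs would reduce the session length proportionally, which is what makes the conjecture mentioned above worth testing by simulation.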

**References**

*Spatial Vision*, 10, 433–436.

*Journal of Vision*, 11(3):18, 1–8, http://www.journalofvision.org/content/11/3/18, doi:10.1167/11.3.18.

*The Journal of Physiology (London)*, 357, 241–265.

*Vision Research*, 45, 1413–1424.

*Perception*, 35, 461–468.

*Journal of the Optical Society of America A*, 31, A1–A6.

*Journal of Vision*, 12(3):19, 1–14, http://www.journalofvision.org/content/12/3/19, doi:10.1167/12.3.19.

*Vision Research*, 49, 2911–2917.

*Elements of psychophysical theory*. Oxford: Oxford University Press.

*Signal detection theory and psychophysics*. Huntington, NY: Robert E. Krieger Publishing Company.

*Vision Research*, 44, 2815–2823.

*Psychological Science*, 19, 196–204.

*MLCM: Maximum likelihood conjoint measurement* [Computer software manual]. Available from http://CRAN.R-project.org/package=MLCM (R package version 0.2).

*Modeling psychophysical data in R*. New York: Springer.

*Foundations of measurement (vol. 1): Additive and polynomial representations*. New York: Academic Press.

*Journal of Mathematical Psychology*, 32, 466–473.

*Journal of Vision*, 3(8):5, 573–585, http://www.journalofvision.org/content/3/8/5, doi:10.1167/3.8.5.

*Generalized linear models*. London: Chapman and Hall.

*Spatial Vision*, 10, 437–442.

*Il laboratorio e la città. XXI Congresso degli Psicologi Italiani* (p. 158). Milano: Società Italiana di Psicologia.

*Spatial Vision*, 18, 185–207.

*Vision Research*, 41, 2669–2676.

*Journal of the Optical Society of America A*, 22, 2207–2221.

*Journal of Vision*, 8(7):8, 1–15, http://www.journalofvision.org/content/8/7/8, doi:10.1167/8.7.8.

*Vision Research*, 43(1), 43–52.

*Vision Research*, 46, 2443–2455.

*R: A language and environment for statistical computing* [Computer software manual]. Vienna, Austria. Available from http://www.R-project.org/ (ISBN 3-900051-07-0).

*Measurement theory*. Cambridge, UK: Cambridge University Press.

*Neuron*, 74, 12–29.

*Proceedings of the National Academy of Sciences, USA*, 80, 5776–5778.

*Vision Research*, 51, 701–717.

*Trends in Neuroscience*, 19, 428–434.

*Spatial Vision*, 19, 323–340.

*IEEE Transactions on Computers*, C-21, 269–281.

^{1}See Knoblauch and Maloney (2012b), p. 202, for the derivation for MLDS, and Devinck and Knoblauch (2012) for empirical verification. The MLCM model is formally the same as that for MLDS except for two sign changes in the decision variable, and therefore, the same ideas should apply.