**Is perception of translucence based on estimates of the scattering and absorption of light, or on statistical pseudocues associated with familiar materials? We compared perceptual performance with real and computer-generated stimuli. Real stimuli were glasses of milky tea. Milk predominantly scatters light and tea absorbs it, but because the tea absorbs less as the milk concentration increases, the effects of milkiness and tea strength on scattering and absorption are not independent. Conversely, computer-generated stimuli were glasses of “milky tea” in which absorption and scattering were manipulated independently. Observers judged tea concentration while disregarding milk concentration, or vice versa. Maximum-likelihood conjoint measurement (MLCM) was used to estimate the contribution of each physical component—concentration of milk and tea, or amount of scattering and absorption—to perceived milkiness or tea strength. Separability of the two physical dimensions was better for real than for computer-generated teas, suggesting that the interactions between scattering and absorption were correctly accounted for in perceptual unmixing; unmixing, however, was always imperfect. Because the real and rendered stimuli represent different physical processes and therefore differ in their image statistics, perceptual judgments with these stimuli allowed us to identify particular pseudocues (presumably learned with real stimuli) that explain judgments with both stimulus sets.**

^{1}or pseudocues?

^{2}The results we present here provide the first systematic assessment of how humans perceive scattering and absorption of light within translucent materials.

*independent* model—which uses only a single physical variable to model performance in each task—would provide the most parsimonious fit to the data. Alternatively, both physical factors might affect perceptual judgments but do so separately, so that the effect of a change in one physical factor remains the same no matter what the strength of the other physical factor. An *additive* model is the most parsimonious way of describing this scenario. Finally, a *saturated* model is required if the two physical variables interact with each other in the effect they have on perceptual judgments. We fit these three models to our observers' judgments using MLCM analysis, and the model of best fit was determined using a nested-hypothesis test.
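As an illustration of this nested comparison, the following Python sketch simulates an additive observer making paired "which looks milkier?" judgments and then compares independent and additive models with a likelihood-ratio test. The scale values, trial count, and level count are invented for the demonstration; they are not our data, and the fitting code is a simplified stand-in for the MLCM package used in the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, chi2

rng = np.random.default_rng(0)
L = 4                          # levels per physical dimension (as in the tea-space)
s_true = np.arange(L) * 1.0    # scattering contributions (hypothetical)
a_true = np.arange(L) * 0.8    # absorption contributions (hypothetical)

# Simulate paired comparisons: is stimulus (i, j) milkier than (k, l)?
n = 3000
ij = rng.integers(0, L, size=(n, 2))
kl = rng.integers(0, L, size=(n, 2))
delta = (s_true[ij[:, 0]] + a_true[ij[:, 1]]) - (s_true[kl[:, 0]] + a_true[kl[:, 1]])
resp = (rng.standard_normal(n) < delta).astype(float)   # probit observer

def design(first, second, use_absorption):
    # Indicator coding with level 0 as the reference, first-minus-second stimulus.
    cols = []
    for lev in range(1, L):
        cols.append((first[:, 0] == lev).astype(float) - (second[:, 0] == lev).astype(float))
    if use_absorption:
        for lev in range(1, L):
            cols.append((first[:, 1] == lev).astype(float) - (second[:, 1] == lev).astype(float))
    return np.column_stack(cols)

def neg_ll(beta, X, y):
    p = norm.cdf(X @ beta).clip(1e-9, 1 - 1e-9)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

def max_ll(X, y):
    res = minimize(neg_ll, np.zeros(X.shape[1]), args=(X, y), method="BFGS")
    return -res.fun

ll_ind = max_ll(design(ij, kl, False), resp)   # independent: scattering only
ll_add = max_ll(design(ij, kl, True), resp)    # additive: scattering + absorption
G2 = 2 * (ll_add - ll_ind)                     # likelihood-ratio statistic
p_val = chi2.sf(G2, df=L - 1)                  # extra parameters in the larger model
print(f"G2 = {G2:.1f}, p = {p_val:.3g}")
```

Because the simulated observer genuinely uses both dimensions, the additive model fits far better and the independent model is rejected; with real data the same test arbitrates between the three nested models.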

*d*′) for the two physical parameters (generating a perceptually linear “tea-space”). We thus ensured that any interactions found in the main MLCM experiment were not due to scale differences in discriminability in the range of physical parameters selected (this approach has been used elsewhere; Hansmann-Roth & Mamassian, 2017; Rogers, Knoblauch, & Franklin, 2016).

*d*′ was computed using the Knoblauch and Maloney (2008) MLDS package for R (R Core Team, 2017). Having obtained the MLDS scales, we used them to choose four values of the physical variables that were perceptually equally spaced (in discriminability). The set of stimuli for the MLCM experiment comprised all possible combinations of these values for the physical variables. Since the tea-space included four levels of milkiness and four levels of tea strength, we had 16 MLCM stimuli in total.
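Inverting an MLDS scale to obtain physical levels that are equally spaced in discriminability can be sketched as below. The scale here is a toy compressive function standing in for measured *d*′ values, and the concentration units are invented for illustration.

```python
import numpy as np

# Hypothetical MLDS scale: cumulative d' as a function of milk concentration.
milk = np.linspace(0.0, 10.0, 11)        # physical levels tested (arbitrary units)
dprime = 4.0 * np.sqrt(milk / 10.0)      # toy compressive scale, not real data

# Pick 4 physical values equally spaced in d' (perceptually linear spacing).
targets = np.linspace(dprime[0], dprime[-1], 4)
chosen = np.interp(targets, dprime, milk)   # invert the scale by interpolation
print(np.round(chosen, 2))
```

Because the toy scale is compressive, the chosen physical values bunch up at low concentrations and spread out at high ones, which is exactly why equal physical spacing would not give equal discriminability.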

*d*′). The models fit latent parameters that describe the contribution of specific levels, or combinations of levels, of the physical parameters to the intensity of the perceptual quality about which the judgments were made.

*s* and absorption *a* to the percept milkiness *M* add together:

$$M_{ij} = s_i + a_j,$$

where $s_i$ is the contribution of the *i*th level of scatter and $a_j$ the contribution of the *j*th level of absorption, and likewise $M_{kl} = s_k + a_l$ for the *k*th level of scatter and the *l*th level of absorption. The probability of reporting that the milkiness *M* of an item with physical scattering and absorption (*i*, *j*) is greater than that of an item with scattering and absorption (*k*, *l*) is

$$P\left(M_{ij} > M_{kl}\right) = \Phi\left[(s_i + a_j) - (s_k + a_l)\right],$$

where the decision variable is $M_{ij} - M_{kl} + \varepsilon$, *ε* is a Gaussian random variable with zero mean and unit variance, and $\Phi$ is its cumulative distribution function. The likelihood of a response *R*(*i*, *j*, *k*, *l*) having possible response values 1 and 0 is

$$L(R) = r^{R}\,(1 - r)^{1 - R},$$

where *r* is the probability of making the decision.
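These expressions can be checked numerically. The sketch below uses made-up latent scale values for *s* and *a* (not fitted estimates) and evaluates the decision probability and the Bernoulli likelihood of a response.

```python
import numpy as np
from scipy.stats import norm

# Additive MLCM model of milkiness: M_ij = s_i + a_j (toy latent scales).
s = np.array([0.0, 0.5, 1.1, 1.8])   # scattering contributions (hypothetical)
a = np.array([0.0, 0.3, 0.7, 1.2])   # absorption contributions (hypothetical)

def p_first_milkier(i, j, k, l):
    """P(M_ij > M_kl) under zero-mean, unit-variance Gaussian decision noise."""
    return norm.cdf((s[i] + a[j]) - (s[k] + a[l]))

def bernoulli_likelihood(R, i, j, k, l):
    """Likelihood r^R (1 - r)^(1 - R) of a binary response R in {0, 1}."""
    r = p_first_milkier(i, j, k, l)
    return r**R * (1 - r)**(1 - R)

print(p_first_milkier(3, 3, 0, 0))   # strongest vs. weakest stimulus
```

Identical stimuli give a decision probability of 0.5, as the model requires, and summing the log of this likelihood over trials gives the objective that MLCM maximizes.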

*d*′ units. We fitted the simulation data to the saturated MLCM parameter estimates of perceived milkiness and strength, and used the adjusted *r*^{2} values from the fits to evaluate how well the candidate image statistics accounted for the observed data.
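The offset-plus-scale fit and the adjusted *r*^{2} computation can be sketched as follows. The perceptual estimates and the image statistic here are synthetic stand-ins, and the adjustment assumes one predictor beyond the intercept.

```python
import numpy as np

def adjusted_r2(y, yhat, n_predictors):
    """Adjusted r^2 for a fit with n_predictors terms beyond the intercept."""
    n = len(y)
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)

# Hypothetical data: MLCM perceptual estimates vs. a noiseless image statistic
# measured on the same 16 stimuli (4 x 4 tea-space).
rng = np.random.default_rng(1)
statistic = np.linspace(0.1, 0.9, 16)                         # e.g., mean saturation
percept = 0.4 + 2.5 * statistic + 0.05 * rng.standard_normal(16)

# Offset + scaled contribution, as in the text: percept ≈ b0 + b1 * statistic.
b1, b0 = np.polyfit(statistic, percept, 1)
yhat = b0 + b1 * statistic
print(round(adjusted_r2(percept, yhat, n_predictors=1), 3))
```

The scaling term is what absorbs the unit mismatch between a noiseless image statistic and noisy perceptual estimates; the adjustment penalizes extra predictors when mixtures of statistics are compared.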

*p* < 0.01 in all cases (Supplementary Table S2). This demonstrates that *both* physical variables always contributed to perceptual judgments in this task. In the perceived-milkiness task, the independent model was rejected for seven of the eight observers (Supplementary Table S3). For most observers in both tasks, the physical variables contributed in additive combination to perceptual judgments. So, when judging milkiness, every level of the distractor variable (tea strength) contributed a fixed offset to the perceptual judgment. The same was true for judgments of perceived tea strength, in that milkiness produced a fixed offset to the perceptual judgment (Figure 2a and 2b). Only two of the eight observers needed a model more complex than the additive one to fit judgments of tea strength, and only one needed more than an additive model to fit judgments of milkiness (*p* < 0.01 in all cases). For perceived tea strength, two of the additive observers (CM and DT) showed a clear tendency for increasing milkiness to increase perceived tea strength, whereas for three others (AG, CG, and GT) there was a clear tendency for it to decrease perceived tea strength. In the milkiness task, increased tea strength decreased perceived milkiness for all observers apart from BC. Observers' judgments were largely driven by the physical parameter they were asked to judge, as regressions of the additive models showed a greater contribution of "relevant" variables to the parameter estimates (seen most clearly in the additive plots in Figure 2a and 2b).

*p* < 0.01 in all cases). The independent model never provided an optimal fit (Supplementary Tables S4 and S5). In perceptual terms, variation in scattering and absorption did not map separably onto perceived milkiness and strength. Observers therefore cannot have been basing their judgments on independent estimates of the light-transport properties of absorption and scattering in the constituent liquids.

*d*′ values, and the normalized average was rescaled to the average *d*′ range for that task. When calculating best fits of the ideal-observer estimates to the averaged estimates from our observers, we used a simple linear model fit with an offset term and scaled contribution(s) from the image statistic(s) under test. Since there is no variability in the extracted image statistics, scaling is necessary to account for the noisy decisions of the real observers. We used the adjusted *r*^{2} values from the fits to evaluate how well the candidate image statistics accounted for the observed data. Mean color saturation (the S of HSV) provided a good explanation of performance in the milkiness task with real tea (adjusted *r*^{2} = 0.920). The *same* pseudocue was also best at reproducing performance on the rendered task (adjusted *r*^{2} = 0.970), even though the patterns of perceptual estimates in the two tasks were quite different. In the tea-strength tasks, no simple statistic provided a good account of behavior. For this task, the inclusion of spatial information was crucial for obtaining a good fit to the data. A linear mixture of mean value (the V of HSV) and color-saturation gradient (from the top surface of the liquid into the tea volume, summarized by a fitted exponent describing the space constant of variation in saturation as light penetrates the volume) provided a good account of the real images (adjusted *r*^{2} = 0.812). Again, the statistic that successfully accounted for performance in the task with real stimuli also produced the best fit to the perceptual estimates for the rendered stimuli (adjusted *r*^{2} = 0.894).
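As a sketch of how such statistics might be extracted (our exact pipeline is not reproduced here; the toy image, its colour values, and its decay constant are all invented), mean HSV saturation and a saturation-gradient exponent can be computed as:

```python
import numpy as np

def hsv_saturation_value(rgb):
    """Per-pixel S and V of HSV from an (H, W, 3) float RGB image in [0, 1]."""
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-9), 0.0)
    return sat, mx                      # V of HSV is the channel maximum

def saturation_space_constant(sat):
    """Slope of a log-linear fit to mean saturation by depth (image row)."""
    profile = sat.mean(axis=1)
    rows = np.arange(len(profile))
    slope, _ = np.polyfit(rows, np.log(np.maximum(profile, 1e-6)), 1)
    return slope                        # more negative = faster fall-off with depth

# Toy "tea" image: colour mixes toward grey with depth, so saturation
# decays exponentially from the surface (top row) downward.
h, w = 64, 64
depth = np.arange(h)[:, None, None] / h
k = np.exp(-3.0 * depth)                 # invented decay constant
colour = np.array([0.8, 0.5, 0.2])       # invented "tea" RGB
grey = np.array([0.8, 0.8, 0.8])
img = np.broadcast_to(grey * (1 - k) + colour * k, (h, w, 3))

s, v = hsv_saturation_value(img)
print(round(float(s.mean()), 3), round(saturation_space_constant(s) * h, 2))
```

On this synthetic image the fitted slope recovers the decay constant built into the stimulus, which is the sense in which the exponent summarizes how saturation varies as light penetrates the volume.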

*r*^{2} > 0.870) for all five observers viewing rendered tea and for six of the eight observers viewing real tea. The remaining two observers, BC and PB, gave *r*^{2} values of 0.642 and 0.778, respectively. For strength judgments, linear mixtures of mean value and color-saturation gradient provided acceptable fits to the data (adjusted *r*^{2} > 0.7) for all five observers viewing rendered tea and for five of the eight observers viewing real tea. The remaining three observers (CM, DT, and HW) gave *r*^{2} values of 0.417, 0.571, and 0.413, respectively. For all but two observers, both parameters contributed significantly to the fit (*p* < 0.005), with individual differences in performance consistent with differential weighting of the two cues. For CM the fit was poor and the mean-value parameter did not reach significance; for AC the fit was very good with mean value alone and no significant contribution from saturation gradient. Full details of the fits are provided in Supplementary Tables S6–S9.

*d*′ measure in the preliminary MLDS task and shown in the range of extracted perceptual estimates from MLCM (Figures 2 and 3). The number of trials differed between experiments; however, when we reanalyzed the data, subsampling the same number of trials for each, there were no meaningful changes to our findings.

*(pp. 77–81). San Mateo, CA: Morgan Kaufmann.*

*AAAI Workshop on Qualitative Vision*, Boston, MA.

*Journal of Dairy Science*, 98 (10), 6727–6738.

*Current Biology*, 21 (24), R978–R983.

*Perception & Psychophysics*, 35 (5), 407–422.

*Applied Optics*, 34 (15), 2802–2810.

*Applied Optics*, 36 (6), 1386–1398.

*Vision Research*, 109, 221–235.

*Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques* (pp. 369–378). New York, NY: ACM.

*ACM Transactions on Applied Perception*, 2 (3), 346–382.

*Psychological Science*, 22 (6), 812–820.

*Journal of Vision*, 4 (9): 10, 798–820, https://doi.org/10.1167/4.9.10.

*ACM Transactions on Graphics*, 32 (5), 1–19.

*i-Perception*, 8 (1), 1–16.

α- and β-caseins with tea polyphenols. *Food Chemistry*, 126 (2), 630–639.

*Proceedings of SIGGRAPH 2001, Annual Conference Series* (pp. 511–518). New York, NY: ACM.

*Journal of Statistical Software*, 25, 1–26, http://www.jstatsoft.org/v25/i02/.

*Modeling psychophysical data in R*. New York: Springer.

*R package version 0.4.1* [Computer software]. Retrieved from https://CRAN.R-project.org/package=MLCM

*Proceedings of SPIE*, 4299, 312–320.

*Nature*, 447, 158–159.

*Ergonomics*, 3 (1), 59–66.

*Introduction to the theory of statistics* (3rd ed.). New York: McGraw-Hill.

*Journal of Vision*, 10 (9): 6, 1–11, https://doi.org/10.1167/10.9.6.

*ACM Transactions on Graphics*, 25 (3), 1003–1012.

*Physically based rendering: From theory to implementation*. Burlington, MA: Morgan Kaufmann.

*Journal of the Optical Society of America A*, 33 (3), A184–A193.

*Journal of Vision*, 17 (5): 7, 1–24, https://doi.org/10.1167/17.5.7.

*PLoS Computational Biology*, 14 (4), e1006061, https://doi.org/10.1371/journal.pcbi.1006061.

*Journal of Vision*, 9 (8): 784, https://doi.org/10.1167/9.8.784. [Abstract]

*Perception*, 31 (5), 531–552.

*Psychological Review*, 109 (3), 492–519.

*Psychological Science*, 9 (5), 370–378.

*Philosophical Transactions of the Royal Society of London B: Biological Sciences*, 360 (1458), 1329–1346.

*Journal of Vision*, 14 (3): 17, 1–22, https://doi.org/10.1167/14.3.17.