**MatMix 1.0 is a novel material probe we developed for quantitatively measuring visual perception of materials. We implemented optical mixing of four canonical scattering modes, represented by photographs, as the basis of the probe. In order to account for a wide range of materials, velvety and glittery (asperity and meso-facet scattering) were included besides the common matte and glossy modes (diffuse and forward scattering). To test the probe, we conducted matching experiments in which inexperienced observers were instructed to adjust the modes of the probe to match its material to that of a test stimulus. Observers were well able to handle the probe and match the perceived materials. Results were robust across individuals, across combinations of materials, and across lighting conditions. We conclude that the approach via canonical scattering modes and optical mixing works well, although the image basis of our probe still needs to be optimized. We argue that the approach is intuitive, since it combines key image characteristics in a “painterly” approach. We discuss these characteristics and how we will optimize their representations.**

**Figure 1**

**Figure 2**

**Figure 3**

The probe image is a per-pixel linear mixture of the four basis images:

*I*_{probe} = *w*_{m}·*I*_{m} + *w*_{v}·*I*_{v} + *w*_{s}·*I*_{s} + *w*_{g}·*I*_{g},

where {*w*_{m}, *w*_{v}, *w*_{s}, *w*_{g}} are the weight values corresponding to the positions of the slider bars, ranging from 0 to 1.2 (see Figure 2), and {*I*_{m}, *I*_{v}, *I*_{s}, *I*_{g}} are the basis images under office lighting (top row in Figure 3) for Experiments 1 and 2 in the main study. The linearly mixed image *I*_{probe} plus the interface forms the probe MatMix 1.0, which allows real-time, dynamic, and interactive variation of a visual presentation of material through adjustments of the slider bars.
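The optical mixing underlying the probe is thus a plain per-pixel weighted sum of basis images. A minimal sketch in Python/NumPy (the study used MATLAB; the function name and the toy images below are illustrative, not the authors' code):

```python
import numpy as np

def mix_images(weights, basis_images):
    """Linearly mix basis images: I_probe = sum_i w_i * I_i.

    weights      -- iterable of 4 floats in [0, 1.2] (slider positions)
    basis_images -- iterable of 4 float arrays of identical shape
    """
    basis = np.stack(basis_images, axis=0).astype(float)
    w = np.asarray(weights, dtype=float)
    # Contract the weight vector against the first (mode) axis of the
    # stacked basis; the result has the shape of a single image.
    return np.tensordot(w, basis, axes=1)

# Toy 2x2 "images", one per scattering mode (matte, velvety, specular, glittery).
imgs = [np.full((2, 2), v) for v in (1.0, 2.0, 3.0, 4.0)]
probe = mix_images([0.5, 0.0, 0.5, 0.0], imgs)  # 0.5*1 + 0.5*3 = 2.0 everywhere
```

Because the mixture is linear, moving one slider changes the displayed image continuously, which is what makes the real-time interaction cheap.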

The stimulus images were constructed analogously, as linear mixtures with perturbed weights. Here {*x*_{m}, *x*_{v}, *x*_{s}, *x*_{g}} are randomly generated offsets in a range from −0.1 to 0.1 that were added to the nonzero weights only, and {*I*_{m}, *I*_{v}, *I*_{s}, *I*_{g}} are the stimulus basis images shown in Figure 3 (top row for Experiment 1, bottom row for Experiment 2). The resulting linearly mixed image is the stimulus image *I*_{stimulus}. The complete set of stimulus images for Experiments 1 and 2 is shown in Figure 4.
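Generating the perturbed stimulus weights can be sketched as follows (a NumPy illustration; `jitter_weights` and the seeded generator are my own naming, not from the paper):

```python
import numpy as np

def jitter_weights(design_weights, rng, lo=-0.1, hi=0.1):
    """Add a uniform random offset x_i in [lo, hi) to each nonzero weight.

    Zero weights stay exactly zero, so scattering modes that are absent
    from the design remain absent from the stimulus.
    """
    w = np.asarray(design_weights, dtype=float)
    offsets = rng.uniform(lo, hi, size=w.shape)
    return np.where(w != 0, w + offsets, 0.0)

rng = np.random.default_rng(0)
stim_w = jitter_weights([0.5, 0.5, 0.0, 0.0], rng)
# stim_w[2:] are exactly 0; stim_w[:2] lie in [0.4, 0.6]
```

The perturbation prevents observers from simply reproducing memorized slider positions while keeping the weight sums close to 1.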

**Table 1**

**Figure 4**

To quantify the matching behavior, we solved for the linear factor matrix **A** in Equation 3:

**Y** = **A**·**X** + **E**,

where each column in matrix **X** represents the weights of the four scattering modes in the stimulus image, the corresponding column in matrix **Y** represents the weights of the four scattering modes in the probe image (i.e., the values represented by the positions of the four sliders set by the participant), and **E** contains the residuals. We consider all eight participants together; thus, there are 45 trials × 8 participants = 360 columns in matrix **X**, matrix **Y**, and matrix **E**. The 4 × 4 linear factor matrix **A** was solved using a least-squares fit in MATLAB, and the matrix **E** was then simply calculated as the difference between **Y** and **A**·**X**. If observers were to move all sliders so that the weights in matrix **Y** were exactly equal to the corresponding weights in matrix **X** (i.e., if the matching were veridical), then **A** would be a 4 × 4 identity matrix and **E** would be a zero matrix.
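The least-squares step can be reproduced in a few lines. This NumPy sketch mirrors the MATLAB fit under the assumption that **A** minimizes the Frobenius norm of the residuals **E** (names are illustrative):

```python
import numpy as np

def fit_linear_factor(X, Y):
    """Least-squares fit of A in Y ≈ A @ X, plus residuals E = Y - A @ X.

    X, Y -- 4 x N matrices of stimulus and probe weights (columns = trials).
    """
    # Solve X.T @ A.T ≈ Y.T, i.e. minimize ||Y - A X|| in the Frobenius norm.
    At, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)
    A = At.T
    E = Y - A @ X
    return A, E

# Synthetic check: if observers matched veridically (Y == X),
# the fit recovers the identity matrix and the residuals vanish.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1.2, size=(4, 360))
A, E = fit_linear_factor(X, X.copy())
```

With real data, off-diagonal entries of `A` then directly quantify how often one scattering mode in the stimulus was answered with another mode in the probe.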

The resulting matrix **A** of Experiment 1 is surprisingly close to an identity matrix (see Table 2). To be more specific, the nondiagonal values are 0.18 or lower and thus close to 0, and the diagonal values are 0.78, 0.89, 0.91, and 1.08 for the matte, velvety, specular, and glittery modes, respectively. In the resulting matrix for Experiment 2, the first three diagonal elements decreased to 0.65, 0.69, and 0.63 for the matte, velvety, and specular modes, respectively. The diagonal value for the glittery mode is 1.09, which is similar to that of Experiment 1. The nondiagonal values, which represent the interactions between the scattering modes, are larger for Experiment 2 than for Experiment 1. To be more specific, {*w*_{m}, *w*_{v}} (the value in matrix **A** relating a velvety contribution in the stimulus to the matte slider in the probe) was 0.14 in Experiment 1, which means that occasionally velvety contributions in the stimuli were perceived to match a matte contribution *w*_{m} in the probe. This value increased from 0.14 to 0.25 in Experiment 2, showing that the chance increased of perceiving velvety contributions in the stimuli to match a matte contribution in the probe. Similarly, for the other off-diagonal combinations involving *w*_{m} and *w*_{s} the values increased from 0.16 to 0.32, from 0.18 to 0.30, from 0.04 to 0.24, and from 0.04 to 0.19. Thus, overall, a comparison of the off-diagonal elements between the two experiments shows that the interactions between perceptions of the matte, velvety, and specular modes became stronger when stimulus and probe were under different lighting and viewing conditions.

**Table 2**

As a summary measure per observer, we calculated the ratio between the sum of the diagonal values of matrix **A** and the sum of all values in matrix **A**. This ratio can vary from 0 to 1, with veridical behavior at 1 (identity matrix) and chance level at 0.25 (all values in matrix **A** being equal). For each individual, we solved the linear factor matrix **A** with the 45 trials per observer per experiment and calculated the ratios. As shown in Figure 5, in Experiment 1 the ratios for the observers were 0.80, 0.85, 0.72, 0.83, 0.80, 0.77, 0.70, and 0.80 (*M* = 0.78, *SD* = 0.05). In Experiment 2 these ratios were 0.47, 0.70, 0.58, 0.76, 0.51, 0.55, 0.64, and 0.69 (*M* = 0.61, *SD* = 0.10). Overall, all observers performed far above chance level.
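The diagonal-to-total ratio is straightforward to compute; a small sketch (the function name is mine):

```python
import numpy as np

def veridicality_ratio(A):
    """Ratio between the sum of the diagonal values of A and the sum of
    all values in A: 1 for an identity matrix (veridical matching),
    0.25 for a 4 x 4 matrix with all sixteen entries equal (chance)."""
    return np.trace(A) / A.sum()

ratio_veridical = veridicality_ratio(np.eye(4))          # 1.0
ratio_chance = veridicality_ratio(np.full((4, 4), 0.3))  # ~0.25
```

Note that the chance level of 0.25 holds for any constant matrix, because the four diagonal entries are then a quarter of the sixteen total entries.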

**Figure 5**

We also examined how close the residuals (matrix **E**) were to 0. We first took the absolute values of the 4 × 360 matrices, and then calculated the mean of all elements in each observer's 4 × 45 submatrix. The results were quite similar between observers per experiment. As shown in Figure 6, in Experiment 1 the means of the residuals' absolute values for the eight observers were 0.06, 0.11, 0.12, 0.10, 0.08, 0.10, 0.12, and 0.10 (*M* = 0.10, *SD* = 0.02). In Experiment 2 these values became 0.14, 0.12, 0.13, 0.13, 0.13, 0.18, 0.16, and 0.15 (*M* = 0.14, *SD* = 0.02). We can conclude that the least-squares fit method properly solved the linear Equation 3.
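The per-observer residual summaries follow by splitting the 360 columns into eight blocks of 45 trials; a sketch (the column layout of **E**, observers concatenated side by side, is an assumption):

```python
import numpy as np

def mean_abs_residuals(E, n_observers=8):
    """Mean absolute residual per observer.

    E -- 4 x 360 residual matrix, columns grouped per observer (45 trials each).
    """
    blocks = np.split(np.abs(E), n_observers, axis=1)  # eight 4 x 45 blocks
    return [float(b.mean()) for b in blocks]

# Synthetic example: observer 1 is off by 0.1 everywhere, the rest are perfect.
E = np.zeros((4, 360))
E[:, :45] = 0.1
res = mean_abs_residuals(E)
```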

**Figure 6**

The slope for Experiment 1 was significantly negative (*p* < 0.001). The difference between the two slopes was also significant (0.58 ± 0.3, *p* = 0.05), resulting in a slope of −0.53 for Experiment 2. The offset for Experiment 1 (99.85 ± 6.0, *p* < 0.001) was higher than that for Experiment 2 (76.35; the difference equals −23.49 ± 8.46, *p* < 0.001). These results imply that the trial duration in Experiment 1 started at a higher level than in Experiment 2, and that afterward the durations in both experiments systematically decreased with trial number, converging to the same level in the final trials. In conclusion, the main effect is a gradual but small decrease of trial duration as a function of trial number. On average, the duration was slightly above 1 min per matching trial.
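A regression with one dummy variable can be sketched with an ordinary least-squares design matrix; the fitted coefficients then read directly as Experiment 1's offset and slope plus the offset and slope differences. The synthetic numbers below are illustrative, chosen only to resemble the reported pattern, not taken from the paper:

```python
import numpy as np

def fit_dummy_regression(trial, y, is_exp2):
    """Multiple linear regression with one dummy variable for experiment.

    Model: y = b0 + b1*trial + b2*dummy + b3*(dummy*trial),
    so b0/b1 are Experiment 1's offset/slope and b2/b3 are the differences.
    """
    t = np.asarray(trial, float)
    d = np.asarray(is_exp2, float)
    design = np.column_stack([np.ones_like(t), t, d, d * t])
    beta, *_ = np.linalg.lstsq(design, np.asarray(y, float), rcond=None)
    return beta  # [offset1, slope1, offset_diff, slope_diff]

# Noise-free synthetic durations: Exp 1 starts higher, both decrease.
t = np.tile(np.arange(45), 2)
d = np.repeat([0.0, 1.0], 45)
y = np.where(d == 0, 100 - 1.1 * t, 76 - 0.53 * t)
beta = fit_dummy_regression(t, y, d)
```

The same design (with satisfaction as the dependent variable) covers the rating analysis as well; only the interpretation of the coefficients changes.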

**Figure 7**

After each trial, observers rated their satisfaction with the match on a scale from 0 (*not satisfied* with the matching) to 1 (*satisfied* with the matching). Subsequently, we took the average over all observers per trial. Excluding the first five trials, the data were again fitted by multiple linear regression with one dummy variable (Figure 8). The only significant effects were the offset for Experiment 1 (0.81 ± 0.02, *p* < 0.001) and the difference between the two offsets (−0.09 ± 0.03, *p* < 0.001). Neither slope (0.001 ± 0.01, *p* = 0.17) deviated significantly from 0. We can conclude that the participants generally found the matching task feasible, as the average satisfaction was quite high, but that changing the illumination and viewpoint conditions significantly decreased the satisfaction ratings.

**Figure 8**

Because of the randomly generated offsets {*x*_{m}, *x*_{v}, *x*_{s}, *x*_{g}} in the stimuli, the sums of the weights in the stimuli were very close to 1 but not exactly equal to 1. The averages of the sums in the stimuli were actually 1.00 ± 0.01 in Experiment 1 and 0.99 ± 0.00 in Experiment 2. We calculated the differences between the sums in the probe and the sums in the stimuli and found that these differences significantly deviated from 0 (one-sample *t* test, *p* < 0.001 for both experiments), with the sums of the weights in the probe being larger than those of the stimuli in both experiments. We also found a significant difference between the two experiments (paired two-sample *t* test, *p* < 0.001), with the average sum of the weights in the probe of Experiment 2 being larger than that of Experiment 1.
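The one-sample *t* statistic on the probe-minus-stimulus weight sums can be sketched as below; the paired two-sample test reduces to the same computation applied to per-trial differences. The data here are made up purely for illustration:

```python
import numpy as np

def one_sample_t(x, mu=0.0):
    """One-sample t statistic for H0: mean(x) == mu."""
    x = np.asarray(x, float)
    n = x.size
    return (x.mean() - mu) / (x.std(ddof=1) / np.sqrt(n))

# Illustrative per-trial differences (probe weight sum minus stimulus
# weight sum): consistently positive, so the t statistic is large.
diffs = np.array([0.05, 0.08, 0.02, 0.06, 0.04, 0.07])
t_stat = one_sample_t(diffs)
```

With real data one would look the statistic up against a *t* distribution with *n* − 1 degrees of freedom (e.g., `scipy.stats.ttest_1samp` does both steps at once).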

Figures 9 through 12 visualize the probe settings per stimulus: the ellipses depict the *SD* of bivariate normal distributions fitted to the 24 data points (8 observers × 3 repetitions) for each stimulus. Every data point represents the settings of two of the four sliders in the probe in one trial. For clarity of presentation, the data points themselves were rendered invisible in the plots. Each subplot contains six ellipses, which are the results of three different weight combinations in the stimuli in the two experiments. The crosses depict the corresponding stimulus weight combinations. This provides a means to visualize the extent to which participants would trade off—or confuse—the weights of different reflectance modes.
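Such SD ellipses can be constructed from the eigendecomposition of the fitted covariance matrix; a NumPy sketch (the helper and the sample data are illustrative, not the authors' code):

```python
import numpy as np

def sd_ellipse(points, n_vertices=100):
    """1-SD ellipse of a bivariate normal fitted to 2-D points.

    points -- (N, 2) array, e.g. 24 paired slider settings for one stimulus.
    Returns an (n_vertices, 2) array tracing the ellipse outline.
    """
    pts = np.asarray(points, float)
    mean = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)
    # Principal axes of the covariance give orientation; the square roots
    # of the eigenvalues give the SD radii along those axes.
    eigval, eigvec = np.linalg.eigh(cov)
    theta = np.linspace(0, 2 * np.pi, n_vertices)
    circle = np.stack([np.cos(theta), np.sin(theta)], axis=0)  # (2, n)
    return (mean[:, None] + eigvec @ (np.sqrt(eigval)[:, None] * circle)).T

# Illustrative cloud of 24 paired slider settings.
rng = np.random.default_rng(2)
pts = rng.normal([0.5, 0.3], [0.1, 0.05], size=(24, 2))
ellipse = sd_ellipse(pts)
```

An elongated, tilted ellipse in such a plot signals that observers traded one slider off against the other, which is exactly the confusion pattern the figures are meant to expose.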

**Figure 9**

**Figure 10**

**Figure 11**

**Figure 12**

The resulting linear factor matrix **A** for Experiment 3 is shown in Table 3. It is very close to an identity matrix, except for the values that represent the perception of the velvety mode. The ratios between the sum of the diagonal values and the sum of all values for the five observers were 0.79, 0.71, 0.62, 0.70, and 0.60 (*M* = 0.68, *SD* = 0.08), and thus far above chance level (0.25). The nondiagonal values, specifically 0.46 for {*w*_{m}, *w*_{v}} and 0.35 for {*w*_{v}, *w*_{m}}, indicate that the perception of the velvety mode strongly interacted with the perception of the matte mode in Experiment 3. The residuals were all close to 0. The averages of the absolute values of the residuals were 0.16, 0.12, 0.14, 0.16, and 0.16 for the five observers, and thus similar to those of Experiment 2.

**Table 3**

These interactions are also reflected in the nondiagonal values of matrix **A**. For example, in Figure 9 the subplot for matte versus specular (middle left) shows that matte–specular interactions increased in Experiment 2 compared to those in Experiment 1, which corresponds to an increase of the nondiagonal values {*w*_{m}, *w*_{s}} and {*w*_{s}, *w*_{m}} in Table 2. Material–lighting interactions have been addressed by many researchers (Dror, Willsky, & Adelson, 2004; Fleming et al., 2003; Hunter, 1975; Marlow et al., 2012; Motoyoshi & Matoba, 2012; Olkkonen & Brainard, 2010, 2011; Pont & te Pas, 2006; te Pas & Pont, 2005). In a recent study we combined our canonical material modes with three canonical lighting modes, and in this manner we were able to systematically investigate material–lighting interactions for a broader range of materials and lightings (Zhang, de Ridder, & Pont, 2015). We found systematic effects that depended on lighting *and* material.

**Figure 13**

**References**

… *Current Biology*, 21(24), R978–R983.

… *International Journal of Computer Vision*, 35(1), 33–44.

… *Delft design guide: Design methods*. Amsterdam, The Netherlands: BIS Publishers.

… *ACM Transactions on Graphics*, 1(1), 7–24.

… *Proceedings of SIGGRAPH* (pp. 189–198). New York: ACM Press.

… *Current Biology*, 22(20), R865–R866.

… *Vision Research*, 94, 62–75.

… *Applied Optics*, 37(1), 130–139.

… *Journal of the Optical Society of America A*, 16, 2825–2835.

… *Icarus*, 133(1), 89–97.

… *Psychological Science*, 19(2), 196–204.

… *The measurement of appearance*. New York: John Wiley.

… *Proceedings - First International Conference on Computer Vision*. New York: IEEE.

… *Journal of Statistical Software*, 25(2), 1–26.

… *Machine Vision and Applications*, 14(4), 260–268.

… *International Journal for Computational Vision and Biomechanics*, 1(1), 43–53.

… *International Journal of Computer Vision*, 31(2–3), 129–144.

… *Current Biology*, 22(20), 1909–1913.

… *Proceedings of Eurographics/SIGGRAPH Workshop on Rendering* (pp. 241–248). Aire-la-Ville, Switzerland: Eurographics Association.

… *Vision Research*, 53(1), 30–39.

… *Nature*, 447(7141), 206–209.

… *Science*, 267(5201), 1153–1156.

… *Proceedings of the 1977 Annual Conference* (pp. 444–448). New York: ACM.

… *Journal of Vision*, 15(12):940, doi:10.1167/15.12.940. [Abstract]

… *Journal of the Optical Society of America A*, 15(12), 2951–2965.

… *i-Perception*, 2(9), 1014–1034.

… *International Journal of Computer Vision*, 14(3), 227–251.

… *Proceedings of the 2nd Symposium on Applied Perception in Graphics and Visualization* (pp. 75–81). New York: ACM.

… *Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques* (pp. 55–64). New York: ACM Press/Addison-Wesley Publishing Co.

… *Communications of the ACM*, 18(6), 311–317.

… *Proceedings of SPIE*, 8291, 82910D, doi:10.1117/12.916450.

… *Perception*, 35(10), 1331–1350.

… *Journal of Vision*, 14(10):458, doi:10.1167/14.10.458. [Abstract]

… *Color Research & Application*, 10(4), 210–218.

… *Journal of the Optical Society of America A*, 57(9), 1105–1112.

… *Journal of the Optical Society of America A*, 56(7), 916–924.

… *ACM Transactions on Graphics*, 26(3), 1–9.

… *Still lifes: Technique and style—The examination of paintings from the Rijksmuseum*. Amsterdam: Rijksmuseum.

… *Computer Graphics*, 26(2), 265–272.

… *Vision Research*, 115(B), 175–187.

… *Optical Engineering*, 33(1), 285–293.

… *Visual Neuroscience*, 25(3), 371–385.

… *Proceedings of SPIE*, 9394, 93940Q, doi:10.1117/12.2085021.