The exact meaning of translucency is not universally accepted and remains subjective (Pointer, 2003). This basic definitional problem can hamper scientific communication and hinder progress in translucency perception research; we have detailed these problems in a recent position paper (Gigilashvili et al., 2020b). Care is needed to avoid miscommunication of empirical results and to ensure the reproducibility of psychophysical experiments. Experimenters should make sure that instructions are correctly understood and interpreted by their observers when the task concerns translucency perception, especially when experiments are conducted in languages other than English, because the translation of the term "translucency" might or might not differ from that of "transparency." For example, Motoyoshi (2010) reports that there is no distinction between "transparent" and "translucent" in the Japanese language, which might have impacted his experimental results. However, he also reports that observers assess translucent and transparent stimuli differently from each other, seemingly understanding the semantic difference between the two visual phenomena. This leads the author to propose that the two concepts might be orthogonal.

Scaling translucency remains a challenging and confusing task. To the best of our knowledge,
Hutchings and Scott (1977) and Hutchings and Gordon (1981) (cited in Hutchings, 2011) were the first to observe confusion among experiment participants while scaling translucency. The authors argue that "care should be taken when using the term Translucency for scaling. An increase in translucency may mean an increase in transparency to some panelists while meaning the opposite to others" (Hutchings, 2011). We have observed a similar problem in our own experiments (Gigilashvili et al., 2018b, 2020a, 2021b). The lack of knowledge on how to quantify translucency makes it challenging to measure by magnitude estimation techniques (Torgerson, 1958) and psychophysical scaling methods, such as pair comparison and rank order (Engeldrum, 2000). For example, it has been possible to quantify the magnitude of glossiness (Pellacini et al., 2000) or to differentiate more glossy from less glossy stimuli (Gigilashvili et al., 2019b; Thomas et al., 2017). However, there is no universal agreement on what "more translucent" means, nor can we tell "how much" translucency is in a given stimulus. When comparing multiple stimuli, which one is the most translucent (e.g., in Figure 2): the one closest to transparency, the one closest to opacity, or the one closest to a hypothetical peak between the two?
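To make the pair-comparison approach mentioned above concrete, the following is a minimal sketch of how an interval scale could be derived from repeated "which stimulus is more translucent?" judgments, here using Thurstone's Case V model as one standard option. The win counts are invented for illustration and do not come from any of the cited experiments:

```python
from statistics import NormalDist

def thurstone_case_v(wins):
    """Derive interval-scale values from a pairwise-comparison win-count
    matrix (Thurstone Case V). wins[i][j] = number of trials in which
    stimulus i was judged 'more translucent' than stimulus j."""
    n = len(wins)
    nd = NormalDist()
    z = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            total = wins[i][j] + wins[j][i]
            # clip proportions away from 0 and 1 so the z-score stays finite
            p = min(max(wins[i][j] / total, 0.01), 0.99)
            z[i][j] = nd.inv_cdf(p)
    # scale value of stimulus i = mean z-score across its comparisons
    return [sum(z[i]) / (n - 1) for i in range(n)]

# hypothetical data: three stimuli, 20 trials per pair
wins = [
    [0, 15, 18],
    [5, 0, 12],
    [2, 8, 0],
]
scale = thurstone_case_v(wins)  # stimulus 0 ends up highest on the scale
```

Note that such a scale only orders the stimuli along whatever dimension the observers actually used; if panelists interpret "more translucent" in opposite ways, as Hutchings warns, the resulting scale is not interpretable.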
Di Cicco et al. (2020b) observed that translucency was judged least consistently among all assessed attributes in still life paintings of grapes, which might be attributed to variation in conceptual understanding rather than to anatomical differences among observers. Nagai et al. (2013) defined "more translucent" in their experiments as having stronger subsurface scattering. Wijntjes et al. (2020) defined translucency as "the opposite of opaqueness, but... not limiting to pure transparency. For example, tea with milk is more translucent than a cup of white paint." Di Cicco et al. (2020b) asked observers to quantify the magnitude of translucency of painted grapes and defined the term in a similar manner: "Translucency: how translucent do the grapes appear to you? Low values indicate that no light passes through the grapes and the appearance is opaque; high values indicate that some light passes through the grapes." However, care should be taken in these cases as well, because we do not know whether the relation between scattering and translucency is monotonic. Materials with high and low scattering might be considered opaque and transparent, respectively, with both having zero translucency. Many works avoid direct quantification of translucency in psychophysical experiments and instead encapsulate it in matching tasks, asking observers to match stimuli by appearance (
Fleming & Bülthoff, 2005;
Xiao et al., 2014) and/or by translucency (
Gigilashvili et al., 2019c;
Gkioulekas et al., 2013;
Xiao et al., 2020). This, at first glance, simplifies the task. However, there is little empirical evidence that the HVS can fully isolate translucency from other attributes of total appearance. If the definition of translucency is ambiguous to observers, how can they match materials by translucency, and how can we guarantee that they are not making up their own rules for matching the stimuli, e.g., by lightness or any property other than translucency? To identify what observers base their decisions on, experimenters can calculate particular image statistics and check how well these statistics explain the variation in observer responses (as done by Chadwick et al., 2019). However, there is no guarantee that the actual statistics or cues used by observers will be correctly identified by the experimenters. Another workaround found in the literature is using terms that are more familiar and less abstract than translucency. For instance, Chadwick et al. (2019) asked observers to assess the "strength" and "milkiness" of tea images. However, the association between strength, milkiness, and translucency is not clear either.
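The image-statistics check described above can be sketched as correlating a candidate statistic, computed per stimulus image, with the mean observer rating for that stimulus. The choice of statistic, the data values, and the function below are all hypothetical and purely illustrative:

```python
def pearson_r(x, y):
    """Pearson correlation between a candidate image statistic
    and mean observer ratings, computed from first principles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# hypothetical data: a per-image statistic (e.g., mean luminance in a
# shadowed region) vs. the mean translucency rating of each stimulus
statistic = [0.12, 0.25, 0.31, 0.44, 0.58]
ratings = [1.8, 2.9, 3.1, 4.2, 4.9]
r = pearson_r(statistic, ratings)  # strong r suggests the statistic tracks responses
```

Even a near-perfect correlation, however, only shows that the statistic covaries with the responses, not that observers actually relied on it, which is precisely the caveat raised above.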
Hutchings (2011) proposes using the "extent of visibility" scale of Galvez and Resurreccion (1990) instead of referring to "more translucent" and "less translucent." However, that scale is intended for assessing the appearance of mungbean noodles in a plastic cup and for quantifying the visibility of objects behind the noodle strands; thus, it is not readily applicable to solid non-see-through materials. Furthermore, the inconstancy of translucency across different shapes makes it challenging to clearly separate translucency as a property of a given object from translucency as a property of the material the object is made of. We observed (Gigilashvili et al., 2018b, 2019c, 2021b) that human observers find it challenging to compare or match translucency across different shapes for two reasons. First, it is difficult to estimate the optical properties of a material and to decouple its visual appearance from shape-related effects, which speaks to the limited ability of the HVS to "invert optics," as has been noted previously (Anderson, 2011; Chadwick et al., 2019; Fleming & Bülthoff, 2005). Second, the task is inherently ambiguous: translucency cues vary not only between thick and thin objects, but also between the thick and thin regions of a particular object, leaving observers uncertain which region to assess and how to come up with a single translucency measure. According to Hutchings (1994), a heterogeneous material might have "more than one colour, perhaps more than one translucency, gloss, or surface irregularity" that no appearance profile system can deal with. The observers in the experiments by Nagai et al. (2013) pointed out that heterogeneous translucency resulting from a varying shape complicated the task, although according to the authors it remained viable. This raises a question: should the translucency of a complex-shaped homogeneous material be judged globally for the whole object or material, or locally for each specific region of the object?