Abstract
Human observers are able to arrive at rather precise estimates of the illumination direction of shaded rough surfaces. When a surface is rough, the illumination generates “visible texture” through differential shading at the scale of the roughness, whereas shading at the scale of the significant global surface curvature leads to the more familiar “shading”. Shading and texture are related because both are due to the same direction of illumination. The shading is exploited by the so-called “Shape From Shading” (SFS) algorithms. Formal analysis reveals that much more powerful SFS algorithms are possible if the texture is taken into account. Since conventional SFS ignores the illumination texture cue, human observers presumably apply different methods.
When the roughness is not isotropic one expects systematic errors in the visual detection of illumination direction, conceivably giving rise to erroneous shape estimates. Although it is possible to construct (complicated) algorithms to deal with this, it is unknown whether human observers are able to deal with anisotropy. It seems a priori likely that observers will use simple, robust methods that work well for the majority of cases (isotropic roughness).
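The systematic error predicted for anisotropic roughness can be illustrated numerically. The sketch below (a minimal illustration, not the paper's actual model) assumes a simple estimator that takes the illumination azimuth to be the principal axis of the second-moment matrix of the image gradient; this estimator is unbiased for isotropic roughness, and the example shows how an anisotropic texture pulls its estimate away from the true azimuth toward the fine-structure axis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256

# Anisotropic rough height field: white noise filtered with an elongated
# Gaussian low-pass (twice the bandwidth along x as along y).
noise = rng.standard_normal((n, n))
fy = np.fft.fftfreq(n)[:, None]   # row frequency ("y")
fx = np.fft.fftfreq(n)[None, :]   # column frequency ("x")
lp = np.exp(-(fx**2 / (2 * 0.08**2) + fy**2 / (2 * 0.04**2)))
h = np.real(np.fft.ifft2(np.fft.fft2(noise) * lp))

# Linearized (shallow-relief) Lambertian shading, illumination azimuth 30 deg
phi_true = np.deg2rad(30.0)
img = 1.0 + np.cos(phi_true) * np.gradient(h, axis=1) \
          + np.sin(phi_true) * np.gradient(h, axis=0)

# Estimator that implicitly assumes isotropic roughness: principal axis of
# the second-moment matrix of the image gradient ("edge detector" responses)
gy, gx = np.gradient(img)
C = np.array([[np.mean(gx * gx), np.mean(gx * gy)],
              [np.mean(gx * gy), np.mean(gy * gy)]])
w, v = np.linalg.eigh(C)
e = v[:, np.argmax(w)]                          # principal eigenvector
phi_est = np.rad2deg(np.arctan2(e[1], e[0])) % 180.0

# The roughness anisotropy pulls the estimate toward the fine-structure (x)
# axis, well away from the true azimuth of 30 degrees.
print(f"estimated azimuth {phi_est:.1f} deg (true 30.0 deg)")
```

The bias is not a sampling artifact: for this degree of anisotropy the expected estimate lies near 6 degrees, a systematic error of roughly 24 degrees.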
We address this issue through systematic psychophysics on illumination direction detection as a function of roughness anisotropy. We find that observers indeed commit systematic errors that are quantitatively predicted by the theory. The results are precise enough to allow the inference that illumination direction detection is based on the second-order statistics of edge-detector (rather than line-detector) activity.
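An estimator of the kind the inference points to can be sketched in a few lines. The code below (a minimal illustration, not the paper's actual algorithm) shades an isotropic random rough surface under a known illumination azimuth and recovers that azimuth from the second-order statistics of first-derivative ("edge detector") responses, namely the 2x2 second-moment matrix of the image gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# Isotropic random rough surface: low-pass-filtered white noise
noise = rng.standard_normal((n, n))
fy = np.fft.fftfreq(n)[:, None]
fx = np.fft.fftfreq(n)[None, :]
lp = np.exp(-(fx**2 + fy**2) / (2 * 0.05**2))
h = np.real(np.fft.ifft2(np.fft.fft2(noise) * lp))

# Shallow-relief Lambertian shading under illumination azimuth phi_true
phi_true = 30.0
lx, ly = np.cos(np.deg2rad(phi_true)), np.sin(np.deg2rad(phi_true))
img = 1.0 + lx * np.gradient(h, axis=1) + ly * np.gradient(h, axis=0)

# Second-order statistics of edge-detector (first-derivative) responses
gy, gx = np.gradient(img)
C = np.array([[np.mean(gx * gx), np.mean(gx * gy)],
              [np.mean(gx * gy), np.mean(gy * gy)]])

# For isotropic roughness C is (up to scale) the identity plus a rank-one
# term along the illumination direction, so the principal eigenvector
# points along the illumination azimuth (mod 180 degrees).
w, v = np.linalg.eigh(C)
e = v[:, np.argmax(w)]
phi_est = np.rad2deg(np.arctan2(e[1], e[0])) % 180.0
err = min(abs(phi_est - phi_true), 180.0 - abs(phi_est - phi_true))
print(f"estimated azimuth {phi_est:.1f} deg, error {err:.1f} deg")
```

Note the inherent 180-degree ambiguity: second-order statistics cannot distinguish an azimuth from its opposite, so the estimate is only defined modulo 180 degrees.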