The recovery of shape from shading is a classic problem in psychology and computer vision. The majority of work has studied the perception of shape using Lambertian or matte surfaces that have an analytically simple reflectance function, whereby the light that reaches the surface is scattered equally in all directions (e.g., Erens, Kappers, & Koenderink, 1993; Horn, 1970; Kleffner & Ramachandran, 1992; Ramachandran, 1988; Todd & Mingolla, 1983). The luminance,
I, of Lambertian surfaces varies as a cosine function of the angle, α, between the outward-pointing surface normal, N, and the vector oriented toward the light source, L, such that I = cos α = N · L. Although the inverse cosine of luminance recovers the angle between the light source and the surface normal (i.e., α = cos⁻¹ I), there is no simple relationship between that angle, α, and the surface normal, N, that the visual system can integrate to recover 3D shape. Luminance varies only with the elevation of the surface normal relative to the illuminant direction and provides no information to disambiguate the azimuth of surface normals (see
Figure 1). This theoretical challenge has inspired many studies of how accurately the human visual system recovers shape from shading. An early study by Todd and Mingolla (1983) found that observers generally judged the curvature of matte cylindrical surfaces to be lower than ground truth. Thus, observers appear to distort the representation of the 3D shape of Lambertian surfaces by systematically underestimating surface curvature or depth, a finding confirmed by subsequent studies (e.g., Curran & Johnston, 1996; Mingolla & Todd, 1986; Wijntjes, Doerschner, Kucukoglu, & Pont, 2012).
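The azimuth ambiguity described above can be sketched numerically. The following minimal Python/NumPy example (the function name and the specific normals and light direction are illustrative, not from the original) computes Lambertian luminance I = cos α = N · L for two surface normals that share the same elevation relative to the light but differ in azimuth; both yield identical luminance, so I alone recovers only α = cos⁻¹ I, not the full normal.

```python
import numpy as np

def lambertian(normal, light):
    """Lambertian luminance I = cos(alpha) = N . L for unit vectors.

    Clamped at zero for surfaces facing away from the light.
    """
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light, dtype=float)
    n /= np.linalg.norm(n)
    l /= np.linalg.norm(l)
    return max(0.0, float(np.dot(n, l)))

# Illustrative light direction: directly overhead (+z).
L = (0.0, 0.0, 1.0)

# Two normals at the same elevation (45 deg from L) but different azimuths.
s, c = np.sin(np.pi / 4), np.cos(np.pi / 4)
n1 = (s, 0.0, c)   # azimuth 0 deg
n2 = (0.0, s, c)   # azimuth 90 deg

i1 = lambertian(n1, L)
i2 = lambertian(n2, L)

# The inverse cosine recovers only the elevation angle alpha, in degrees.
alpha = np.degrees(np.arccos(i1))
```

Here i1 and i2 are equal (both cos 45°), and alpha comes back as 45°, illustrating why luminance constrains the elevation of the normal but leaves its azimuth undetermined.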