Each gauge figure setting represents a local surface attitude, which can be interpreted as a depth gradient. The depth gradients can be integrated to a surface, which comprises the (x, y) values of the triangulation vertices and the perceived depth values z. The depth values were compared to analyze perceptual differences between matte and velvet shapes. First, we quantified possible depth compressions for the velvet shapes by performing a linear regression between the matte and velvet conditions. If the slope (a) of the regression z_velvet = a·z_matte + b is smaller than 1, depth is compressed; if a > 1, depth is extended. This was analyzed within observers (between BRDFs). Second, interobserver similarity was quantified by calculating the adjusted R² (coefficient of determination) of the regression between the depth values of each stimulus. The higher the adjusted R², the higher the similarity in perceived shape and the lower the level of ambiguity. Besides this "straight" regression, which reveals depth compression (a) and similarity (R²), we also performed affine regressions. It has been proposed by Koenderink et al. (2001) that when one assumes that "planes can reliably be differentiated from curved surfaces, owing to cues such as shading and so forth," image ambiguities are described by the affine transformation
z′(x, y) = az + b + cx + dy. It is thus similar to the straight regression with the addition of a plane cx + dy. The affine regression captures all linear differences between perceived depths. If the adjusted R² of the affine regression (which accounts for the two extra parameters c and d) is significantly larger than the adjusted R² of the straight regression, the difference between the depths is attributed to the affine plane, since the compression/stretch parameter a is already present in the straight regression.
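As a concrete sketch of this model comparison, the following Python/NumPy snippet fits both the straight and the affine regression to simulated depth values and computes their adjusted R². The coordinates, depth maps, and coefficients are hypothetical illustrations, not the study's data:

```python
import numpy as np

def adjusted_r2(z, z_hat, n_predictors):
    # Adjusted coefficient of determination for a least-squares fit
    # with n_predictors predictors (intercept excluded from the count).
    n = len(z)
    ss_res = np.sum((z - z_hat) ** 2)
    ss_tot = np.sum((z - z.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)

def fit(design, z_cmp, n_predictors):
    # Least-squares fit of z_cmp against the columns of the design matrix.
    coef, *_ = np.linalg.lstsq(design, z_cmp, rcond=None)
    return coef, adjusted_r2(z_cmp, design @ coef, n_predictors)

# Hypothetical vertex coordinates and matte depth values:
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 500)
y = rng.uniform(-1.0, 1.0, 500)
z_matte = np.sin(2.0 * x) * np.cos(2.0 * y)
# Velvet depths simulated as compressed (a = 0.6 < 1) plus an affine tilt:
z_velvet = 0.6 * z_matte + 0.1 + 0.3 * y

ones = np.ones_like(x)
# Straight regression: z_velvet = a * z_matte + b
(a, b), r2_straight = fit(np.column_stack([z_matte, ones]), z_velvet, 1)
# Affine regression: z_velvet = a * z_matte + b + c * x + d * y
coef_affine, r2_affine = fit(np.column_stack([z_matte, ones, x, y]), z_velvet, 3)

print(f"straight: a = {a:.2f}, adjusted R^2 = {r2_straight:.3f}")
print(f"affine:   adjusted R^2 = {r2_affine:.3f}")
```

Because the simulated difference between the two depth maps is exactly a compression plus a plane, the affine fit absorbs the tilt and its adjusted R² exceeds that of the straight fit, which is the pattern the comparison above is designed to detect.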