Abstract
The perceived lightness of a surface varies with how it is oriented with respect to a directional light source (Ripamonti et al., VSS 2003). Some of this effect is photometric: the luminance of the surface varies with its orientation. But some of the effect is perceptual: the visual system takes geometry into account in its computation of surface lightness and achieves partial lightness constancy. Here we present additional data on lightness constancy with respect to variation in scene geometry, and ask whether individual observers' data may be understood using an equivalent illuminant model (Boyaci et al., JOV, 2003). The equivalent illuminant principle is that observers apply the correct form of the inverse-optics calculation required to achieve constancy, but do so with an incorrect estimate of the physical properties of the scene illumination. In our experiments, observers viewed a series of matte test cards posed in an experimental chamber illuminated by a single light source. On each trial, observers chose a sample from a grayscale palette that matched the test card in lightness. Such matches were measured as a function of test card slant for two different light source positions. We formulated an equivalent illuminant model with two parameters: the position of a directional light source and the relative intensity of the directional and ambient illumination in the chamber. These parameters, together with a model of image formation, predict how lightness should vary as a function of test card slant. We found that i) each observer's data were well described by the model, ii) the parameters of the model varied from observer to observer, and iii) the parameters of the model varied sensibly when the physical light source position was changed. These results provide a compact summary of lightness matching performance and suggest that it is reasonable to explain human lightness constancy using the computational language of equivalent illuminant models.
Supported by: NIH Grant #EY10016
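The two-parameter equivalent illuminant model described above can be sketched as follows. This is an illustrative assumption of the model's form, not the authors' actual implementation: it assumes Lambertian image formation with an ambient term, so observed luminance is reflectance times (ambient + directional falloff with the cosine of the angle between surface normal and light direction). The observer is modeled as inverting this optics with equivalent (rather than physical) illuminant parameters; all names and the angle parameterization here are hypothetical.

```python
import numpy as np

def luminance(reflectance, slant_deg, light_deg, ambient, directional):
    """Lambertian shading with an ambient term (illustrative assumption).

    The directional contribution falls off with the cosine of the angle
    between the surface normal and the light direction, clipped at zero.
    """
    cos_term = max(0.0, np.cos(np.radians(slant_deg - light_deg)))
    return reflectance * (ambient + directional * cos_term)

def predicted_match(true_reflectance, slant_deg,
                    phys_light_deg, phys_ambient, phys_directional,
                    eq_light_deg, eq_ambient, eq_directional):
    """Reflectance match predicted by an equivalent illuminant observer.

    The observer divides the observed luminance by the illumination it
    *assumes* (the equivalent illuminant), which is the correct inverse-optics
    computation but with possibly incorrect illuminant parameters.
    """
    lum = luminance(true_reflectance, slant_deg,
                    phys_light_deg, phys_ambient, phys_directional)
    assumed_illum = eq_ambient + eq_directional * max(
        0.0, np.cos(np.radians(slant_deg - eq_light_deg)))
    return lum / assumed_illum
```

When the equivalent parameters equal the physical ones, the predicted match recovers the true reflectance at every slant (full constancy); mismatched parameters produce the systematic slant-dependent failures of constancy the model is fit to.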