Abstract
The perceived brightness of an image region is determined not only by its local luminance, but also by its surrounding context. Various image-computable models of brightness perception have been proposed to account for these context effects, and they are typically evaluated against empirical data from matching experiments. Here we test models of the (FL)ODOG family on their capacity to account for perceptual scales derived with Maximum Likelihood Conjoint Measurement. Our test case is White's effect, in which a target embedded in the white phase of a high-contrast grating looks darker than an equiluminant target embedded in the black phase, irrespective of the luminance relationship between the targets and their flanking bars. We estimated perceptual scales for both targets across a range of luminances from black to white. Perceptual scales for targets in the black phase were compressive, whereas scales for targets in the white phase were S-shaped. We compared these scales to transfer functions (TFs) of multiscale spatial filtering models with divisive normalization ((FL)ODOG family; e.g., Blakeslee & McCourt, 1999; Robinson, Hammon, & de Sa, 2007). While the TFs qualitatively predict (the direction of) White's effect, they fail to capture the shape characteristics of the perceptual scales. We adjusted the divisive normalization step of the model in two ways. First, in addition to orientation specificity, we made the normalization contrast-polarity specific, separating filter outputs into positive and negative contrast channels, analogous to ON and OFF pathways in the visual system. Polarity-specific normalization had little effect on the TFs. Second, we applied a pointwise nonlinearity (an exponent) to the filter outputs, which provided more control over the shapes of the TFs. However, even with different exponents for the ON and OFF channels, model TFs could not capture the shapes of the perceptual scales.
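The two model adjustments described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the half-wave rectification into ON/OFF channels, the RMS pooling used for the divisive step, and the default exponents are all assumptions made for illustration.

```python
import numpy as np

def polarity_specific_normalization(filter_outputs,
                                    exponent_on=2.0,
                                    exponent_off=2.0,
                                    eps=1e-6):
    """Illustrative sketch (not the published model):
    split each filter output into ON (positive) and OFF (negative)
    half-wave rectified channels, apply a pointwise exponent,
    then divisively normalize each polarity by its own RMS pool."""
    normalized = []
    for f in filter_outputs:
        # half-wave rectification into polarity-specific channels
        on = np.clip(f, 0, None) ** exponent_on
        off = np.clip(-f, 0, None) ** exponent_off
        # divisive normalization within each polarity (RMS pooling assumed)
        on_norm = on / (np.sqrt(np.mean(on ** 2)) + eps)
        off_norm = off / (np.sqrt(np.mean(off ** 2)) + eps)
        # recombine signed response
        normalized.append(on_norm - off_norm)
    return normalized
```

With separate `exponent_on` and `exponent_off`, the two polarity channels can be shaped independently, which is the degree of freedom the abstract reports as still insufficient to reproduce the measured perceptual scales.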