Abstract
Invariance or constancy is a hallmark of visual processing. Linear techniques such as classification images and spike-triggered averaging are thought to be incapable of recovering the front-end template or receptive-field structure of a higher-order visual mechanism whose response may be invariant to the position, size, or orientation of a target. Using the max-pooling property of a typical uncertainty model, we show analytically, in simulations, and with human experiments (single-letter identification in the fovea and periphery, with and without positional uncertainty) that the effect of intrinsic uncertainty (i.e., invariance) can be reduced or even eliminated by embedding a signal of sufficient strength in the masking noise of a classification-image experiment. We refer to this technique as “signal clamping”. We argue against combining the classification images across stimulus-response categories, as is typically done. We show that the signal-clamped classification images from the error trials contain a clear high-contrast image that is negatively correlated with the perceptual template associated with the presented signal; they also contain a low-contrast “haze” that is positively correlated with the superposition of all the templates associated with the erroneous response. In the case of positional uncertainty, we show that this “haze” provides an estimate of the spatial extent of the uncertainty. With the effect of intrinsic uncertainty significantly reduced by signal clamping, we further show that a covariance analysis can be applied to different regions of a classification image to reveal the elementary features that are the components of the perceptual template seen in the classification image.
Supported by: NIH/NEI R03-EY016391
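The error-trial predictions above can be illustrated with a toy simulation of a max-pooling observer. This is a minimal sketch, not the authors' actual model: the 1-D stimulus, the three channel positions (positional uncertainty), the two orthogonal "letter" patterns, and the contrast value are all invented for illustration. An "A" is embedded at a strong, clamping contrast, and the noise fields from error trials ("B" responses) are averaged: the resulting classification image projects negatively onto the presented A template and weakly positively onto the B templates at every position (the "haze").

```python
import numpy as np

rng = np.random.default_rng(1)
n, w = 24, 4
positions = [2, 10, 18]  # hypothetical channel positions (intrinsic positional uncertainty)
pat = {"A": np.array([1, 1, 1, 1]) / 2.0,    # toy "letter" patterns, unit norm,
       "B": np.array([1, -1, 1, -1]) / 2.0}  # mutually orthogonal

def make_template(letter, pos):
    t = np.zeros(n)
    t[pos:pos + w] = pat[letter]
    return t

# one template per letter per position
T = {L: np.array([make_template(L, p) for p in positions]) for L in "AB"}

def respond(stim):
    # max-pooling observer: report the letter whose best-matching position wins
    return max("AB", key=lambda L: (T[L] @ stim).max())

def error_trial_ci(contrast, trials=40000):
    # present letter A at the center position, embedded in white noise
    signal = contrast * T["A"][1]
    sums, count = np.zeros(n), 0
    for _ in range(trials):
        noise = rng.normal(0.0, 1.0, n)
        if respond(signal + noise) == "B":  # keep error trials only
            sums += noise
            count += 1
    return sums / count  # error-trial classification image

ci = error_trial_ci(contrast=2.0)
a_center = float(T["A"][1] @ ci)     # projection onto the presented template: negative
b_haze = float((T["B"] @ ci).mean()) # mean projection onto B templates: small positive "haze"
```

Because the strong embedded signal clamps the winning channel to the signal location, errors occur only when the noise cancels the presented template there while supporting a B template somewhere, which is exactly the negative image plus positive haze structure described in the abstract.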