Abstract
The mean percept of 10 Hz ±[L-M] flicker with a sawtooth temporal waveform depends on the polarity of the sawtooth, such that rapid-to-L flicker appears greenish and rapid-to-M flicker appears reddish. We use this phenomenon to explore a model of adaptation that includes gain adjustments that depend on both the mean and the variance of the sensory signal. The model has two components. Component A captures adaptation to the mean of the signal by first estimating a “running expectation” of signal strength, and then transmitting only the difference between the instantaneous signal and the expectation value. The expectation value is calculated as a mean over the past signal history, weighted by a single exponential decay with a characteristic time constant. Component B captures adaptation to the variance of the signal by modifying a multiplicative gain applied to the output of Component A. The gain is reduced (beyond a threshold term) in inverse proportion to a running expectation of the signal variance, calculated over the past variance history, weighted by a single exponential decay with a second characteristic time constant. The model can be used to account for two sets of empirical data: shifts in the point of subjective equivalence in red versus green judgements of ±[L-M] probes following adaptation to sawtooth ±[L-M] flicker, and the [L-M] offsets required to null the average chromatic appearance of sawtooth flicker at different frequencies and sawtooth asymmetries. Adaptation to the recent variance of sensory signals provides a plausible account of the non-linear visual response to sawtooth flicker.
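To make the two components concrete, a minimal sketch of the model equations is given below. The notation is introduced here for illustration and is not taken from the paper: s(t) is the instantaneous signal, τ_A and τ_B are the two characteristic time constants, and θ is the threshold term.

% Hypothetical formalisation of the two-component model described in the abstract.
\begin{align}
  % Component A: running expectation of the signal, an exponentially weighted
  % mean over the past signal history with time constant \tau_A.
  \bar{s}(t) &= \frac{1}{\tau_A}\int_{0}^{\infty} e^{-u/\tau_A}\, s(t-u)\,\mathrm{d}u ,\\
  % Component A output: only the difference between the instantaneous signal
  % and the expectation value is transmitted.
  a(t) &= s(t) - \bar{s}(t) ,\\
  % Running expectation of the signal variance, an exponentially weighted mean
  % over the past variance history with a second time constant \tau_B.
  \bar{v}(t) &= \frac{1}{\tau_B}\int_{0}^{\infty} e^{-u/\tau_B}\, a(t-u)^{2}\,\mathrm{d}u ,\\
  % Component B output: a multiplicative gain, reduced beyond the threshold
  % term \theta in inverse proportion to the running variance estimate.
  r(t) &= \frac{a(t)}{\theta + \bar{v}(t)} .
\end{align}

Under this reading, the gain 1/(θ + v̄(t)) approaches 1/θ when recent variance is low and falls as recent variance grows, which is the sense in which Component B adapts to the variance of the signal.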