Abstract
The visual system faces two challenges in constructing stable and useful percepts (appearances) from measured signals. First, it must keep itself calibrated. To do this, the response of a sensor can be corrected by making it dependent on additional signals that correlate with what the sensor measures, exploiting the assumption that the signals' joint distribution is stationary. Adaptations of this sort can explain negative aftereffects in which apparent color, velocity, texture density, stereoscopic depth, and other attributes become contingent on some other signal in a display (McCollough, 1965; Allan & Siegel, 1993; Mayhew & Anstis, 1972; Durgin, 1996; Blaser & Domini, 2002). Second, the system must monitor the ecological validities of potential cues so as to exploit new cues in a changing world (Brunswik, 1956). In this type of learning, a new signal comes to have the same perceptual effect as the cues with which it was correlated during training, so the aftereffects are positive (Haijiang et al., 2005). Because both types of perceptual adaptation might be engaged by stimuli with strong contingencies between signals, a theory is needed to predict which effect will dominate in a given experiment. Here we discuss several factors that, from a computational point of view, should matter: the rates at which sensors drift; how the actual distributions of signal values vary over time in natural environments; the system's ability to monitor changes in those distributions (which depends in part on sampling density); and the manner in which the ecological validities of signals change over time in the organism's environment.
Supported by NIH grants EY-013988 and P30 EY-001583.
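To make the contrast between the two adaptation types concrete, the following is a minimal simulation sketch, not a model from the paper itself: the learning rules, the rates eta and alpha, and the coding of both signals as ±1 are all illustrative assumptions. It contrasts an error-correcting recalibration rule, which nulls the sensor's conditional mean response and so produces a negative contingent aftereffect, with a validity-tracking recruitment rule, which transfers the trained cue's effect onto the new signal and so produces a positive aftereffect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training: sensor signal s (e.g., color) is artificially correlated with
# a contextual signal c (e.g., orientation). Here the contingency is perfect.
c_train = rng.choice([-1.0, 1.0], size=2000)
s_train = c_train

# --- Mechanism 1: contingent recalibration (negative aftereffect) ---
# Assuming the joint distribution of (s, c) is stationary, the system learns
# a context-dependent correction that drives the conditional mean response
# toward zero in each context.
eta = 0.01                        # recalibration rate (hypothetical)
bias = {+1.0: 0.0, -1.0: 0.0}     # learned correction for each context
for s, c in zip(s_train, c_train):
    response = s - bias[c]
    bias[c] += eta * response     # null the conditional mean response

# Test a neutral stimulus (s = 0) in each context: the percept is pushed
# opposite to the trained pairing, i.e., a negative aftereffect.
for c in (+1.0, -1.0):
    print(f"recalibration, context {c:+.0f}: percept = {0.0 - bias[c]:+.2f}")

# --- Mechanism 2: cue recruitment (positive aftereffect) ---
# The system tracks the ecological validity of c as a cue to the property
# that s reports, weighting c by its learned correlation with s.
alpha = 0.01                      # recruitment rate (hypothetical)
w = 0.0                           # learned weight on the new cue
for s, c in zip(s_train, c_train):
    w += alpha * (s - w * c) * c  # LMS-style validity tracking

# Test with c alone: the percept follows the trained direction,
# i.e., a positive aftereffect.
for c in (+1.0, -1.0):
    print(f"recruitment, context {c:+.0f}: percept = {w * c:+.2f}")
```

Under these assumptions the same training contingency drives the two rules to opposite test-phase outcomes, which is the competition the abstract argues a predictive theory must resolve.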