Abstract
We have previously reported that following four hours of natural viewing with a 3-fold reduction of input contrast, subjects' contrast discrimination improves in the adapted range (psychophysics) and cortical response increases (fMRI) (Kwon et al., VSS 2007). Here, we ask whether the adaptation is best characterized as “response gain” (steepening of the underlying contrast response function, CRF) or “contrast gain” (shift in the midpoint of the CRF without a change in shape). We present a theoretical rationale for predicting that adaptation to long-term contrast reduction should result in response gain, while short-term adaptation to changes in stimulus contrast should result in contrast gain.
Three normally-sighted subjects contributed psychophysical contrast-discrimination data and fMRI BOLD responses for a range of contrasts (1–33%) before and after four hours of reduced-contrast viewing. CRFs were fit with a four-parameter variant of the Naka-Rushton equation (Rmax, C50, n, m), using both psychophysical and fMRI data and a standard linking hypothesis. We examined the parameter changes between pre- and post-adaptation and interpreted these changes to distinguish between contrast gain and response gain: an increase of Rmax in the post-test signifies response gain; a decrease of C50 signifies contrast gain. The mechanism of the adaptation was tested with an F-test designed to identify the model that best accounts for the data with the fewest parameters. Our results showed that the best-fitting model is response gain in V1 and V2 (F(2,20) = 30.64 and 28.90, respectively; both p < .001), with the response-gain model increasing the Rmax value by a factor of approximately 1.30 for V1 and V2.
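The following is a minimal sketch, not the authors' analysis code, of the kind of model comparison described above: a four-parameter Naka-Rushton CRF of the common form R(c) = Rmax · c^n / (c^n + C50^n) + m is fit to pre- and post-adaptation responses, and nested F-tests ask whether letting Rmax change (response gain) or C50 change (contrast gain) improves the fit over a no-change baseline. The data values, parameter bounds, and exact model nesting here are illustrative assumptions and do not reproduce the study's F(2,20) design.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def naka_rushton(c, rmax, c50, n, m):
    """Four-parameter CRF: R(c) = Rmax * c^n / (c^n + C50^n) + m."""
    return rmax * c**n / (c**n + c50**n) + m

# Hypothetical responses at contrasts spanning the tested 1-33% range.
contrasts = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 33.0])
pre_resp  = np.array([0.10, 0.18, 0.32, 0.55, 0.80, 0.95])
post_resp = np.array([0.14, 0.25, 0.44, 0.72, 1.05, 1.24])

c_all   = np.concatenate([contrasts, contrasts])
r_all   = np.concatenate([pre_resp, post_resp])
is_post = np.concatenate([np.zeros(len(contrasts)), np.ones(len(contrasts))])

def fit_model(which):
    """which: 'none' (shared CRF), 'rmax' (response gain), or 'c50' (contrast gain)."""
    def model(c, rmax, c50, n, m, g=1.0):
        # g multiplies Rmax or C50 in the post-adaptation session only.
        post_gain = np.where(is_post == 1, g, 1.0)
        rmax_eff = rmax * post_gain if which == "rmax" else rmax
        c50_eff  = c50  * post_gain if which == "c50"  else c50
        return naka_rushton(c, rmax_eff, c50_eff, n, m)
    if which == "none":
        p0, lb, ub = [1.0, 8.0, 2.0, 0.0], [0.01, 0.5, 0.5, -1.0], [10.0, 100.0, 6.0, 1.0]
    else:
        p0, lb, ub = [1.0, 8.0, 2.0, 0.0, 1.0], [0.01, 0.5, 0.5, -1.0, 0.1], [10.0, 100.0, 6.0, 1.0, 10.0]
    params, _ = curve_fit(model, c_all, r_all, p0=p0, bounds=(lb, ub))
    resid = r_all - model(c_all, *params)
    return np.sum(resid**2), len(p0)

# Compare each gain model against the no-change baseline with a nested F-test.
rss_base, k_base = fit_model("none")
for label, which in [("response gain", "rmax"), ("contrast gain", "c50")]:
    rss_gain, k_gain = fit_model(which)
    df1, df2 = k_gain - k_base, len(r_all) - k_gain
    F = ((rss_base - rss_gain) / df1) / (rss_gain / df2)
    p = 1.0 - f_dist.cdf(F, df1, df2)
    print(f"{label}: F({df1},{df2}) = {F:.2f}, p = {p:.4f}")
```

On this framing, a gain model is preferred when its extra parameter significantly reduces the residual error relative to the shared-CRF baseline, which parallels the abstract's criterion of the model that best accounts for the data with the fewest parameters.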
Our results indicate that long-term contrast adaptation (on the scale of four hours) is better described as “response gain” than “contrast gain.”
This work was supported by NIH grant R01 EY002934.