In this work, we have demonstrated and quantified contrast amplification through a texture orientation discrimination task. Psychometric functions for forced-choice orientation discrimination showed a classical dipper shape in which pairing below-threshold low-contrast test rows with high-contrast amplifier rows enabled detection of the global texture orientation. Contrast amplification was observed in all participants, for both first- and second-order textures, and across a range of geometries and display durations. Amplification factors greater than 5× were observed for first-order textures and greater than 2.5× for second-order textures. Compared with absolute detection thresholds, amplification exceeded 11×. The amplification curves presented in
Figure 3b amount to a performance curve for the resolution of texture orientation discrimination. This curve is defined at low contrasts by intrinsic nonamplifiable noise and at high contrasts by contrast gain control mechanisms. The overall shape of the response function is well described by the contrast gain control model provided in
Equation 4 for both first- and second-order textures. The overall equation describing the contrast gain control model's response to our stimuli is quite similar in form to equations that have been developed to describe feed-forward gain control (shunting inhibition) in human psychophysical models (e.g., Lu & Sperling,
1996; Sperling & Sondhi,
1968), gain control in cortical neurons (Carandini et al.,
1997; Heeger,
1992; Kapadia, Ito, Gilbert, & Westheimer,
1995), and purely empirical formulations to describe data in discrimination tasks (Foley & Legge,
1981; Legge & Foley,
1980). As was noted, nearly all of the elements of the model are components that are widely believed to be involved in visual processing. What has been added here is (1) embedding the components for global orientation discrimination within a larger framework of early visual processing, (2) proposing that a nonlinearity more complex than a gain-control-modulated power law is required to model visual transduction, and (3) demonstrating that the computational model can fit a large data set of more than 200 data points for each observer.
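Equation 4 itself is not reproduced in this section. For orientation, the family of models cited above (Heeger-style normalization and the Foley-Legge discrimination formulations) shares a common divisive form, sketched below as an illustration; the exponents p and q and the semisaturation constant σ here are generic placeholders, not the fitted parameters of Equation 4.

\[
R(c) \;=\; \frac{c^{\,p}}{c^{\,q} + \sigma^{\,q}},
\]

where c is stimulus contrast, the numerator exponent p produces the accelerating nonlinearity at low contrasts, and the divisive denominator with semisaturation contrast σ produces response compression (gain control) at high contrasts.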