Abstract
Identification of two attributes of a single object is more accurate than identification of the same attributes when they are distributed one to each of two objects. If focusing attention on one object narrows the tuning of the perceptual template, the effect should be magnified when the similarity of the alternatives falls on the rapidly changing portion of the template, where performance is most sensitive to changes in tuning. Recent results suggest that attention effects depend on discrimination precision. The goal of the current project was to extend the taxonomy of attention by quantitatively examining the interaction between focused attention and judgment precision. Observers made moderately precise judgments of the orientation (±10°) and phase (center light/dark) of one Gabor object, or of the orientation of one Gabor and the phase of another, in six levels of external noise. The objects appeared at 7° eccentricity to the left and right of fixation. The family of contrast psychometric functions across external noise levels showed object attention effects at all contrasts, with a magnitude that varied considerably across observers. An elaborated perceptual template model (ePTM; Jeon, Lu, & Dosher, 2009), which handles non-orthogonal stimuli, accounts for the full family of contrast psychometric functions in both single-object and dual-object conditions for these moderately precise discriminations, providing a direct test of template sharpening. The ePTM framework gives a systematic account of object attention and the joint effects of external noise, contrast, and orientation difference: object attention results in narrower tuning, and therefore higher asymptotic performance across external noise levels and a reduced effect of external noise, as suggested by Liu, Dosher, and Lu (2009). Object attention sharpens the tuning of the template and excludes external noise, with an impact that depends on judgment precision. The attention-precision framework explains the variation in the magnitude of attention effects across tasks.
Funded by 5R01MH81018 and by the AFOSR.
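To make the modeling logic concrete, the following is a minimal illustrative sketch, not the authors' fitted ePTM: it uses one common simplified form of the perceptual template model signal-to-noise equation for orthogonal stimuli, and models object attention as an external-noise exclusion factor (`a_ext`) standing in for a narrower template. All parameter values and the helper name `ptm_dprime` are assumptions for illustration; the actual ePTM adds terms for non-orthogonal (similar) alternatives that are not shown here.

```python
import numpy as np

def ptm_dprime(contrast, n_ext, beta=1.5, gamma=2.0,
               n_mul=0.2, n_add=0.01, a_ext=1.0):
    """d' for a simplified perceptual template model (illustrative only).

    contrast : signal contrast(s)
    n_ext    : external noise contrast
    beta     : template gain to the signal
    gamma    : nonlinear transducer exponent
    n_mul    : multiplicative internal noise coefficient
    n_add    : additive internal noise standard deviation
    a_ext    : external-noise exclusion factor; a_ext < 1 stands in for a
               narrower template that passes less external noise through.
    """
    signal = (beta * contrast) ** gamma
    noise = np.sqrt((a_ext * n_ext) ** (2 * gamma)
                    + n_mul ** 2 * signal ** 2
                    + n_add ** 2)
    return signal / noise

# Illustrative comparison of dual-object (baseline) and single-object
# (attended) conditions at several external noise levels; attention is
# modeled here only as external noise exclusion (a_ext < 1).
contrasts = np.linspace(0.01, 0.5, 50)
for n_ext in (0.0, 0.08, 0.16):
    d_dual = ptm_dprime(contrasts, n_ext, a_ext=1.0)
    d_single = ptm_dprime(contrasts, n_ext, a_ext=0.7)
    # Percent-correct psychometric functions would follow from d' via the
    # appropriate decision rule for the identification task.
```

In this simplified form, a smaller `a_ext` raises d' mainly at high external noise, which is one way to express the abstract's claim that narrower tuning reduces the effect of external noise while leaving the qualitative pattern of contrast psychometric functions intact.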