Traditionally, experiments using the classification image analysis have used additive white Gaussian noise as the masking stimulus. White noise images, being spatially uncorrelated by design, do not contribute any artificial structure to the classification images. In our experiments, we used 1/f noise as the stimulus because the many large-scale, target-like salient features inherent in its structure made it an effective masker. Because 1/f noise is spatially correlated, however, the resulting classification images are not unbiased linear templates. To obtain an unbiased linear estimate, we can apply a prewhitening filter (Abbey & Eckstein, 2000) to the classification images obtained using 1/f noise. An example of the dipole classification image for observer L.K.C. before and after prewhitening is shown in
Figure 6. However, even the classification images obtained using the prewhitening filter do not reflect the true unbiased template in our experiments. This is because, unlike the typical psychophysical situation, the contribution of each pixel to the classification image is shift variant across trials (fixation points in this case) for several reasons. First, oculomotor precision and measurement errors inherent in eye movements and their recording introduce spatial uncertainty in the exact location of an observer's fixation. Second, even if we ignore errors in recording fixations and simply assume that an observer used only a single visual feature to succeed in the search task, there is no guarantee that the observer fixated precisely the same location on that feature every time. For example, suppose the observer always looked for black triangles in the "bow-tie" search. During search, the observer could fixate the left triangle on some trials and the right triangle on others. The noise samples extracted around these fixations are therefore not perfectly aligned across trials, resulting in spatially blurred classification images. However, this does not imply that observers are unable to use precise shape information. In a related study (Beutter, Eckstein, & Stone,
2004), subjects performed an 8-AFC contrast discrimination task, and classification images were generated both from a saccade-contingent analysis and from the 8-AFC perceptual decisions. The classification images for these two cases did not differ significantly, indicating that saccadic mechanisms can indeed use precise shape information to guide search.
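As a rough illustration, the prewhitening step described above can be sketched in a few lines of NumPy. The sketch assumes a square classification image and stationary masking noise with a known 1/f amplitude spectrum (power falling as 1/f²), so that the prewhitening filter reduces to multiplying each Fourier coefficient by f². The function names and the 64 × 64 image size are illustrative, not taken from the study.

```python
import numpy as np

def radial_freq(n):
    """Radial spatial-frequency grid for an n x n image (cycles/image)."""
    f = np.fft.fftfreq(n) * n
    fx, fy = np.meshgrid(f, f)
    return np.hypot(fx, fy)

def make_1f_noise(n, rng):
    """Sample of 1/f noise: white noise shaped so amplitude falls as 1/f."""
    r = radial_freq(n)
    amp = np.where(r == 0, 0.0, 1.0 / np.maximum(r, 1e-12))
    white = np.fft.fft2(rng.standard_normal((n, n)))
    noise = np.fft.ifft2(white * amp).real
    return noise / noise.std()

def prewhiten(classification_image):
    """Prewhiten a classification image obtained with 1/f noise.

    For stationary noise, the prewhitening filter is the inverse of the
    noise power spectrum; for 1/f noise (power ~ 1/f^2) this amounts to
    multiplying each Fourier coefficient by f^2 (the DC term is zeroed).
    """
    n = classification_image.shape[0]
    filt = radial_freq(n) ** 2          # inverse of the 1/f^2 power spectrum
    spec = np.fft.fft2(classification_image)
    return np.fft.ifft2(spec * filt).real

rng = np.random.default_rng(0)
img = make_1f_noise(64, rng)            # stand-in for a classification image
white_img = prewhiten(img)
```

In practice the raw classification image would come from averaging the noise fields around fixations, and the noise power spectrum would be estimated from the actual stimulus ensemble rather than assumed analytically.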