Abstract
We investigate how bottom-up features such as color, intensity, and orientation at different spatial scales may be biased in a top-down manner, so as to promote the detection of a known target and suppress interference from known distractors. Using eye tracking during visual search, we probed the extent to which our visual system promotes the target's features or suppresses the distractors' features. Three subjects searched for a known target (an upright 'L') among three known distractor types: SAME, MORE, and NEW. SAME distractors shared the same features as the target; each was the target rotated clockwise by 90 or 180 degrees. MORE distractors contained a greater amount of the target's features. NEW distractors contained, in addition to the target's features, a new feature absent in the target. Each subject performed 240 trials, each containing 25 items (one target and eight items of each distractor type) appearing at random locations. Analysis of the captured eye movement data consistently showed, for all subjects, a higher number of fixations on the SAME distractors than on the MORE and NEW distractors. This suggests that the horizontal and vertical features were mildly suppressed while the diagonal feature was strongly suppressed, resulting in maximum suppression of the NEW distractors and least suppression of the SAME distractors. Thus, the exact amount of promotion or suppression of a feature appears to depend on the difference between its responses to the target and the distractor: if a feature responds more strongly to the target than to the distractor, it is promoted; if it responds equally to both, it is neither promoted nor suppressed; and if it responds more strongly to the distractor than to the target, it is suppressed. In conclusion, this study suggests a computational mechanism for feature biasing based on knowledge of the target and distractors, which yields testable predictions for single-unit recordings and psychophysics experiments.
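The biasing rule stated above (promote features that respond more to the target, leave equal responders unbiased, suppress features that respond more to the distractor) can be sketched as a simple multiplicative gain. This is an illustrative sketch only, not the paper's implementation; the function name `feature_gain` and the ratio-based gain are assumptions chosen to match the qualitative rule in the text.

```python
def feature_gain(target_response: float, distractor_response: float) -> float:
    """Hypothetical top-down gain for one bottom-up feature.

    > 1.0 promotes the feature (responds more strongly to the target),
    = 1.0 leaves it unbiased (responds equally to both),
    < 1.0 suppresses it (responds more strongly to the distractor).
    """
    if target_response == 0.0 and distractor_response == 0.0:
        return 1.0  # feature irrelevant to both: leave it unbiased
    # Ratio of responses: one simple choice consistent with the stated rule.
    return target_response / max(distractor_response, 1e-9)


# Example consistent with the abstract: the diagonal feature of NEW
# distractors responds to the distractor but not to the upright 'L' target,
# so it receives the strongest suppression.
gains = {
    "vertical": feature_gain(1.0, 1.0),    # equal response -> gain 1.0
    "horizontal": feature_gain(1.0, 1.0),  # equal response -> gain 1.0
    "diagonal": feature_gain(0.0, 1.0),    # distractor-only -> gain 0.0
}
```

Under this sketch, a feature's bottom-up response would be multiplied by its gain before saliency computation, so distractor-diagnostic features like the diagonal contribute nothing to the search map.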