Nicole Gaid, Jennifer Mills, Laurie Wilcox; The role of meaning in visual search. Journal of Vision 2008;8(6):321. doi: 10.1167/8.6.321.
The visual search paradigm is widely recognized as a means of assessing the salience of image features and of distinguishing properties that are processed pre-attentively. Recent research has used face stimuli to show high-level influences on putative pre-attentive processing (Hershler & Hochstein, 2005; Reddy, Wilken & Koch, 2004). Others have shown similar advantages with natural images (Rousselet et al., 2004; Li et al., 2002); however, with such complex stimuli it is often difficult to determine the basis of the effect.
Here we use meaningful, non-face stimuli to evaluate high-level influences in a visual search task. Targets were black-and-white images of food and everyday objects; distractor stimuli were scrambled versions of the target items in which local features were re-positioned. The two classes of images and their distractors did not differ in their low-level image properties (RMS contrast, frequency content). In a visual search experiment, observers indicated whether a target was present in a set of distractors. Within a session, all image types and distractor levels were randomly interleaved. Reaction times for non-food images increased with the number of distractors over the full range tested (n = 5−80). Reaction times for food images initially increased, but flattened at approximately 20 distractors; further increases in the number of distractors had no effect on performance for this class of stimuli. This food-specific pop-out effect is robust and shows no effect of gender. Moreover, an image identification task shows that there is no difference in the discriminability of the two groups of images.
Our results show that so-called pre-attentive processing is not restricted to low-level image properties, but is clearly influenced by meaning. These data provide another piece of evidence against simple hierarchical models of visual information processing, and in favor of more integrative models, such as that proposed by Lee and Mumford (2003).