Abstract
Understanding category structure helps us make sense of the world, but little is known about how we use our knowledge of object categories during search. This study used event-related potential (ERP) markers of attentional object selection to investigate differences between search for one or two specific visual objects and category-based search, and to explore how categorical search can overcome working memory limitations. In contrast to most previous ERP studies of visual search, which used colored shapes or alphanumeric stimuli, we employed more complex pictorial images of clothing and kitchen utensils. This enabled us to examine attentional selection in a more naturalistic context in which participants might try to "find the pants" amongst kitchen utensils (or vice versa). In different blocks, participants searched for a single target (e.g., pants), one target that could appear in two different views (e.g., a shirt in one of two possible orientations), two different targets (e.g., either a shoe or a scarf), or a category-defined target (e.g., any of eleven different clothing items). The N2pc component (an ERP marker of attentional object selection) was measured in response to target objects. As expected, this component was largest in the single-target condition, where target selection could be based on a perceptual match with a search template. Presenting a single target object from different views had little effect on the N2pc, suggesting that search was object-based rather than view-based. N2pc components were attenuated and delayed in the two-target condition, and even more so for category-defined targets, reflecting search efficiency costs when target selection cannot be guided by a single-object template. However, a reliable N2pc was present even during search for one of eleven possible category-defined targets, demonstrating that category-based search remained surprisingly efficient. Our results show that category-based attentional guidance is readily available during search for complex naturalistic visual objects.
Meeting abstract presented at VSS 2013