Abstract
Visual attention can be implicitly biased toward spatial locations that are informative, in environment-based or viewer-based reference frames (Chun & Jiang, 1998; Jiang & Swallow, 2012). Features can also capture attention differently depending on spatial context (Anderson, 2014). In prior work we suggested that category-selective attentional biases to object parts could develop from a history of those parts being diagnostic for individuation (Chua, Richler, & Gauthier, 2014; Chua, Richler, & Gauthier, VSS 2014). Here, we tested whether attentional biases could develop for the top or bottom of objects, as a function of object category, when object identity is irrelevant. Subjects performed a visual search task, indicating which direction the target, a "T," was pointing. Targets and distractors were superimposed on several objects from two Greeble categories, Ploks and Glips, which varied in location on the screen. Critically, in the first two blocks of the study, targets appeared in different parts of Ploks and Glips (e.g., top vs. bottom) 89% of the time. In the final two blocks, targets appeared in both halves of both object categories equally often. In Experiment 1 (n=21), search times were faster in the "rich" half of each category, and this advantage persisted into the third block (F1,20=6.91, p=0.016, ηp2=.26), evidence for learned attention in an object-based frame of reference as a function of category. In Experiment 2 (n=22), we replicated the result and verified that it was driven by categories rather than exemplars by switching to novel Greebles in block 3. Subjects categorized Ploks and Glips before the search task to make the two categories explicit. Again, search times were shorter in the "rich" half of each category, persisting into the third block (F1,23=11.17, p=0.0024, ηp2=.35). These results demonstrate category-specific learned attentional biases in an object-based frame of reference.
Meeting abstract presented at VSS 2015