Abstract
Models of attentional guidance suggest that the visual system can split heterogeneous items into homogeneous subsets and inspect these subsets separately. It is implied, however, that this subset segmentation is based predominantly on preattentive feature selection. As features are unbound at the preattentive level, the visual system has no "knowledge" of conjunction heterogeneity (Treisman, 2006). Four experiments demonstrate that information about conjunctions is globally available and affects attentional deployment during search. In Experiments 1 and 2, observers searched for orientation×color conjunctions under either variable or consistent mapping conditions. The number of concurrently presented distracting conjunctions was 2 or 3, while the alphabet of features remained constant. Proportions of features were manipulated to dissociate between feature- and conjunction-based segmentation strategies. The results suggest that conjunctions can be used for segmentation along with features. In Experiments 3 and 4, observers searched for the same orientation×color conjunctions among 2 other distracting conjunctions sharing either color or orientation with the target. An irrelevant size dimension was added so that (1) all items were the same size; (2) size correlated completely with the other features (congruent condition), making conjunction subsets even more distinguishable; or (3) size was orthogonal to the other features (incongruent condition), multiplying the number of conjunctive subsets. In both variable and consistent search tasks, congruent displays yielded the most efficient search, which can be ascribed to a redundancy gain. The incongruent condition yielded the least efficient search. The results correspond to the classical Garnerian pattern of integrality (Garner & Felfoldy, 1970), suggesting a global representation of conjunctions rather than of separate features. The overall conclusion contradicts a purely preattentive framework of subset segmentation.
It is consistent, however, with the notion of broadly distributed attention (Treisman, 2006), which somehow binds features but fails to locate particular conjunctions and to estimate their proportions precisely, yet can subsequently guide focused attention.
Meeting abstract presented at VSS 2013