Abstract
In Wang, Buetti, and Lleras (2017), we recently proposed a technique to predict reaction times in efficient search tasks with heterogeneous displays (displays containing various combinations of different types of objects), based on the performance characteristics observed when participants complete simpler search tasks with homogeneous displays (displays where all non-target elements are identical). Here we explored a related question: in the context of an efficient search task, how do separate visual features (e.g., shape, color) combine to create the signal that differentiates a target from distractors? To address this question, Experiment 1 evaluated search efficiency for a target that differed from distractors only in color (cyan target; blue, orange, and yellow distractors); Experiment 2 evaluated efficiency when the target differed from distractors only in shape (half-disc target; circle, triangle, and diamond distractors). Finally, in three subsequent experiments, we created a target by combining the two previous target features (cyan half-disc) and distractors that were combinations of the different color and shape features. The question was: can we predict the logarithmic search efficiency in the mixed-feature conditions based on the log efficiency observed in Experiments 1 and 2 (single-feature conditions)? We compared predictions from a categorical feature guidance model and from our contrast-signal model of parallel search (in which contrast signals from orthogonal feature dimensions combine in Euclidean fashion). The results showed an improvement in search efficiency that was much larger than predicted by either model, suggesting that the contrast between target and distractors increases in an over-additive fashion when multiple visual features are combined.
Meeting abstract presented at VSS 2018
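As a minimal sketch of the Euclidean combination rule mentioned in the abstract, the snippet below combines contrast signals from two orthogonal feature dimensions as the Euclidean norm. The function name and numeric values are illustrative assumptions, not data or code from the study:

```python
import math

def euclidean_combined_contrast(c_color: float, c_shape: float) -> float:
    """Combine contrast signals from two orthogonal feature dimensions
    (e.g., color and shape) as a Euclidean norm, per the contrast-signal
    model's combination rule described in the abstract."""
    return math.sqrt(c_color ** 2 + c_shape ** 2)

# Made-up contrast magnitudes for illustration only:
c_color, c_shape = 3.0, 4.0
print(euclidean_combined_contrast(c_color, c_shape))  # 5.0
```

Under this rule, the combined contrast always exceeds either single-feature contrast but never exceeds their arithmetic sum; the abstract's finding of over-additive improvement means the observed gain was larger than such a prediction.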