Abstract
Feature-based attention is critical for exploring both real-world and artificial displays. Prior work (Nothelfer & Franconeri, VSS 2014) shows a substantial benefit for selection of redundant dimensions (color and shape) that specify a set, relative to selection of either dimension alone. We tested whether this redundancy benefit depends on attentional mode (global versus serial search) and whether it produces response time differences when viewing realistic displays. Objects were grouped in each quadrant of the screen, with targets forming a partial ring across three quadrants, embedded among distractors. In one block (global task), participants indicated toward which quadrant the ring's gap was angled ('ring' trials). Another block (serial task) contained these same 'ring' trials, but also similar 'mixed' trials in which objects from two target-containing quadrants were randomly placed so that they no longer formed a target ring, encouraging a serial attentional mode; here, participants indicated which quadrant lacked targets. In both blocks, participants pressed the spacebar as soon as they knew the answer, after which a mask appeared and participants indicated the specific quadrant. Targets (e.g., blue asterisks) were identical to each other within a trial, and differed from distractors in color only (color trials), shape only (shape trials), or both color and shape (redundant trials). Examining only 'ring' trials, we found that selection of redundant features was faster (by 40 ms) than selection via the faster of the two single dimensions (color or shape trials), computed per participant. This benefit did not differ by attentional mode. Selection of redundant dimensions thus appears to speed both global and serial selection of a set of objects.
This finding has implications for feature-encoding guidelines in data visualization (e.g., graphing software such as Microsoft Excel defaulting to redundant shape/color glyphs) across a range of tasks (e.g., judging the overall shape of a set of data points versus searching for outlier points).
Meeting abstract presented at VSS 2016