Abstract
Reading a visualization is like reading a paragraph. Each sentence is a comparison: the left ones are bigger than the right ones, this difference is smaller than that one, the red lines are increasing but the yellow lines are decreasing. What determines which comparisons are made first? People often group objects by 'spatial' (e.g., proximity) and 'featural' (e.g., color, size) cues. When grouping cues compete, spatial cues tend to beat featural cues (Brooks, 2015; Wagemans et al., 2012), as they appear to be processed in parallel across a display (Franconeri, Bemis, & Alvarez, 2009), in contrast to featural cues, which are argued to be grouped by only one value (a single color, shape, or size) at a time (Huang & Pashler, 2002; Yu, Xiao, Bemis, & Franconeri, 2019). Using bar charts as a case study, we explored how spatial proximity and similarity in size and color affect which relations people tend to extract from visualized data. We showed participants various 2x2 bar charts depicting different main effects and interactions, asked them to generate sentences comparing the bar values, and analyzed how often participants compared bars that had similar sizes (bar heights), similar colors, or were spatially proximate. We found that participants were approximately 31% more likely to generate sentences comparing spatially proximate bars than spatially separated ones. This tendency was doubled when the spatially proximate bars had similar sizes and halved when they did not. Interestingly, participants rarely grouped and compared bars by color, such that varying the color mapping in a bar chart had a negligible impact on which comparisons a viewer would make.