Vision Sciences Society Annual Meeting Abstract | September 2021
Visual Salience and Grouping Cues Guide Relation Perception in Visual Data Displays
Author Affiliations
  • Cindy Xiong
    University of Massachusetts Amherst
    Northwestern University
  • Chase Stokes
    Northwestern University
  • Steve Franconeri
    Northwestern University
Journal of Vision, September 2021, Vol. 21, Issue 9, 2095. https://doi.org/10.1167/jov.21.9.2095
Abstract

Reading a visualization is like reading a paragraph. Each sentence is a comparison: the left ones are bigger than the right ones, this difference is smaller than that, the red lines are increasing but the yellow lines are decreasing. What determines which comparisons are made first? People often group objects by ‘spatial’ (e.g., proximity) and ‘featural’ (e.g., color, size) cues. When grouping cues compete, spatial cues tend to beat featural cues (Brooks, 2015; Wagemans et al., 2012), as they appear to be processed in parallel across a display (Franconeri, Bemis, & Alvarez, 2009), in contrast to featural cues, which are argued to be grouped by only one value (a single color, shape, or size) at a time (Huang & Pashler, 2002; Yu, Xiao, Bemis, & Franconeri, 2019). Using bar charts as a case study, we explored how spatial, size, and color similarities influence which relations people tend to extract from visualized data. We showed participants various 2x2 bar charts depicting different main effects and interactions, asked them to generate sentence comparisons about the bar values, and analyzed how often participants compared bars that had similar sizes (bar heights), similar colors, or were spatially proximate. We found that participants were approximately 31% more likely to generate sentences comparing spatially proximate bars than spatially separated ones. This tendency was doubled when the spatially proximate bars had similar sizes and halved when they did not. Interestingly, participants rarely grouped and compared bars by color, such that varying the color mapping in a bar chart had a negligible impact on which comparisons a viewer made.
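As a purely illustrative sketch of the kind of stimulus described above, the Python/matplotlib snippet below builds a 2x2 bar chart in which spatial proximity groups the bars by one factor while color is mapped to the other. The factor names, bar values, and colors are assumptions made for illustration only; they are not the study's materials.

import matplotlib.pyplot as plt
import numpy as np

# Hypothetical values for a 2x2 design (factor A x factor B); not the study's data.
values = np.array([[4.0, 6.0],    # A = a1: bars for b1, b2
                   [5.0, 9.0]])   # A = a2: bars for b1, b2

fig, ax = plt.subplots(figsize=(4, 3))
group_centers = np.array([0.0, 2.5])   # spatial (proximity) cue: bars grouped by factor A
offsets = np.array([-0.45, 0.45])      # within-group positions
colors = ["firebrick", "goldenrod"]    # featural (color) cue: color mapped to factor B

for j, (offset, color) in enumerate(zip(offsets, colors)):
    ax.bar(group_centers + offset, values[:, j], width=0.8,
           color=color, label=f"b{j + 1}")

ax.set_xticks(group_centers)
ax.set_xticklabels(["a1", "a2"])
ax.set_ylabel("value")
ax.legend(title="factor B")
plt.tight_layout()
plt.show()

Swapping which factor drives spatial grouping and which drives color would, on the abstract's account, change which bars viewers tend to compare first, since proximity rather than color similarity dominates the comparisons people report.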
