Abstract
Colormaps represent data by mapping dimensions of color (e.g., lightness) onto quantities (e.g., amplitude in spectrograms, correlation in correlation matrices, and activation in neuroimages). Does the specific assignment between poles of the color dimension (e.g., dark vs. light) and poles of the quantity dimension (e.g., more vs. less) influence observers' ability to interpret colormap data visualizations? There is a robust bias to interpret darker colors as mapping onto larger quantities (Dark+ bias) when no legend specifies the true mapping (Schloss, Gramazio, & Walmsley, VSS 2015), but does the Dark+ bias persist when a legend explicitly defines the color-quantity mapping? We addressed this question by comparing participants' response times (RTs) to correctly interpret colormaps when a legend specified a dark+ (greater quantities coded as darker) vs. a light+ (greater quantities coded as lighter) mapping. We operationalized the Dark+ bias as faster RTs for dark+ than for light+ mappings. Participants were presented with fictitious data matrices in which columns represented time, rows represented alien species, and cell color represented how often each species was spotted during each time window. To respond, participants interpreted the legend and then reported whether there were more sightings early vs. late. We presented the legend in each of four conditions: 2 color-quantity mappings (light+/dark+) × 2 orientations ("greater" positioned higher vs. lower on the legend). Colormaps were constructed from the orthogonal combinations of 5 color scales (ColorBrewer Blue/ColorBrewer Red/Hot/Autumn/Gray/Jet) × 2 background colors (white/black) × 2 sides for the darker color (left/right) × 20 replications (1600 trials in total). Consistent with the Dark+ bias, RTs were significantly faster when the legend specified the dark+ mapping (p < .001). This bias runs contrary to the common light+ mapping in neuroimages (Christen et al., 2013). The results indicate that observers have predictions about how colors should map onto quantities, and that data visualizations violating those predictions are more difficult to interpret.
Meeting abstract presented at VSS 2016
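For illustration only, the trial structure described above can be enumerated with a short sketch. This is not the authors' code; it assumes the four legend conditions (2 mappings × 2 orientations) were fully crossed with the colormap factors (5 color scales × 2 backgrounds × 2 darker-color sides × 20 replications), which reproduces the stated total of 1600 trials. Factor labels are placeholders, and the color-scale names are omitted rather than guessed.

import itertools

# Hedged sketch of the factorial design, assuming the legend conditions
# are crossed with the colormap factors (not confirmed by the abstract).
mappings     = ["light+", "dark+"]                    # which pole codes "greater"
orientations = ["greater-higher", "greater-lower"]    # where "greater" appears on the legend
color_scales = range(5)                               # 5 scales per the abstract (names omitted)
backgrounds  = ["white", "black"]
darker_side  = ["left", "right"]
replications = range(20)

trials = list(itertools.product(mappings, orientations, color_scales,
                                backgrounds, darker_side, replications))
print(len(trials))  # 2 * 2 * 5 * 2 * 2 * 20 = 1600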