Abstract
When we depict our data as visualizations, we care about how effectively the eye can extract the original data values. That effectiveness is limited by Weber’s Law, under which error grows in proportion to the magnitude of the value being judged (Cleveland & McGill, 1984; Heer & Bostock, 2010). If you can reliably see a 1-pixel change in a 10-pixel dot plot or bar graph, it should take ~10 pixels of change in a 100-pixel bar to match the same level of performance. That magnitude is measured relative to some baseline value, such as 0 pixels. Little existing work in graphical perception explores how other baselines might affect performance. For example, it is easier to see the difference between two lines differing by 2° when their orientations are 89° vs. 91° (straddling the 90° categorical boundary) than when they are 15° vs. 17° (Bornstein & Korda, 1984). Participants completed 100 randomly ordered trials in which they saw an initial display containing a dot at a particular height (X = 1, 2, 3, ... 100). Viewers then ‘drew’ the dot again at a new screen location on a subsequent display. When the dot was presented on its own (no y-axis), we replicated the Weber’s Law error effect, though with an intriguing propensity to overestimate small values. However, when the dot was presented near or on a y-axis, the error pattern was mirror-symmetric, presumably because viewers chose the closer end of the range as the baseline. We also observed a repulsive bias away from the 50% midpoint, where viewers were more likely to draw a dot at 49% as ~45%, and a dot at 51% as ~55%, suggesting that the midpoint of the axis serves as a categorical boundary. These findings translate categorical perception effects to data visualization, and chart a path toward guidelines for more precise, and less biased, data visualizations.
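As a brief sketch of the Weber’s Law relationship assumed above (the symbols ΔI, I, and k are ours, introduced here for illustration and not taken from the paper), the just-noticeable difference scales with the magnitude being judged:

\[
\frac{\Delta I}{I} = k \quad\Longrightarrow\quad \Delta I = k\,I
\]

Under this reading, the pixel example corresponds to k ≈ 0.1: a reliably visible 1-pixel change at I = 10 pixels implies ΔI = 0.1 × 100 = 10 pixels of change for the same performance on a 100-pixel bar.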