Abstract
What guides a viewer's attention when she catches a glimpse of a data visualization? What happens when the viewer studies the visualization more carefully, to complete a cognitively demanding task? In this talk, I will discuss the limitations of computational saliency models for predicting eye fixations on data visualizations (Bylinskii et al., 2017). I will present perception and cognition experiments to measure where people look in visualizations during encoding to, and retrieval from, memory (Borkin, Bylinskii, et al., 2016). Motivated by the clues that eye fixations give about higher-level cognitive processes like memory, we sought a way to crowdsource attention patterns at scale. I will introduce BubbleView, our mouse-contingent interface to approximate eye tracking (Kim, Bylinskii, et al., 2017). BubbleView presents participants with blurred visualizations and allows them to click to expose "bubble" regions at full resolution. We show that up to 90% of eye fixations on data visualizations can be accounted for by the BubbleView clicks of online participants completing a description task. Armed with a tool to collect attention patterns on images efficiently and cheaply, patterns we call "image importance" to distinguish them from "saliency", we collected BubbleView clicks for thousands of visualizations and graphic designs to train computational models (Bylinskii et al., 2017). Our models run in real time to predict image importance on new images. This talk will demonstrate that our models of attention for natural images do not transfer to data visualizations, and that using data visualizations as stimuli for perception studies can open up fruitful new research directions.
Meeting abstract presented at VSS 2018