September 2018
Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Where do people look on data visualizations?
Author Affiliations
  • Aude Oliva
    Massachusetts Institute of Technology
Journal of Vision September 2018, Vol.18, 1351. doi:10.1167/18.10.1351
Aude Oliva; Where do people look on data visualizations? Journal of Vision 2018;18(10):1351. doi: 10.1167/18.10.1351.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

What guides a viewer's attention when she catches a glimpse of a data visualization? What happens when the viewer studies the visualization more carefully, to complete a cognitively demanding task? In this talk, I will discuss the limitations of computational saliency models for predicting eye fixations on data visualizations (Bylinskii et al., 2017). I will present perception and cognition experiments to measure where people look in visualizations during encoding to, and retrieval from, memory (Borkin, Bylinskii, et al., 2016). Motivated by the clues that eye fixations give about higher-level cognitive processes like memory, we sought a way to crowdsource attention patterns at scale. I will introduce BubbleView, our mouse-contingent interface for approximating eye tracking (Kim, Bylinskii, et al., 2017). BubbleView presents participants with blurred visualizations and allows them to click to expose "bubble" regions at full resolution. We show that up to 90% of eye fixations on data visualizations can be accounted for by the BubbleView clicks of online participants completing a description task. Armed with a tool to efficiently and cheaply collect attention patterns on images (which we call "image importance" to distinguish them from "saliency"), we collected BubbleView clicks for thousands of visualizations and graphic designs to train computational models (Bylinskii et al., 2017). Our models run in real time to predict image importance on new images. This talk will demonstrate that our models of attention for natural images do not transfer to data visualizations, and that using data visualizations as stimuli for perception studies can open up fruitful new research directions.

Meeting abstract presented at VSS 2018
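The BubbleView mechanism described in the abstract, blurring a visualization and revealing full-resolution circular "bubbles" wherever a participant clicks, can be sketched in plain Python. This is an illustrative reconstruction only, not the published implementation: the image representation (a 2D grayscale grid), the naive box blur, and all parameter values are assumptions.

```python
import math

def box_blur(img, k=2):
    """Naive box blur over a 2D grayscale grid (list of lists).
    `k` is the blur half-width; an illustrative stand-in for the
    Gaussian blur a real implementation would likely use."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[yy][xx]
                      for yy in range(max(0, y - k), min(h, y + k + 1))
                      for xx in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = sum(window) / len(window)
    return out

def bubble_view(img, clicks, radius=5, k=2):
    """Return a blurred copy of `img` with full-resolution circular
    bubbles revealed at each (x, y) click position."""
    blurred = box_blur(img, k)
    for y in range(len(img)):
        for x in range(len(img[0])):
            # Reveal the original pixel if it falls inside any bubble.
            if any(math.hypot(x - cx, y - cy) <= radius for cx, cy in clicks):
                blurred[y][x] = img[y][x]
    return blurred
```

In a study setting, each recorded click would add another bubble to the participant's view, and the accumulated click positions serve as the importance data described above.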
