September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2019
ZoomMaps: Using Zoom to Capture Areas of Interest on Images
Author Affiliations & Notes
  • Zoya Bylinskii
    CSAIL, MIT
    Adobe Research
  • Anelise Newman
    CSAIL, MIT
  • Matthew Tancik
    UC Berkeley
  • Spandan Madan
    CSAIL, MIT
  • Fredo Durand
    CSAIL, MIT
  • Aude Oliva
    CSAIL, MIT
Journal of Vision September 2019, Vol.19, 149. doi:https://doi.org/10.1167/19.10.149

      Zoya Bylinskii, Anelise Newman, Matthew Tancik, Spandan Madan, Fredo Durand, Aude Oliva; ZoomMaps: Using Zoom to Capture Areas of Interest on Images. Journal of Vision 2019;19(10):149. https://doi.org/10.1167/19.10.149.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Eye movements can help localize the most attention-capturing, interesting, or confusing regions of images or user interfaces (Jacob & Karn, 2003). Consequently, there have been significant efforts to find more scalable alternatives for collecting user attention, including moving-window methodologies (e.g., BubbleView; Kim, Bylinskii, et al., 2017). In this work, we extend the moving-window methodology by taking advantage of user viewing behavior on mobile screens. A mobile screen provides a naturally restricted window for exploring multiscale content via zoom functionality. We present an approach to capturing zoom behavior through a mobile interface, use the collected spatial zoom patterns as a proxy for attention that we call ZoomMaps (which can be visualized as heatmaps), and propose applications that can be built on top of zoom data. In one set of experiments, we collected the ZoomMaps of 14 participants using our zoom interface on mobile devices with a set of natural images from the CAT2000 dataset (Borji & Itti, 2015). We found that ZoomMaps are highly correlated with image saliency and ground-truth eye fixation maps. In a separate set of experiments on academic posters, 10 participants spent 30 minutes exploring 6 posters. We found that zoom data could be used to generate personalized GIFs of subjects' viewing patterns that were more effective at eliciting subsequent memory than static thumbnails. These experiments suggest that ZoomMaps provide an effective and natural interface for collecting coarse attention data that can be harnessed for a variety of applications. For instance, zoom can be used to measure the natural scale of visual content, to summarize visual content, or to act as a debugging tool for designs.
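The abstract does not specify how zoom logs are turned into heatmaps, but the idea of aggregating viewport dwell time into a spatial attention map can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the input format (viewport rectangles with dwell times), the grid resolution, and the zoom-level weighting are all hypothetical choices.

```python
import numpy as np

def zoommap(viewports, image_shape, grid=(64, 64)):
    """Accumulate zoom-viewport dwell time into a coarse attention map.

    viewports:   list of (x0, y0, x1, y1, seconds) rectangles in image
                 pixels, each the region visible on screen at some moment.
                 (Hypothetical log format; the paper does not specify one.)
    image_shape: (height, width) of the underlying image.
    Returns a grid-sized map normalized to sum to 1.
    """
    h, w = image_shape
    gh, gw = grid
    heat = np.zeros(grid)
    for x0, y0, x1, y1, secs in viewports:
        # Map the viewport rectangle onto grid cells (at least one cell).
        c0 = int(x0 / w * gw)
        c1 = max(int(np.ceil(x1 / w * gw)), c0 + 1)
        r0 = int(y0 / h * gh)
        r1 = max(int(np.ceil(y1 / h * gh)), r0 + 1)
        # Weight dwell time by zoom level: a smaller viewport (higher zoom)
        # concentrates the same attention on fewer pixels.
        area = max((x1 - x0) * (y1 - y0), 1.0)
        heat[r0:r1, c0:c1] += secs * (w * h) / area
    total = heat.sum()
    return heat / total if total > 0 else heat
```

Under this weighting, a second spent zoomed into a small region contributes far more per cell than a second spent viewing the whole image, which is one plausible way such a map could correlate with fixation density.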
