Journal of Vision, October 2020, Volume 20, Issue 11 (Open Access)
Vision Sciences Society Annual Meeting Abstract
Introducing the TurkEyes toolbox: UIs for crowdsourcing attention without an eye tracker
Author Affiliations
  • Anelise Newman
    CSAIL, MIT
  • Barry McNamara
    CSAIL, MIT
  • Camilo Fosco
    CSAIL, MIT
  • Yun Bin Zhang
    Harvard
  • Pat Sukhum
    Harvard
  • Matthew Tancik
    University of California, Berkeley
  • Nam Wook Kim
    Boston College
  • Zoya Bylinskii
    Adobe, Inc.
Journal of Vision October 2020, Vol.20, 196. doi:https://doi.org/10.1167/jov.20.11.196
Abstract

Eye movements provide insight into what parts of an image a viewer finds most salient, interesting, or relevant to the task at hand. Unfortunately, eye tracking data, a commonly-used proxy for attention, is difficult to collect at scale. Here, we present TurkEyes, a toolbox of crowdsourceable user interfaces to collect attention data without using an eye tracker. The four interfaces in our toolbox represent different interaction methodologies found in the literature for capturing attention. ZoomMaps (introduced here) is a "zoom-based" interface that tracks the viewport on a user's mobile phone while they pan and zoom. CodeCharts (inspired by Rudoy et al., 2012) is a "self-report" technique where participants specify where they gazed using a grid of codes that appears after image presentation. ImportAnnots (O’Donovan et al., 2014) is an "annotation" tool for selecting important image regions, and BubbleView (Kim et al., 2017) is a "cursor-based" moving-window approach that lets viewers click to reveal a small area of an otherwise blurred image. We place these interfaces within a common code and analysis framework to compare their output and develop guidelines for how to use them. We design experiments and validation procedures to capture high-quality data and explain how to convert the output of each method into an attention heatmap. Using Amazon's Mechanical Turk, we collect attention heatmaps on a variety of image types. Although all the interfaces capture some common aspects of attention, we find that they are best suited for different image types and tasks. For example, ZoomMaps is ideal for large, multi-scale visualizations; CodeCharts captures eye movements over time; ImportAnnots works well for graphic designs; and BubbleView is cheap but distorts the stimuli. This toolbox and our analyses facilitate exciting opportunities for gathering attention data at scale without an eye tracker for a diversity of stimuli and task types.
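A common way to make the outputs of such interfaces comparable is to aggregate each participant's reported locations (e.g., BubbleView clicks or CodeCharts gaze coordinates) into a blurred density map. The sketch below, in Python, illustrates that generic point-to-heatmap conversion; the function name, the fixed blur radius, and the use of NumPy/SciPy are illustrative assumptions and not the toolbox's actual code.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def points_to_heatmap(points, image_size, sigma=30):
        """Aggregate (x, y) attention points into a normalized heatmap.

        points     : iterable of (x, y) pixel coordinates, e.g. BubbleView
                     clicks or CodeCharts gaze reports pooled over participants
        image_size : (height, width) of the stimulus in pixels
        sigma      : Gaussian blur radius in pixels (illustrative default)
        """
        h, w = image_size
        heatmap = np.zeros((h, w), dtype=np.float64)
        for x, y in points:
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < w and 0 <= yi < h:
                heatmap[yi, xi] += 1.0              # accumulate raw point counts
        heatmap = gaussian_filter(heatmap, sigma=sigma)  # smooth counts into a density
        if heatmap.max() > 0:
            heatmap /= heatmap.max()                # normalize to [0, 1] for comparison
        return heatmap

    # Example: pool clicks from several participants on a 600x800 image
    clicks = [(120, 340), (130, 335), (410, 95), (405, 100), (640, 300)]
    attention_map = points_to_heatmap(clicks, image_size=(600, 800))

In practice, the blur radius is often chosen to approximate about one degree of visual angle for the assumed viewing setup, so the appropriate value depends on screen size and viewing distance rather than the fixed pixel default used here.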
