September 2018
Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Modeling perceptual grouping in peripheral vision for information visualization
Author Affiliations
  • Shaiyan Keshvari
    Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
  • Dian Yu
    Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
  • Ruth Rosenholtz
    Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
Journal of Vision September 2018, Vol.18, 441. doi:https://doi.org/10.1167/18.10.441
Abstract

Perceptual grouping plays a vital role in peripheral vision. The ability to combine separate measurements into coherent wholes supports real-world tasks such as object segmentation. The field of information visualization, however, is only beginning to apply grouping research. To that end, we study common visualization grouping techniques using an image-computable model of peripheral vision known as the Texture Tiling Model (TTM). TTM predicts performance on a wide range of tasks, from search in artificial displays to scene categorization. The model encodes a stimulus image as a rich set of image statistics, pooled over regions that tile the visual field and grow in size with eccentricity. We generate predictions by synthesizing images (called "mongrels") that represent the information encoded by the model but are otherwise random. Prior research shows that the difficulty of doing a task with mongrels predicts the difficulty of doing the same task peripherally or at a glance. We examine the task of identifying the orientation of a 0.5 deg tall white "T" presented at 10 deg eccentricity on a mid-gray background, with four randomly oriented white 0.5 deg "T" flankers 4 deg away in the cardinal directions. Flankers are grouped by one of two cues: connectedness or common region. The mongrels show that connecting the flankers with white circular arcs does not prevent them from interfering with the target. Interestingly, placing the flankers in front of an annulus of a different gray-level, the "common region," decreases interference, but only when the annulus gray-level lies between the mid-gray background and the white of the elements. Likewise, highlighting only the target with a small square region helps, but only if that region is darker than the background. This suggests that grouping by common region aids visualization, but only when it accentuates the target or camouflages the distractors. Further experiments will test these model predictions on existing visualizations.

Meeting abstract presented at VSS 2018
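
For concreteness, the sketch below renders the crowding stimulus described in the abstract: a white target "T" with four randomly oriented flankers 4 deg away in the cardinal directions, optionally placed in front of a "common region" annulus. This is a minimal illustration in Python (NumPy/Matplotlib); the pixels-per-degree scale, annulus width, and exact gray-levels are assumptions for display purposes, not values taken from the experiments or the authors' code.

```python
import numpy as np
import matplotlib.pyplot as plt

PPD = 20                      # assumed pixels per degree of visual angle
BG, T_LUM = 0.5, 1.0          # mid-gray background, white "T" elements

def draw_t(img, cx, cy, size_deg=0.5, ori=0, lum=T_LUM):
    """Stamp a 'T' of the given size, rotated in 90-deg steps, centered at
    (cx, cy) given in degrees relative to the image center."""
    h, w = img.shape
    half = int(size_deg * PPD / 2)
    x0 = int(w / 2 + cx * PPD)
    y0 = int(h / 2 - cy * PPD)
    # Build the T in a small canvas (top bar + stem), then rotate it.
    patch = np.full((2 * half + 1, 2 * half + 1), np.nan)
    patch[0, :] = lum                 # top bar
    patch[:, half] = lum              # vertical stem
    patch = np.rot90(patch, k=int(ori // 90))
    ys, xs = np.where(~np.isnan(patch))
    img[y0 - half + ys, x0 - half + xs] = lum

def make_stimulus(common_region_lum=None, seed=0):
    """Target 'T' at the patch center (placed at 10 deg eccentricity in the
    experiment) with four randomly oriented flanker 'T's 4 deg away in the
    cardinal directions; optional 'common region' annulus behind the flankers."""
    rng = np.random.default_rng(seed)
    size_px = int(12 * PPD)                       # 12 x 12 deg patch
    img = np.full((size_px, size_px), BG)
    yy, xx = np.mgrid[:size_px, :size_px]
    r_deg = np.hypot(xx - size_px / 2, yy - size_px / 2) / PPD
    if common_region_lum is not None:             # annulus through the flanker ring
        img[(r_deg > 3.5) & (r_deg < 4.5)] = common_region_lum
    draw_t(img, 0, 0, ori=rng.choice([0, 90, 180, 270]))      # target
    for dx, dy in [(4, 0), (-4, 0), (0, 4), (0, -4)]:         # cardinal flankers
        draw_t(img, dx, dy, ori=rng.choice([0, 90, 180, 270]))
    return img

if __name__ == "__main__":
    fig, axes = plt.subplots(1, 2, figsize=(8, 4))
    axes[0].imshow(make_stimulus(), cmap="gray", vmin=0, vmax=1)
    axes[0].set_title("No grouping cue")
    axes[1].imshow(make_stimulus(common_region_lum=0.75), cmap="gray", vmin=0, vmax=1)
    axes[1].set_title("Common region (light annulus)")
    for ax in axes:
        ax.axis("off")
    plt.show()
```

In the TTM pipeline described in the abstract, images such as these would then be encoded as local image statistics pooled over eccentricity-dependent regions and re-synthesized as mongrels; the sketch covers only the stimulus itself.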
