Abstract
Perceptual grouping plays a vital role in peripheral vision. The ability to combine separate measurements into coherent wholes supports real-world tasks, such as object segmentation. The field of information visualization, however, is only beginning to apply grouping research. Toward this end, we study common visualization grouping techniques using an image-computable model of peripheral vision known as the Texture Tiling Model (TTM). TTM predicts performance on a wide range of tasks, from search in artificial displays to scene categorization. The model encodes a stimulus image as a rich set of image statistics, pooled over regions that tile the visual field and grow in size with eccentricity. We generate predictions by synthesizing images (called "mongrels") that represent the information encoded by the model but are otherwise random. Prior research shows that the difficulty of performing a task with mongrels predicts the difficulty of performing the same task in the periphery or at a glance. We examine the task of identifying the orientation of a 0.5 deg tall white "T" at 10 deg eccentricity, surrounded by four randomly oriented white 0.5 deg "T" flankers placed 4 deg away at the cardinal positions, on a mid-gray background. The flankers are grouped by one of two cues: connectedness or common region. The mongrels show that connecting the flankers with white circular arcs does not prevent them from interfering with the target. Interestingly, placing the flankers in front of an annulus of a different gray-level, called the common region, decreases interference, but only when the gray-level of this common region lies between the mid-gray background and the white of the letters. Likewise, highlighting only the target with a small square region helps, but only if that region is darker than the background. This suggests that grouping by common region aids visualization, but only when it accentuates the target or camouflages the distractors. Further experiments will test these model predictions on existing visualizations.
Meeting abstract presented at VSS 2018
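
The stimulus geometry stated in the abstract (0.5 deg letters, 10 deg target eccentricity, 4 deg target-to-flanker spacing, mid-gray background, white letters, optional common-region annulus) can be sketched as follows. This is a minimal illustrative Python/matplotlib sketch, not the authors' stimulus code; the fixation placement, annulus inner/outer radii, stroke width, and the names make_stimulus, draw_T, and common_region_gray are assumptions introduced here for illustration.

```python
# Minimal sketch of the crowding stimulus layout described in the abstract.
# Geometry not specified there (annulus width, stroke width, fixation placement)
# is an assumption, marked in the comments below.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, Rectangle

DEG = 1.0                    # work directly in degrees of visual angle
BG_GRAY = 0.5                # mid-gray background
LETTER_GRAY = 1.0            # white "T"s
ECC = 10 * DEG               # target eccentricity
SPACING = 4 * DEG            # target-to-flanker spacing
LETTER = 0.5 * DEG           # letter height/width

def draw_T(ax, center, angle_deg, size=LETTER, color=str(LETTER_GRAY)):
    """Draw a 'T' (top bar + stem) rotated by angle_deg about its center."""
    h = size / 2.0
    segments = np.array([[[-h, h], [h, h]],       # top bar
                         [[0.0, h], [0.0, -h]]])  # stem
    t = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    for seg in segments:
        p = seg @ rot.T + np.asarray(center)
        ax.plot(p[:, 0], p[:, 1], color=color, lw=2, zorder=3)

def make_stimulus(common_region_gray=None, rng=np.random.default_rng(0)):
    fig, ax = plt.subplots(figsize=(6, 4))
    # Mid-gray background rectangle covering the plotted field.
    ax.add_patch(Rectangle((-2, -6), 18, 12, color=str(BG_GRAY), zorder=0))
    fixation = np.array([0.0, 0.0])
    target = fixation + np.array([ECC, 0.0])   # target 10 deg right of fixation (assumption)

    # Optional "common region": an annulus of a different gray-level behind the
    # flankers, centered on the target (inner/outer radii are assumptions).
    if common_region_gray is not None:
        ax.add_patch(Circle(target, SPACING + LETTER, color=str(common_region_gray), zorder=1))
        ax.add_patch(Circle(target, SPACING - LETTER, color=str(BG_GRAY), zorder=2))

    # Target plus four flankers at the cardinal positions, all randomly oriented.
    draw_T(ax, target, angle_deg=rng.choice([0, 90, 180, 270]))
    for dx, dy in [(SPACING, 0), (-SPACING, 0), (0, SPACING), (0, -SPACING)]:
        draw_T(ax, target + np.array([dx, dy]), angle_deg=rng.choice([0, 90, 180, 270]))

    ax.plot(*fixation, marker='+', color='k', zorder=3)   # fixation cross
    ax.set_aspect('equal'); ax.set_xlim(-2, 16); ax.set_ylim(-6, 6); ax.axis('off')
    return fig

# Example: common region with a gray-level between the background and white.
make_stimulus(common_region_gray=0.75)
plt.show()
```

Under the abstract's account, a rendered image like this would then be encoded by TTM's pooled image statistics and resynthesized into mongrels, from which task difficulty is assessed.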