September 2018
Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Attentional capture by redundant visual information
Author Affiliations
  • Jiyeong Ha
    Department of Psychology, Yonsei University
  • Hee-kyung Park
    Department of Psychology, Yonsei University
  • Yoonjung Lee
    Department of Psychology, Yonsei University
  • Do-Joon Yi
    Department of Psychology, Yonsei University
Journal of Vision September 2018, Vol.18, 469. doi:

      Jiyeong Ha, Hee-kyung Park, Yoonjung Lee, Do-Joon Yi; Attentional capture by redundant visual information. Journal of Vision 2018;18(10):469.

      © ARVO (1962-2015); The Authors (2016-present)


Visual environments we encounter in daily life are full of perceptually and conceptually similar information. Understanding how the visual system detects and summarizes redundant information therefore provides clues to the mechanisms by which our minds interact with the environment. According to previous studies (Jiang et al., 2010; Won & Jiang, 2013), visual redundancy enhances the quality of perception and memory representations. We have also demonstrated that redundant distractors capture attention in a name-face Stroop task (Lee et al., 2014). In the current experiment, we extended our previous study using object stimuli and drift diffusion modeling. Participants decided whether a target word at fixation belonged to the fruit or clothes category. Distractors were pictures of fruit or clothes that could interfere with target responses. Each distractor belonged to either the same or a different category as the target (Congruent vs. Incongruent) and appeared either as a single item on the left or right of the target or as two identical items on both sides (Single vs. Redundant). We replicated our earlier finding that congruency effects in RT were greater in the redundant than in the single condition. Specifically, the incongruent/redundant condition produced slower RTs than the incongruent/single condition. To understand the mechanism underlying these results, we fitted a drift diffusion model to the two incongruent conditions using the hierarchical Bayesian estimation technique implemented in the 'hBayesDM' package (Ahn, Haines, & Zhang, 2017). Comparing the posterior distributions of the hyper-parameters revealed that only β (starting-point bias) was significantly increased in the redundant distractor condition relative to the single distractor condition. In contrast, α (boundary separation), δ (drift rate), and τ (non-decision time) were less affected by distractor redundancy.
Based on these findings, we conclude that attentional capture triggered by visual redundancy takes place at an early stage, where perceptual evidence is built up.
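For readers unfamiliar with the drift diffusion model, the roles of the four parameters named above can be sketched with a minimal random-walk simulation. This is an illustrative sketch only, not the authors' analysis: the function name `simulate_ddm`, the parameter values, and the Euler step size are arbitrary choices, and the fitting itself was done with hBayesDM, not this code.

```python
import numpy as np

def simulate_ddm(n_trials, alpha=1.0, beta=0.5, delta=0.0, tau=0.3,
                 dt=0.001, sigma=1.0, rng=None):
    """Simulate choices and RTs from a basic drift diffusion model.

    alpha: boundary separation (distance between the two decision bounds)
    beta:  starting-point bias in [0, 1]; 0.5 is unbiased
    delta: drift rate (mean rate of evidence accumulation)
    tau:   non-decision time in seconds (encoding + motor)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    choices = np.empty(n_trials, dtype=int)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x = beta * alpha  # evidence starts between 0 and alpha
        t = 0.0
        # Accumulate noisy evidence until a boundary is crossed.
        while 0.0 < x < alpha:
            x += delta * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i] = int(x >= alpha)  # 1 = upper boundary, 0 = lower
        rts[i] = t + tau              # add non-decision time
    return choices, rts

# Shifting the starting point toward the upper boundary (beta > 0.5)
# yields more upper-boundary responses even with identical drift,
# which is the kind of change a posterior shift in beta reflects.
rng = np.random.default_rng(42)
c_neutral, _ = simulate_ddm(500, beta=0.5, rng=rng)
c_biased, _ = simulate_ddm(500, beta=0.7, rng=rng)
print(c_neutral.mean(), c_biased.mean())
```

Under this sketch, a redundancy-driven increase in β corresponds to the accumulation process starting closer to one boundary, while α, δ, and τ govern caution, evidence quality, and non-decision latency, respectively.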

Meeting abstract presented at VSS 2018

