Abstract
Visual environments we encounter in daily life are full of perceptually and conceptually similar information. Understanding how the visual system detects and summarizes redundant information therefore provides clues to the mechanisms by which our minds interact with the environment. According to previous studies (Jiang et al., 2010; Won & Jiang, 2013), visual redundancy enhances the quality of perception and memory representations. We have also demonstrated that redundant distractors capture attention in a name-face Stroop task (Lee et al., 2014). In the current experiment, we extended our previous study using object stimuli and drift diffusion modeling. Participants decided whether a target word at fixation belonged to the fruit or clothes category. Distractors were pictures of fruit or clothes that could interfere with target responses. The distractor stimuli belonged to either the same or a different category as the target (Congruent vs. Incongruent), and either a single item appeared on the left or right side of the target or two identical items appeared on both sides (Single vs. Redundant). We replicated our earlier results: congruency effects in RT were greater in the redundant than in the single condition. Specifically, the incongruent/redundant condition produced slower RTs than the incongruent/single condition. To understand the mechanism underlying these results, we fitted the drift diffusion model to the two incongruent conditions using the hierarchical Bayesian estimation technique implemented in the 'hBayesDM' package (Ahn, Haines, & Zhang, 2017). Comparing the posterior distributions of the hyper-parameters revealed that only β (bias) was significantly increased in the redundant distractor condition relative to the single distractor condition. In contrast, α (boundary separation), δ (drift rate), and τ (non-decision time) were less affected by distractor redundancy. Based on these findings, we conclude that attentional capture triggered by visual redundancy takes place at an early stage, where perceptual evidence is built up.
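For reference, the four parameters named above follow the standard Wiener diffusion parameterization (a notational sketch, not part of the original abstract): evidence x(t) accumulates at mean rate δ from a starting point βα between absorbing boundaries at 0 and α, and the observed RT adds a non-decision time τ; the noise scale s is conventionally fixed (e.g., s = 1 in hBayesDM's parameterization).

\begin{aligned}
dx(t) &= \delta\,dt + s\,dW(t), \qquad x(0) = \beta\alpha, \quad 0 < \beta < 1,\\
\mathrm{RT} &= t_{\text{hit}} + \tau, \qquad t_{\text{hit}} = \min\{\, t : x(t) \le 0 \ \text{or}\ x(t) \ge \alpha \,\}.
\end{aligned}

Under this parameterization, a change in β shifts where accumulation starts rather than how evidence accrues, which is why a selective bias effect is naturally read as an early-stage effect.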
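The condition comparison described above could be reproduced along the following lines. This is a minimal sketch using the Python port of the hBayesDM package (the abstract reports the R package); choiceRT_ddm fits one data set at a time, so each incongruent condition is fitted separately and the group-level posteriors are contrasted. The file names are hypothetical placeholders, and choiceRT_ddm expects long-format data with subjID, choice (1 or 2), and RT (in seconds) columns.

import numpy as np
from hbayesdm.models import choiceRT_ddm

def fit_condition(path):
    # Hierarchical Bayesian fit of the drift diffusion model to one condition.
    # 'path' points to a tab-delimited file (hypothetical) with columns
    # subjID, choice (1 or 2), and RT in seconds.
    return choiceRT_ddm(data=path, niter=4000, nwarmup=2000, nchain=4, ncore=4)

single = fit_condition("incongruent_single.txt")        # hypothetical file
redundant = fit_condition("incongruent_redundant.txt")  # hypothetical file

# Contrast the group-level (hyper) posteriors: alpha = boundary separation,
# beta = bias, delta = drift rate, tau = non-decision time.
for par in ("mu_alpha", "mu_beta", "mu_delta", "mu_tau"):
    diff = redundant.par_vals[par] - single.par_vals[par]
    lo, hi = np.percentile(diff, [2.5, 97.5])
    print(f"{par}: mean diff = {diff.mean():+.3f}, 95% interval = [{lo:.3f}, {hi:.3f}]")

A 95% interval excluding zero for mu_beta alone, with the other three intervals straddling zero, would correspond to the pattern reported in the abstract.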
Meeting abstract presented at VSS 2018