Davood Gozli, Hira Aslam, Jay Pratt; My Color Singleton: Visual Attention to Learned Action-Effects. Journal of Vision 2015;15(12):923. doi: 10.1167/15.12.923.
We examined the prioritization of a salient feature immediately after an observer performs an action. Previous work suggests that sensory salience is reduced for a feature that results from the observer's own action (i.e., self-caused) compared to a feature that is independent of the observer's action (i.e., externally caused). Just as it is difficult to tickle oneself (Blakemore et al., 1998) or to discriminate a self-caused sensory signal from noise (Cardoso-Leite et al., 2010), we expected reduced salience for self-caused visual features. In an initial acquisition phase, participants learned the perceptual outcomes of two actions: one key always generated the color red, and the other always generated green. Having acquired these action-outcome associations, participants should code the appearance of red as a self-caused event after pressing the corresponding key, but as an externally caused event after pressing the non-corresponding key. In a subsequent test phase, the two colors were presented as salient singletons in otherwise-white search displays. We compared the attentional impact of self-caused and externally caused singletons, which could be either relevant (the singleton was the target) or irrelevant (the singleton was a distractor). Contrary to previous work, we found that participants were more efficient at both selecting and ignoring a self-caused singleton compared to an externally caused singleton. Specifically, the effective salience of a self-caused singleton can increase when it is relevant (larger cueing effect for the target) or decrease when it is irrelevant (smaller interference effect for a distractor), whereas no such relevance-based modulation was found with externally caused singletons. These findings demonstrate that performing an action prepares visual attention for the optimal strategy toward the predicted action-outcome, discriminating between self-caused and externally caused events in a task-appropriate manner.
Meeting abstract presented at VSS 2015