Abstract
When visual features are learned to predict a reward outcome, stimuli possessing these reward-associated features automatically capture attention. This influence of reward learning on attention has only been examined under conditions in which visual features (e.g., color, orientation) are associated with tangible, extrinsic rewards such as money and food. In the present set of experiments, I examine how an irrelevant reward-associated sound affects visual processing and whether positive social feedback produces attentional biases for associated visual stimuli. In one experiment, participants first performed a training phase comprising a sound identification task in which certain sounds were associated with a monetary reward outcome. In a subsequent test phase, participants completed a visual search task while trying to ignore task-irrelevant sounds, some of which had previously been associated with reward. Visual search performance was impaired in the presence of the previously high-value sound, demonstrating cross-modal attentional capture. In another experiment, participants first performed a training phase comprising visual search for a color-defined target. Trial-by-trial feedback consisted of a face exhibiting either a positive (smile) or neutral expression. Participants were told that these faces would "react to what happened on that trial." One color target was more likely than the other to be followed by a positive expression; the feedback participants received was in fact unrelated to their actual performance. Then, in a subsequent test phase, former-target-color stimuli were presented as distractors in a visual search task for a shape-defined target. Distractors rendered in the color previously associated with a high probability of positive social feedback impaired search. The findings support the domain generality of value-driven attention, with a range of positive outcomes biasing information processing across different sensory systems.
Meeting abstract presented at VSS 2015