Abstract
Reward learning has been shown to guide spatial attention to regions of a scene. However, the neural mechanisms that support this bias in spatial orienting are unknown. We adapted an established paradigm for fMRI to identify neural correlates of reward-modulated spatial orienting. From reward feedback, participants learned to orient to a particular quadrant of a scene (the high-value quadrant) to maximize gains. This learning was scene-specific, with the high-value quadrant varying across scenes. During a subsequent test phase, participants were faster at identifying a target when it appeared in the high-value quadrant (valid trials). In participants for whom we collected eye-tracking data, first saccades were also more likely to land in the high-value quadrant. fMRI analyses during the test phase revealed learning-dependent priority signals in the bilateral caudate tail and superior colliculus. In addition, scene-selective and spatial processing regions (hippocampus, parahippocampal place area, and temporo-occipital cortex) were more strongly activated on valid than on invalid trials. Other regions preferentially activated on valid trials included the frontal eye field, substantia nigra, and insula. Taken together, our results suggest that regions processing scenes and space play a role in value-driven attention, extending principles of value-driven attentional priority to such representations. The caudate tail has frequently been linked to value-driven attentional capture by feature-defined stimuli; here we extend its role to spatial orienting, suggesting a more general role in the value-driven control of attention. Consistent with an automatic and reflexive influence of learning on spatial orienting, the superior colliculus was robustly modulated by spatially specific scene-reward associations and, given its rich connections with the caudate tail and ventral visual stream, may form part of an integrated network for the value-dependent control of spatial attention. Subsequent analyses will focus on cerebellar contributions to value-driven orienting as well as on decoding scene-specific representations from scene-selective activations.