August 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Modeling the Mechanisms of Reward Learning that Bias Visual Attention
Author Affiliations
  • Jason Hays
    Department of Psychology, Florida International University
  • Fabian Soto
    Department of Psychology, Florida International University
Journal of Vision August 2017, Vol.17, 1302. doi:https://doi.org/10.1167/17.10.1302

Jason Hays, Fabian Soto; Modeling the Mechanisms of Reward Learning that Bias Visual Attention. Journal of Vision 2017;17(10):1302. https://doi.org/10.1167/17.10.1302.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

A body of recent research has shown that visual attention is biased toward rewarded stimuli. Because of the known role of the basal ganglia in reward learning, a potential mechanism for this bias is learning in striatal medium spiny neurons (MSNs), which receive projections both from cortex carrying information about visual stimuli and from dopaminergic neurons carrying information about reward. Furthermore, their output can influence visual processing through the closed visual corticostriatal loop, which runs from the MSNs through the globus pallidus/substantia nigra (GPi/SNr) and thalamus and back to visual cortex. We propose an implementation of this closed visual loop that includes a biologically plausible model of temporal cortical neurons and striatal MSNs, both simulated with the adaptive exponential leaky integrate-and-fire (LIF) model with parameters constrained by data from the neurophysiological literature; exponential LIF models were used for the GPi and thalamic neurons. Synapses between visual and striatal neurons are modified by a biologically plausible reward-driven learning rule. In an initial associative-learning phase, the model adjusted these synapses on the basis of paired presentations of a particular color with either a high or a low reward, continuing until the reward prediction error was small. Using the acquired cortico-striatal weights, and following the setup of a typical experiment on reward-based attentional bias, the model then selected a target shape from among five distractor shapes, one of which had a previously rewarded color. The model took significantly longer to make decisions when the distractor associated with the higher reward was present than when the distractor associated with the lower reward was present. Thus, the model can explain reward-based attentional capture through neurobiologically plausible learning mechanisms. Furthermore, the model is in line with results from the neurophysiological and neuroimaging literatures implicating the visual corticostriatal loop in reward-based visual learning.
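The abstract does not give the model's equations, fitted parameters, or the exact learning rule, so the sketch below is only a rough illustration of the two ingredients it names: an adaptive exponential integrate-and-fire neuron (here with the standard Brette & Gerstner 2005 parameter values, not the authors' values fitted to temporal cortical and striatal neurons) and a reward-prediction-error update of cortico-striatal weights (here a simple Rescorla-Wagner-style stand-in). The function names, learning rate, tolerance, and reward magnitudes are all hypothetical.

```python
import numpy as np

# --- Adaptive exponential integrate-and-fire (AdEx) neuron -------------------
# Minimal forward-Euler sketch; parameter values are the standard ones from
# Brette & Gerstner (2005), NOT the fitted values used in the authors' model.
def simulate_adex(I_ext, dt=1e-4, C=281e-12, g_L=30e-9, E_L=-70.6e-3,
                  V_T=-50.4e-3, Delta_T=2e-3, tau_w=144e-3,
                  a=4e-9, b=80.5e-12, V_reset=-70.6e-3, V_peak=20e-3):
    """Return spike times (s) for an input current trace I_ext (A)."""
    V, w, spikes = E_L, 0.0, []
    for step, I in enumerate(I_ext):
        dV = (-g_L * (V - E_L)
              + g_L * Delta_T * np.exp((V - V_T) / Delta_T)
              - w + I) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_peak:                 # spike: reset membrane, bump adaptation
            spikes.append(step * dt)
            V, w = V_reset, w + b
    return spikes

# --- Reward-driven cortico-striatal learning ---------------------------------
# Hypothetical stand-in for the abstract's reward-driven rule: a dopaminergic
# reward prediction error gates changes in the weights from visual cortical
# inputs to striatal MSNs, and training stops once that error is small.
def train_color_reward(W, color_inputs, rewards, eta=0.05, tol=0.01,
                       max_epochs=10_000):
    """Adjust weights W until the reward prediction error falls below tol."""
    for _ in range(max_epochs):
        worst_error = 0.0
        for x, r in zip(color_inputs, rewards):
            prediction = float(W @ x)   # MSN-based value estimate for this color
            delta = r - prediction      # reward prediction error (dopamine-like)
            W += eta * delta * x        # strengthen/weaken the active synapses
            worst_error = max(worst_error, abs(delta))
        if worst_error < tol:
            break
    return W

# Example: two colors (one-hot cortical codes) paired with high vs. low reward.
colors = np.eye(2)                      # two hypothetical color stimuli
rewards = np.array([1.0, 0.2])          # assumed high vs. low reward magnitudes
W = train_color_reward(np.zeros(2), colors, rewards)
print(W)                                # approaches [1.0, 0.2]

spikes = simulate_adex(np.full(5000, 1e-9))  # 0.5 s of 1 nA input current
print(len(spikes), "spikes")
```

The weight vector converges toward the reward paired with each color, so a previously high-reward color drives stronger MSN responses than a low-reward color; in the full model this difference propagates around the corticostriatal loop and biases the visual competition that produces attentional capture.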

Meeting abstract presented at VSS 2017
