Abstract
A neural network model of a simultaneous visual discrimination task is presented. The model explains how incentive motivational learning enhances the salience of visual stimuli, biasing the activity of visually sensitive neurons in anterior inferotemporal cortex and of visually and motivationally sensitive neurons in orbitofrontal cortex. Performance of the visual discrimination task relies on interactions among several brain regions, which can be divided into four functional classes: (1) Perceptual, registering visual or gustatory inputs (inferotemporal and rhinal cortices); (2) Drive, calculating the value of anticipated outcomes using hunger and satiety inputs (amygdala and hypothalamus); (3) Incentive, resolving the value of competing stimuli (orbitofrontal cortex); and (4) Adaptive Timing, detecting the omission or delivery of rewards (basal ganglia and SNc/VTA). Simulations of the interactions among these brain regions demonstrate that a feedback signal from orbitofrontal to inferotemporal cortex underlies the attentional modulations of inferotemporal neurons that have been observed in previous studies of the visual discrimination task. Model mechanisms are further tested with simulations of a variant of the visual discrimination task with reinforcer devaluation, in which hunger and satiety inputs influence cue preference. In this task, two cues are presented simultaneously as before, but each cue is associated with a different food reward. Changes in the drive inputs to the model influence cue preference, visual cue salience, and saccadic reaction time. The same model mechanisms and parameters are shown to replicate behavioral responses and the electrophysiological activations of neurons reported in a number of studies employing a Pavlovian conditioning task.
Partially supported by an NSF Science of Learning grant to CELEST.