Vision Sciences Society Annual Meeting Abstract  |  August 2023
Journal of Vision, Volume 23, Issue 9 (Open Access)
Statistical Characterization of Attention Effects on the Contrast Tuning Functions of Neuronal Populations of a Convolutional Neural Network
Author Affiliations
  • Sudhanshu Srivastava
    University of California, Santa Barbara
  • Miguel P. Eckstein
    University of California, Santa Barbara
Journal of Vision August 2023, Vol.23, 5941. doi:https://doi.org/10.1167/jov.23.9.5941
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Introduction: Covert attention has contrast-dependent influences on neuronal activity with three distinct signatures (Carrasco, 2011): an effect that increases with contrast (response gain), an effect that peaks at intermediate contrasts and diminishes at the extremes (contrast gain), and a fixed effect across all contrasts (baseline shift). We trained convolutional neural networks (CNNs) on a Posner cueing task with varying signal contrasts to study the emergent neuronal gain functions and relate them to the tuning properties of the neurons.

Methods: We trained the models (three convolutional layers, a fully connected layer, and an output layer; ReLU and Tanh activations) to detect the presence or absence of a target (a tilted line) at one of two locations otherwise occupied by vertical lines. A central cue, when present, was 80% predictive of the target location. On each trial, one of eight contrasts was randomly sampled and independent white noise was added to the stimulus. We classified neurons (n = 150K) into target neurons, cue neurons, cue-target/local-integration neurons, and global-integration neurons, and evaluated the effect of the cue on each neuron's contrast response function.

Results: Cue-target/local- and global-integration neurons (3rd convolutional and 4th, fully connected, layers) showed excitatory gain, but cue-only and target-only neurons (1st and 2nd layers) did not. For ReLU and Tanh respectively, 5.8±1.1% (stdev across 20 replication networks) and 10.7±1.9% of neurons showed contrast gain, 52.4±3.4% and 49.0±2.6% showed response gain, and 41.8±2.9% and 40.3±3.9% showed baseline shift. Contrast-gain neurons showed higher target sensitivity (AUROC: 0.82±0.04) and lower cue sensitivity (0.66±0.11) than response-gain (0.77±0.06 and 0.74±0.05 for target and cue, respectively) and baseline-shift (0.68±0.03, 0.78±0.02) neurons.

Conclusion: A CNN trained to optimize task performance produces subpopulations of neurons with the three prototypical signatures of attentional gain. Contrast-gain neurons were more sensitive to the target, while response-gain and baseline-shift neurons were more sensitive to the cue.
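The three attentional signatures can be illustrated with a small simulation. Below is a minimal NumPy sketch, not the study's actual analysis: the Naka-Rushton contrast response function, its parameter values, and the classification thresholds are all hypothetical, chosen only to show how the cued-minus-uncued difference curve separates response gain (difference grows with contrast), contrast gain (difference peaks at intermediate contrasts), and baseline shift (difference is flat).

```python
import numpy as np

contrasts = np.linspace(0.05, 1.0, 8)  # eight contrast levels, as in the task

def naka_rushton(c, rmax=1.0, c50=0.3, n=2.0, baseline=0.0):
    """Naka-Rushton contrast response function (illustrative parameters)."""
    return baseline + rmax * c**n / (c**n + c50**n)

# Three illustrative cue effects (hypothetical parameter changes):
resp_gain_uncued = naka_rushton(contrasts)
resp_gain_cued = naka_rushton(contrasts, rmax=1.4)      # multiplicative scaling
ctr_gain_uncued = naka_rushton(contrasts)
ctr_gain_cued = naka_rushton(contrasts, c50=0.2)        # leftward shift of the curve
base_uncued = naka_rushton(contrasts)
base_cued = naka_rushton(contrasts, baseline=0.2)       # additive offset

def classify_gain(cued, uncued, tol=0.05):
    """Crude classifier of the cue effect's signature (illustrative thresholds)."""
    diff = cued - uncued
    if np.ptp(diff) < tol * np.max(np.abs(diff)):  # difference roughly flat
        return "baseline shift"
    if np.argmax(diff) == len(diff) - 1:           # difference grows with contrast
        return "response gain"
    return "contrast gain"                          # difference peaks mid-contrast
```

With these synthetic curves, `classify_gain` labels each cue effect with the expected signature; real contrast response functions are noisy, so any actual classification would need fits and statistical criteria rather than these point-wise heuristics.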
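The target and cue sensitivities above are reported as AUROC values. As background, the AUROC equals the Mann-Whitney probability that a randomly drawn response from one class (e.g., target-present trials) exceeds a randomly drawn response from the other, with ties counted as one half. A minimal NumPy sketch, with hypothetical simulated responses:

```python
import numpy as np

def auroc(pos, neg):
    """AUROC via the Mann-Whitney statistic: the probability that a
    randomly drawn positive-class response exceeds a randomly drawn
    negative-class response, counting ties as one half."""
    pos = np.asarray(pos, dtype=float)
    neg = np.asarray(neg, dtype=float)
    greater = np.sum(pos[:, None] > neg[None, :])
    ties = np.sum(pos[:, None] == neg[None, :])
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical neuron: responses on target-present vs. target-absent trials,
# separated by one standard deviation (d' = 1).
rng = np.random.default_rng(0)
present = rng.normal(1.0, 1.0, 2000)
absent = rng.normal(0.0, 1.0, 2000)
# For d' = 1 with equal-variance Gaussians, the theoretical AUROC
# is Phi(1/sqrt(2)), roughly 0.76.
```

This pairwise-comparison form is O(n·m) and fine at this scale; for large samples one would rank-sum instead, which computes the same quantity in O((n+m) log(n+m)).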
