Vision Sciences Society Annual Meeting Abstract | September 2021
Journal of Vision, Volume 21, Issue 9 | Open Access
Neuronal Population Tuning Statistics to Target and Cues for a Feed-Forward Convolutional Neural Network That Learns to Covertly Attend
Author Affiliations
  • Sudhanshu Srivastava
    University of California at Santa Barbara
  • Miguel P. Eckstein
    University of California at Santa Barbara
Journal of Vision September 2021, Vol. 21, 2885. https://doi.org/10.1167/jov.21.9.2885
Abstract

Introduction: Attentional effects on perception, once thought to be exclusive to primates, have more recently been measured in simpler organisms such as fruit flies, dragonflies, and honeybees (Nityananda 2016). We investigate whether a simple convolutional neural network (CNN) trained to maximize detection of a target in a two-location yes-no task (Posner cueing) produces human-like cueing effects. We analyze individual neurons in the network to understand how they extract and integrate target and cue information.

Methods: We trained a CNN on 6000 images containing oriented lines embedded in noise. A box cue co-occurred with the target on 80% of trials. The CNN consisted of two convolutional layers, each followed by max-pooling, followed by a dense layer and an output layer with 2 neurons. We evaluated human and CNN performance on a Posner cueing task while varying the contrast of a peripheral box cue. We then extracted each neuron's responses to these images and calculated neuron-specific cueing effects (areas under the ROC for valid vs. invalid cues).

Results: The CNN trained to optimally detect the target produces a cueing effect comparable to that of humans and of an optimal Bayesian model. Nineteen percent of the neurons in the dense layer showed positive or negative cueing effects of varying magnitude (mean = 0.02; standard deviation = 0.10). Neurons with cueing effects detect both the target and the cue in isolation. The weights between the dense neurons and the two output neurons are correlated with the neuron-specific cueing effects (r=-0.64, p<0.0001; r=0.58, p<0.0001). The convolutional-layer neurons are retinotopic, while the dense-layer neurons are not.

Conclusion: Our results show that a simple neural network can learn to utilize cues in a manner similar to humans, and they establish biologically plausible architectures and neuronal population properties for integrating target and cue information.
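To make the Methods concrete, the following is a minimal Python (PyTorch) sketch of the kind of network and cueing-effect analysis the abstract describes, not the authors' implementation. The layer widths, kernel sizes, image size (64×64), and 128-unit dense layer are illustrative assumptions; only the overall structure (two convolution + max-pooling stages, a dense layer, a 2-neuron yes/no output) and the ROC-based neuron-specific cueing effect follow the abstract.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score


class PosnerCNN(nn.Module):
    """Two conv + max-pool stages, a dense layer, and a 2-neuron output
    (target absent / target present). All sizes are illustrative assumptions."""

    def __init__(self, image_size=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                                   # convolution layer 1 + max-pooling
            nn.Conv2d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                                   # convolution layer 2 + max-pooling
        )
        self.dense = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * (image_size // 4) ** 2, 128), nn.ReLU(),  # dense layer
        )
        self.output = nn.Linear(128, 2)                        # 2 output neurons (yes/no)

    def forward(self, x):
        h = self.dense(self.features(x))                       # dense-layer activations
        return self.output(h), h                               # keep h for neuron-level analysis


def neuron_cueing_effects(dense_responses, cue_valid):
    """Neuron-specific cueing effect: area under the ROC curve that discriminates
    valid-cue from invalid-cue trials using a single neuron's response.
    dense_responses: (n_trials, n_neurons) array; cue_valid: (n_trials,) boolean array."""
    return np.array([roc_auc_score(cue_valid, dense_responses[:, i])
                     for i in range(dense_responses.shape[1])])


# Example usage on random placeholder data (not the stimuli from the study):
if __name__ == "__main__":
    model = PosnerCNN(image_size=64)
    images = torch.randn(200, 1, 64, 64)                       # noise stand-ins for the line stimuli
    cue_valid = np.random.rand(200) < 0.8                      # cue valid on ~80% of trials
    with torch.no_grad():
        _, h = model(images)
    effects = neuron_cueing_effects(h.numpy(), cue_valid)
    print(effects.mean(), effects.std())                       # AUC of 0.5 ~ no cueing effect
```

In this sketch the cueing effect is reported as a raw AUC (0.5 = no effect), whereas the abstract's summary statistics (mean = 0.02, SD = 0.10) suggest the authors report the effect relative to that chance level; that rescaling is a one-line subtraction if desired.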
