Vision Sciences Society Annual Meeting Abstract  |  September 2024  |  Open Access
Possible Optimal Strategies for Orientation Coding in Macaque V1 Revealed with a Self-Attention Deep Neural Network (SA-DNN) Model
Author Affiliations & Notes
  • Xin Wang
    Peking University
  • Cai-Xia Chen
    Peking University
  • Sheng-Hui Zhang
    Peking University
  • Dan-Qing Jiang
    Peking University
  • Shu-Chen Guan
    Justus-Liebig Universität
  • Shi-Ming Tang
    Peking University
  • Cong Yu
    Peking University
  • Footnotes
    Acknowledgements  This work was supported by the National Science and Technology Innovation 2030 Major Program (2022ZD0204600).
Journal of Vision September 2024, Vol. 24, 433. https://doi.org/10.1167/jov.24.10.433
© ARVO (1962-2015); The Authors (2016-present)
Abstract

The orientation tuning bandwidths of individual V1 neurons are not sufficiently narrow to support fine psychophysical orientation discrimination thresholds. Here we explore the possibility that V1 neurons, as a population, apply optimal orientation coding strategies to achieve population orientation tuning far sharper than that of individual neurons. We trained a self-attention deep neural network (SA-DNN) model to reconstruct a Gabor stimulus image from neuronal responses obtained with two-photon calcium imaging in five awake macaques. Each imaging field of view (FOV) contained 1,400-1,700 neurons, whose responses to a Gabor stimulus served as the model inputs. The SA-DNN model consists of a self-attention mechanism followed by a feedforward layer. The self-attention mechanism reveals cooperative coding by neurons activated by the Gabor stimulus, yielding attention maps that display two-way connections among neurons. The results suggest: (1) Neurons tuned to the stimulus orientation tend to have higher attention scores with all other neurons. The top 25% of orientation-tuned neurons with the highest mean attention scores best reconstruct the stimulus images, while the bottom 50% of neurons cannot do so. (2) The responses of the top 25% of neurons, after the self-attention transformation, generate significantly sharpened population orientation tuning functions, with the amplitude increased 3- to 5-fold and the bandwidth narrowed by approximately 30%. (3) With the self-attention component excluded, forward propagation through the model reconstructs only very coarse stimulus images. (4) The tuning sharpening displays an oblique effect: attention maps show higher variability at cardinal than at oblique orientations, producing greater sharpening of orientation tuning functions at cardinal orientations. These modeling results suggest that self-attention mechanisms optimize orientation coding in macaque V1 by reweighting responses to accentuate neurons according to their attention scores. They also provide new insights into V1 neuronal connectivity, elaborating how self-attention refines neuronal interactions and reweights responses to process orientation information.
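
To make the model description concrete, below is a minimal PyTorch sketch of how a self-attention network of this kind could map a population of neuronal responses onto a reconstructed stimulus image. This is an illustrative assumption rather than the authors' implementation: the class name SADNN, the single attention head, the embedding dimension, the mean-pooled feedforward read-out, and the image size are all placeholders, and the use_attention flag only mimics the ablation described in result (3).

```python
# Illustrative sketch only; layer sizes and architecture details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SADNN(nn.Module):
    """Self-attention DNN sketch: neuronal responses -> reconstructed stimulus image."""

    def __init__(self, n_neurons=1500, d_model=64, img_size=64):
        super().__init__()
        # Embed each neuron's scalar response into a d_model-dimensional token and
        # add a learned per-neuron embedding so neurons remain distinguishable.
        self.response_proj = nn.Linear(1, d_model)
        self.neuron_embed = nn.Parameter(torch.randn(n_neurons, d_model) * 0.02)
        # Single-head self-attention over the population of neuron tokens.
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # Feedforward read-out mapping the pooled population code to image pixels.
        self.readout = nn.Sequential(
            nn.Linear(d_model, 256), nn.ReLU(),
            nn.Linear(256, img_size * img_size),
        )
        self.img_size = img_size

    def forward(self, responses, use_attention=True):
        # responses: (batch, n_neurons) calcium responses to one Gabor stimulus
        tokens = self.response_proj(responses.unsqueeze(-1)) + self.neuron_embed
        if use_attention:
            q, k, v = self.q(tokens), self.k(tokens), self.v(tokens)
            # attn: (batch, n_neurons, n_neurons) pairwise attention map among neurons
            attn = F.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
            tokens = attn @ v
        else:
            attn = None  # ablation: pure feedforward pass, as in result (3)
        pooled = tokens.mean(dim=1)       # population summary vector
        image = self.readout(pooled)      # reconstructed Gabor image (flattened)
        return image.view(-1, self.img_size, self.img_size), attn
```

A hypothetical usage, ranking neurons by the mean attention they receive and keeping the top 25%, loosely following result (1); the random inputs are placeholders for recorded responses:

```python
model = SADNN()
responses = torch.randn(8, 1500)            # placeholder responses, batch of 8 trials
recon, attn = model(responses)
mean_score = attn.mean(dim=(0, 1))          # average attention received per neuron
top25 = mean_score.topk(int(0.25 * mean_score.numel())).indices
```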
