December 2022 | Volume 22, Issue 14 | Open Access
Vision Sciences Society Annual Meeting Abstract
Decoding Visual Feature Versus Visual Spatial Attention Control with Deep Neural Networks
Author Affiliations
  • Yun Liang
    J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL
  • Sreenivasan Meyyappan
    Center for Mind and Brain, University of California, Davis, CA
  • Mingzhou Ding
    J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL
Journal of Vision December 2022, Vol. 22, 3258. https://doi.org/10.1167/jov.22.14.3258

Citation: Yun Liang, Sreenivasan Meyyappan, Mingzhou Ding; Decoding Visual Feature Versus Visual Spatial Attention Control with Deep Neural Networks. Journal of Vision 2022;22(14):3258. https://doi.org/10.1167/jov.22.14.3258.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Multivoxel pattern analysis (MVPA) examines differences in multivoxel activity patterns evoked by different cognitive conditions using machine learning methods such as logistic regression and support vector machines. These methods are linear, so nonlinear relationships in the data may go undetected. We addressed this problem by applying deep neural networks. fMRI data were recorded from humans (n=20) performing a cued visual spatial/feature attention task in which an auditory cue instructed the subject to attend either the left or the right visual field (spatial trials) or either the red or the green color (feature trials). Following a random delay, two rectangular stimuli appeared, one in each visual field, and the subjects reported the orientation of the rectangle at the attended location (spatial trials) or with the attended color (feature trials). A deep neural network (DNN) was trained to take cue-evoked fMRI data as input features and predict trial labels. For feature (spatial) attention control, feature (spatial) trial data from 19 subjects were used to train a DNN model, which was then tested on the remaining subject; this process was repeated 20 times, and the 20 decoding accuracies were averaged. Using the whole brain, the accuracies for decoding feature attention control (cue red vs. cue green) and spatial attention control (cue left vs. cue right) were 59% and 61%, respectively, both significantly above the chance level of 50%. Heatmaps derived from the DNN models revealed regions that contribute to both feature and spatial attention control, as well as regions that contribute mainly to one or the other. In sum, DNNs can yield insights into attention control that complement those from other methods and provide a new approach for uncovering more complex relations between cognitive conditions and neural activity.
