September 2016, Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract
A Computational Biased Competition Model of Visual Attention using Deep Neural Networks
Author Affiliations
  • Hossein Adeli, Department of Psychology
  • Gregory Zelinsky, Department of Psychology
Journal of Vision, September 2016, Vol. 16, 193. https://doi.org/10.1167/16.12.193
Abstract

"Biased competition theory" proposes that visual attention reflects competition among bottom-up signals at multiple stages of processing, and the biasing of this competition by top-down spatial, feature, and object-based modulations. Our work advances this theory in two key respects: by instantiating it as a computational model having an image-based "front-end", thereby enabling predictions using real-world stimuli, and by using an 8-layer deep neural network to model ventral pathway visual processing. A categorical cue (object name) activates a specific frontal node (goal state; layer 8), which feeds activation back to modulate Inferior Temporal (IT; layers 7-6) and V4 (layer 5) using the same feedforward weights trained for object classification. This feedback is multiplied by the feedforward bottom-up activation, biasing the competition in favor of target features (feature-based attention). Reentrant connectivity between V4 and FEF selects a spatial location (spatial attention), causing the selective routing (attentional gating) of object information at that location. This routing constricts receptive fields of IT units to a single object and makes possible its verification as a member of the cued category. Biased retinotopic V4 activation and spatial biases from FEF and LIP (maintaining an Inhibition-of-Return map) project to the superior colliculus, where they integrate to create a priority map used to direct movements of overt attention. We tested our model using a categorical search task (15 subjects, 25 categories of common objects, 5 set sizes), where it predicted almost perfectly the number of fixations and saccade-distance travelled to search targets (attentional guidance) as well as recognition accuracy following target fixation. In conclusion, this biologically-plausible biased competition model, built using a deep neural network, not only can predict attention and recognition performance in the context of categorical search, it can also serve as a computational framework for testing predictions of brain activity throughout the cortico-collicular attention circuit.

Meeting abstract presented at VSS 2016
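
For readers who want a concrete feel for the two mechanisms the abstract describes, here is a minimal NumPy sketch of (1) the multiplicative feedback bias (feature-based attention) and (2) the collicular priority-map integration. All layer sizes, weight matrices, and function names (W_v4_it, feature_based_attention, and so on) are hypothetical stand-ins introduced for illustration; the actual model uses the trained feedforward weights of an 8-layer deep network, not the random toy weights used here.

```python
import numpy as np

# Toy stand-ins for the trained feedforward weights between "V4"
# (layer 5), "IT" (layers 6-7), and the frontal goal layer (layer 8).
rng = np.random.default_rng(0)
N_V4, N_IT, N_CATEGORIES = 512, 256, 25             # illustrative sizes
W_v4_it = rng.normal(size=(N_V4, N_IT))             # V4 -> IT weights
W_it_goal = rng.normal(size=(N_IT, N_CATEGORIES))   # IT -> goal weights

def feature_based_attention(v4, cue):
    """Bias IT and V4 responses toward the cued category by feeding
    goal-node activation back through the same feedforward weights and
    multiplying it with the bottom-up activation."""
    goal = np.zeros(N_CATEGORIES)
    goal[cue] = 1.0                                  # categorical cue
    fb_it = np.maximum(W_it_goal @ goal, 0)          # goal -> IT feedback
    it = np.maximum(v4 @ W_v4_it, 0)                 # bottom-up IT response
    it_biased = it * fb_it                           # multiplicative bias at IT
    fb_v4 = np.maximum(W_v4_it @ it_biased, 0)       # IT -> V4 feedback
    return v4 * fb_v4                                # multiplicative bias at V4

def priority_map(v4_salience, fef_bias, ior_map):
    """Integrate the biased retinotopic V4 map with an FEF spatial bias
    and a LIP inhibition-of-return map; return the peak location as the
    next movement of overt attention."""
    priority = v4_salience * fef_bias * (1.0 - ior_map)
    return np.unravel_index(np.argmax(priority), priority.shape)

# Usage on random toy data: bias features toward category 3, then pick
# the peak of a toy retinotopic priority map as the next fixation.
v4 = np.maximum(rng.normal(size=N_V4), 0)
v4_biased = feature_based_attention(v4, cue=3)

salience = rng.random((32, 32))                      # toy biased-V4 map
fef = rng.random((32, 32))                           # toy FEF spatial bias
ior = np.zeros((32, 32))                             # nothing inhibited yet
print(priority_map(salience, fef, ior))              # (row, col) of next fixation
```

The multiplicative (rather than additive) combination in this sketch follows the abstract's description of feedback being "multiplied by the feedforward bottom-up activation"; how the FEF and LIP maps are weighted before integration in the superior colliculus is an assumption of this sketch, not a detail given in the abstract.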
