Vision Sciences Society Annual Meeting Abstract  |  September 2019
Volume 19, Issue 10  |  Open Access
Learning to Attend in a Brain-inspired Deep Neural Network
Author Affiliations & Notes
  • Gregory J. Zelinsky
    Department of Psychology, Stony Brook University
    Department of Computer Science, Stony Brook University
  • Hossein Adeli
    Department of Psychology, Stony Brook University
Journal of Vision, September 2019, Vol. 19, 282d. doi: https://doi.org/10.1167/19.10.282d
Abstract

How is attention control learned? Most neuro-cognitive models avoid asking this question, focusing instead on how the prioritization and selection functions of attention affect neural and behavioral responses. Recently, we introduced ATTNet, an image-computable deep network that combines behavioral, neural, and machine-learning perspectives into a working model of the broad ATTention Network. ATTNet also has coarse biological plausibility: it is inspired by biased-competition theory, trained using deep reinforcement learning, and has a foveated retina. Through the application of reward during search, ATTNet learns to shift its attention to locations containing features of a rewarded object category. We tested ATTNet on two different search tasks, “microwave oven” and “clock”, using images of kitchen scenes (Microsoft COCO) depicting both a microwave and a clock (target present) or neither a microwave nor a clock (target absent). This design therefore perfectly controls for the visual input; any difference in the model’s behavior could only be due to target-specific applications of reward. Similar to the eye movements of our behavioral participants (n=60) searching the same scenes for the same target categories, ATTNet preferentially fixated clocks but not microwaves when previously rewarded for clocks, and preferentially fixated microwaves but not clocks when previously rewarded for microwaves. Analysis of target-absent search behavior also revealed clear scene context effects: ATTNet and participants looked at locations along walls when searching for a clock, and at locations along countertops when searching for a microwave. We therefore suggest a computational answer to one fundamental question: the simple pursuit of reward causes not only the prioritization of space in terms of expected reward, but also the use of these reward signals to control the shifts of what the literature has come to know as spatial attention.
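To make the reward-driven learning described above concrete, the following is a minimal, hypothetical sketch of how a learned priority map could come to shift attention toward reward-associated features using a REINFORCE-style policy gradient. It is not the authors' ATTNet implementation; the PyTorch setup, network sizes, toy scene, and reward definition are illustrative assumptions only.

```python
# Hypothetical sketch of reward-driven learning of attention shifts (REINFORCE).
# NOT the authors' ATTNet code: the PyTorch setup, layer sizes, toy "scene",
# and reward definition are placeholder assumptions for illustration only.
import torch
import torch.nn as nn

class AttentionPolicy(nn.Module):
    """Maps a feature map of a scene to a distribution over fixation locations."""
    def __init__(self, channels=8, grid=7):
        super().__init__()
        # A 1x1 convolution scores each location, i.e., a learned priority map.
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, features):                     # features: (1, C, grid, grid)
        priority = self.score(features).flatten(1)   # (1, grid * grid)
        return torch.distributions.Categorical(logits=priority)

policy = AttentionPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(200):
    # Toy scene: random features, with target-specific features added at one cell.
    features = torch.randn(1, 8, 7, 7)
    target_cell = torch.randint(0, 49, (1,))
    features.view(1, 8, -1)[:, :, target_cell] += 2.0

    dist = policy(features)
    fixation = dist.sample()                           # a shift of attention
    reward = (fixation == target_cell).float()         # reward only if target fixated

    loss = -(dist.log_prob(fixation) * reward).mean()  # reinforce rewarded shifts
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

As in the abstract's description, the only training signal here is reward delivered when the selected location contains target features, and the gradient alone biases the priority map toward those features; no explicit supervision of where to attend is provided.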

Acknowledgement: This work was funded by NSF IIS award 1763981. 