September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2017
Deep neural network features decoded from fMRI responses to scenes predict eye movements
Author Affiliations
  • Thomas O'Connell
    Department of Psychology, Yale University
  • Marvin Chun
    Department of Psychology, Yale University
    Department of Neuroscience, Yale School of Medicine
Journal of Vision August 2017, Vol.17, 1273. doi: https://doi.org/10.1167/17.10.1273
Abstract

Neural representations in visually responsive brain regions are predicted well by features within deep hierarchical convolutional neural networks (HCNNs) trained for visual recognition (Yamins et al. 2014, Khaligh-Razavi & Kriegeskorte 2014, Cichy et al. 2016). Additionally, salience maps derived from HCNN features produce state-of-the-art prediction of human eye movements in natural images (Kümmerer et al. 2015, Kümmerer et al. 2016). Thus, we explored whether HCNN models might support the representation of spatial attention in the human brain. We computed salience maps from HCNN features reconstructed from functional magnetic resonance imaging (fMRI) activity and then tested whether these fMRI-decoded salience maps predicted eye movements. We measured brain activity evoked by natural scenes using fMRI while participants (N=5) completed an old/new continuous recognition task and, in a separate session, measured eye movements for the same natural scenes. Partial least squares regression (PLSR) was then used to reconstruct, from BOLD activity, features derived from five layers of the VGG-19 network trained for scene recognition (Simonyan & Zisserman 2015, Zhou et al. 2016). Spatial activity in the reconstructed VGG features was then averaged across channels (filters) within each layer and across all layers to compute an fMRI-decoded salience map for each image. Group-average fMRI-decoded salience maps from regions in occipital, temporal, and parietal cortex predicted eye movements (p < 0.001) from an independent group of observers (O'Connell & Walther 2015). Within-participant prediction of eye movements was significant for fMRI-decoded salience maps from V2 (p < 0.05). These results show that representation of spatial attention priority in the brain may be supported by features similar to those found in HCNN models. Our findings also suggest a new method for evaluating the biological plausibility of computational salience models.

Meeting abstract presented at VSS 2017
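
As a rough illustration of the decoding pipeline described in the abstract, the sketch below maps BOLD response patterns to flattened HCNN feature maps with partial least squares regression and averages the reconstructed maps across channels to form a spatial salience map for each held-out image. The synthetic data, array shapes, layer dimensions, and the use of scikit-learn's PLSRegression are assumptions made for illustration only; this is not the authors' implementation.

# Minimal sketch of an fMRI-decoded salience map pipeline (illustrative assumptions throughout).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Hypothetical data: BOLD patterns (images x voxels) and target HCNN feature
# maps for one VGG-19 layer (images x channels x height x width).
n_images, n_voxels = 100, 2000
n_channels, h, w = 64, 14, 14
bold = rng.standard_normal((n_images, n_voxels))
features = rng.standard_normal((n_images, n_channels, h, w))

# Flatten the feature maps so PLSR can map voxels -> all feature dimensions.
Y = features.reshape(n_images, -1)

# Fit PLSR on training images, reconstruct features for held-out images.
train, test = slice(0, 80), slice(80, 100)
pls = PLSRegression(n_components=20)
pls.fit(bold[train], Y[train])
Y_hat = pls.predict(bold[test])

# Reshape reconstructions back into feature maps and average across channels
# to obtain one spatial salience map per held-out image (maps reconstructed
# from several layers would then also be averaged across layers).
recon = Y_hat.reshape(-1, n_channels, h, w)
salience_maps = recon.mean(axis=1)  # shape: (n_test_images, h, w)
print(salience_maps.shape)

The resulting maps could then be compared against measured fixation maps, in the spirit of the prediction analyses reported in the abstract.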
