September 2021, Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Modeling feedback representations in ventral visual cortex using a generative adversarial autoencoder
Author Affiliations & Notes
  • Yalda Mohsenzadeh
    University of Western Ontario
    Vector Institute for Artificial Intelligence
  • Haider Al-Tahan
    University of Western Ontario
  • Acknowledgements: We would like to thank Western BrainsCAN for the generous support of this research. Computational modeling was conducted on Compute Canada resources.
Journal of Vision September 2021, Vol.21, 2746. doi:https://doi.org/10.1167/jov.21.9.2746
Abstract

In less than the blink of an eye, the human brain processes visual sensory input, interprets the visual scene, identifies faces, and recognizes objects. Decades of neurophysiological studies have demonstrated that the brain accomplishes these complicated tasks through a dense network of feedforward and feedback neural processes in the ventral visual cortex. To date, these visual processes have been modeled primarily with feedforward hierarchical neural networks, and the computational role of feedback processes remains poorly understood. In this study, we developed a generative autoencoder neural network model and adversarially trained it on a large, categorically diverse data set of images (objects, scenes, faces, and animates). We hypothesized that the feedback processes in the ventral visual pathway can be represented by the reconstruction of visual information performed by the generative model. To test this hypothesis, we compared the representational similarity of activity patterns in the internal layers of the proposed model with magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) data acquired while participants (N=15) viewed a set of 156 images organized in four categories: objects, scenes, faces, and animates. Our proposed model identified two segregated neural dynamics in the ventral visual pathway. The representational comparison with MEG data revealed a temporal hierarchy of processes transforming low-level visual information into high-level semantics in the feedforward sweep, and a temporally subsequent dynamic of inverse processes reconstructing low-level visual information from a high-level latent representation in the feedback sweep.
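The MEG comparison described above is a representational similarity analysis: build a representational dissimilarity matrix (RDM) from a model layer's activity patterns and from the neural patterns, then correlate the upper triangles of the two RDMs. The following is a minimal sketch only, assuming correlation-distance RDMs compared with Spearman correlation; the arrays are synthetic stand-ins, not the study's recordings, and all names are ours:

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activity patterns of every pair of stimuli.
    patterns: (n_stimuli, n_features) array."""
    return 1.0 - np.corrcoef(patterns)

def rdm_similarity(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    a, b = rdm_a[iu], rdm_b[iu]
    # Spearman = Pearson on ranks (ties are negligible for continuous data)
    rank = lambda x: np.argsort(np.argsort(x)).astype(float)
    return np.corrcoef(rank(a), rank(b))[0, 1]

# Synthetic stand-ins: 156 stimuli as in the study; feature counts are arbitrary
rng = np.random.default_rng(0)
layer_acts = rng.standard_normal((156, 512))                  # one model layer
neural = layer_acts + 0.5 * rng.standard_normal((156, 512))   # a correlated "MEG" pattern

r = rdm_similarity(rdm(layer_acts), rdm(neural))
```

In the time-resolved MEG analysis, a similarity of this kind would be computed at each time point, tracing when each model layer's representation emerges in the neural signal.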
Further, representational comparison of the model's encoder and decoder layers with two fMRI regions of interest, namely early visual cortex (EVC) and inferior temporal area (IT), revealed a growing categorical representation (similar to IT) along the encoder layers (feedforward sweep) and a progression toward detailed visual representations (akin to EVC) along the decoder layers (feedback sweep).
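The layer-by-ROI comparison can be sketched the same way: correlate each layer's RDM with each ROI's RDM and inspect how the similarity profile shifts across the encoder and decoder. A hedged sketch with synthetic data; the function and variable names are ours, not from the paper:

```python
import numpy as np

def rdm(patterns):
    """1 - Pearson correlation between each pair of stimulus patterns."""
    return 1.0 - np.corrcoef(patterns)

def spearman(a, b):
    """Spearman correlation via Pearson on ranks."""
    rank = lambda x: np.argsort(np.argsort(x)).astype(float)
    return np.corrcoef(rank(a), rank(b))[0, 1]

def layer_roi_profile(layer_acts, roi_patterns):
    """Spearman similarity of every model-layer RDM to every ROI RDM.
    layer_acts: {layer_name: (n_stim, n_feat)}; roi_patterns: {roi: (n_stim, n_vox)}."""
    roi_rdms = {roi: rdm(p) for roi, p in roi_patterns.items()}
    profile = {}
    for name, acts in layer_acts.items():
        m = rdm(acts)
        iu = np.triu_indices_from(m, k=1)
        profile[name] = {roi: spearman(m[iu], r[iu]) for roi, r in roi_rdms.items()}
    return profile

# Toy data: an "early" code resembling EVC and a "late" code resembling IT
rng = np.random.default_rng(1)
early = rng.standard_normal((40, 64))
late = rng.standard_normal((40, 64))
layers = {"encoder_1": early, "encoder_4": late}
rois = {"EVC": early + 0.3 * rng.standard_normal((40, 64)),
        "IT": late + 0.3 * rng.standard_normal((40, 64))}

profile = layer_roi_profile(layers, rois)
```

With the study's data, the expectation under the reported result would be IT similarity growing along the encoder layers and EVC similarity growing along the decoder layers, matching the described feedforward/feedback split.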
