August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
A variational autoencoder provides novel, data-driven features that explain functional brain representations in a naturalistic navigation task
Author Affiliations & Notes
  • Cheol Jun Cho
    UC Berkeley
  • Tianjiao Zhang
    UC Berkeley
  • Jack L. Gallant
    UC Berkeley
  • Footnotes
Acknowledgements: This work is funded by grants from Ford URP, the NIH, ONR, and an NSF GRFP.
Journal of Vision August 2023, Vol.23, 5728. doi:https://doi.org/10.1167/jov.23.9.5728

      Cheol Jun Cho, Tianjiao Zhang, Jack L. Gallant; A variational autoencoder provides novel, data-driven features that explain functional brain representations in a naturalistic navigation task. Journal of Vision 2023;23(9):5728. https://doi.org/10.1167/jov.23.9.5728.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Navigation in the real world is a complex task that engages several cognitive systems, brain regions, and networks. Current models of brain systems mediating navigation reflect relatively simple psychological theories and so may miss important aspects of cognitive function in this complex task. Here we develop an alternative, data-driven approach that uses a variational autoencoder to generate novel hypotheses about brain representation during navigation. The key idea is to generate features from a trained autoencoder to create novel encoding models that successfully model brain activity. As a proof of concept, we applied this method to fMRI data acquired from three participants who performed a taxi-driver task in a large virtual environment. A spatiotemporal variational autoencoder was trained on the visual stimulus seen by the participants while they performed the task, and ridge regression was used to estimate voxelwise encoding models based on the latent features learned by the autoencoder. Inspection of the fit voxelwise encoding models shows that the latent autoencoder features explain variance in brain activity broadly across the cerebral cortex. To interpret the fit encoding models, a new cluster analysis method called model connectivity (MC) was used to recover functional networks by grouping voxels according to their encoding model weights. MC recovers several different networks from the data, encompassing motor (M1 and S1), vision (V1-V4), navigation (RSC, OPA, PPA, and PFC), and theory-of-mind (TPJ and PFC) ROIs, as well as other regions of the cerebral cortex. Finally, to facilitate interpretation, the average weights obtained within each identified cluster were decoded. This procedure revealed specific visual-motor features, such as approaching vehicles and destination instructions, that are preferentially represented in distinct functional networks. In sum, these preliminary data suggest that a variational autoencoder can reveal novel aspects of cortical representation during naturalistic navigation.
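To make the encoding-model step concrete, the sketch below fits per-voxel ridge regressions from autoencoder latent features to BOLD responses. This is a minimal illustration, not the authors' implementation: the array shapes, the synthetic data, the use of scikit-learn's RidgeCV, and the held-out correlation metric are all assumptions, and details the abstract omits (hemodynamic delays, feature preprocessing, banded regularization) are skipped.

```python
# Sketch of a voxelwise encoding-model fit, assuming VAE latent features
# have already been extracted and temporally aligned to the fMRI data.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical shapes: n_trs fMRI volumes, n_latents VAE features, n_voxels voxels.
n_trs, n_latents, n_voxels = 1000, 128, 5000
X = rng.standard_normal((n_trs, n_latents))   # stand-in for VAE latent features
Y = rng.standard_normal((n_trs, n_voxels))    # stand-in for BOLD responses

# Hold out the final 20% of timepoints; shuffle=False preserves temporal order.
X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, shuffle=False)

# One ridge model per voxel, fit jointly; a separate regularization strength
# is selected per voxel by leave-one-out cross-validation.
model = RidgeCV(alphas=np.logspace(0, 4, 10), alpha_per_target=True)
model.fit(X_train, Y_train)

# Encoding-model weights: one n_latents-long vector per voxel.
W = model.coef_                                # shape (n_voxels, n_latents)

# Per-voxel prediction accuracy on held-out data (correlation of predicted
# and observed responses), used later to screen well-modeled voxels.
Y_pred = model.predict(X_test)
r = np.array([np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1]
              for v in range(n_voxels)])
print("median held-out correlation:", np.median(r))
```

With real stimuli and responses, the weight matrix W is the quantity carried forward: each voxel's row describes how the VAE latent features map onto that voxel's activity.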
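The model connectivity (MC) step then groups voxels by the similarity of their encoding-model weight vectors, so that each cluster is a putative functional network. The abstract does not specify the clustering algorithm, so the sketch below is one plausible reading: unit-normalizing the weight vectors and applying k-means, which approximates clustering by cosine similarity. The function name, the reliability threshold, and the number of clusters are all hypothetical.

```python
# Sketch of a model-connectivity-style clustering of encoding-model weights.
import numpy as np
from sklearn.cluster import KMeans

def model_connectivity_clusters(W, n_clusters=8, min_r=0.05, r=None, seed=0):
    """Cluster voxels by the similarity of their encoding-model weights.

    W : (n_voxels, n_latents) ridge weights from the encoding models.
    r : optional (n_voxels,) held-out prediction accuracy; voxels below
        min_r are excluded so clusters reflect well-modeled voxels only.
    """
    keep = np.arange(W.shape[0]) if r is None else np.flatnonzero(r > min_r)
    # Unit-normalize each voxel's weight vector so k-means distance
    # approximates cosine (dis)similarity between weight profiles.
    Wn = W[keep] / (np.linalg.norm(W[keep], axis=1, keepdims=True) + 1e-12)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(Wn)
    # Average weight vector per cluster; these cluster-level profiles are
    # the objects that get decoded and interpreted.
    centroids = np.stack([Wn[labels == k].mean(axis=0)
                          for k in range(n_clusters)])
    return keep, labels, centroids
```

In this reading, the final decoding step of the abstract would pass each cluster-average weight vector back through the VAE decoder to visualize the stimulus features (e.g., approaching vehicles, destination instructions) that each putative network preferentially represents.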
