September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
A voxel-wise encoding model for VR-navigation maps view-direction tuning at 7T-fMRI
Author Affiliations & Notes
  • Matthias Nau
    Kavli Institute for Systems Neuroscience, NTNU, Trondheim, Norway
  • Tobias Navarro Schröder
    Kavli Institute for Systems Neuroscience, NTNU, Trondheim, Norway
  • Markus Frey
    Kavli Institute for Systems Neuroscience, NTNU, Trondheim, Norway
  • Christian F. Doeller
    Kavli Institute for Systems Neuroscience, NTNU, Trondheim, Norway
    Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
Journal of Vision September 2019, Vol.19, 162b. doi:https://doi.org/10.1167/19.10.162b
Abstract

The brain derives cognitive maps from visual inputs, raising the question of how information is transformed and communicated across the neural systems involved. Inspired by the prior success of encoding models in characterizing the tuning and topography of responses in visual cortex, we developed voxel-wise and multivariate encoding models to examine the networks supporting spatial orienting during active navigation. We used 7T-fMRI to monitor brain activity at submillimeter resolution while participants freely navigated a virtual environment. We combined their virtual view direction (vVD) with a variety of circular-Gaussian vVD basis functions that differed in kernel spacing and width. For each parameter set, we then estimated model weights using an iterative training procedure that maximized predictability, in order to find the optimal parameters explaining each voxel’s time course. Using the resulting model weights, we examined fMRI responses in held-out data and showed that vVD predicts activity in early visual, medioparietal and parahippocampal cortices involved in self-motion and scene processing, as well as in mediotemporal regions known to support navigation, such as the entorhinal cortex. Activity in each region was best predicted by distinct vVD tuning widths, which increased anteriorly from medioparietal to entorhinal cortices. Inverting the encoding model also reconstructed vVD from fMRI responses in regions with high-level mnemonic function, such as the hippocampus, and revealed a vVD tuning topology within the entorhinal cortex akin to the topology of head-direction representations previously observed in rodents. Our approach demonstrates the feasibility of using encoding models to study visual tuning during naturalistic behavior and sheds new light on how the currently viewed scene is encoded in the brain during active navigation.
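The core of the approach — expressing each voxel's time course as a weighted sum of circular-Gaussian view-direction channels, fitting the weights on training data, and testing prediction accuracy on held-out data — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the kernel count, tuning width, closed-form ridge estimation, and the synthetic voxel data are all assumptions chosen for demonstration (the abstract describes an iterative procedure that searched over kernel spacings and widths).

```python
import numpy as np

def circular_gaussian_basis(vd_deg, n_kernels=16, fwhm_deg=30.0):
    """Project view directions (degrees) onto circular-Gaussian channels.

    vd_deg    : 1-D array of virtual view directions, one per fMRI volume.
    n_kernels : number of evenly spaced direction kernels (assumed value).
    fwhm_deg  : kernel tuning width as full width at half maximum (assumed).
    Returns a (n_timepoints, n_kernels) design matrix.
    """
    centers = np.arange(n_kernels) * 360.0 / n_kernels
    sigma = fwhm_deg / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    # Signed circular distance between each sample and each kernel center,
    # wrapped into [-180, 180) so that 350 deg and 10 deg are 20 deg apart.
    d = (vd_deg[:, None] - centers[None, :] + 180.0) % 360.0 - 180.0
    return np.exp(-0.5 * (d / sigma) ** 2)

def fit_ridge(X, y, alpha=1.0):
    """Closed-form ridge regression weights for one voxel's time course."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ y)

# --- toy demonstration on synthetic data ---
rng = np.random.default_rng(0)
vd_train = rng.uniform(0, 360, 500)   # training view directions
vd_test = rng.uniform(0, 360, 200)    # held-out view directions

X_train = circular_gaussian_basis(vd_train)
X_test = circular_gaussian_basis(vd_test)

# Simulate a voxel tuned to roughly 90 degrees, plus measurement noise.
true_w = circular_gaussian_basis(np.array([90.0]))[0]
y_train = X_train @ true_w + 0.1 * rng.standard_normal(500)
y_test = X_test @ true_w + 0.1 * rng.standard_normal(200)

w = fit_ridge(X_train, y_train)
# Held-out prediction accuracy: correlate predicted and observed responses.
r = np.corrcoef(X_test @ w, y_test)[0, 1]
```

In the full analysis this per-voxel fit would be repeated across kernel spacings and widths, with the best-predicting width taken as that voxel's tuning estimate; inverting the fitted weights then allows vVD to be reconstructed from multivoxel response patterns.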
