Vision Sciences Society Annual Meeting Abstract  |  September 2019
Open Access
fMRI encoding model of virtual navigation
Author Affiliations & Notes
  • Zhengang Lu
    Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
  • Joshua B Julian
    Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
    Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, Norwegian University of Science and Technology, Trondheim, Norway
  • Russell A Epstein
    Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
Journal of Vision September 2019, Vol.19, 246a. doi:https://doi.org/10.1167/19.10.246a
Abstract

Neurophysiological recording studies of freely moving rodents have identified neurons that represent spatial quantities such as location, heading, and distances to environmental boundaries. We explored the possibility that voxel-wise encoding modelling of fMRI data obtained during virtual navigation could be used to identify similar representations in humans. To test this idea, a participant performed a “taxi-cab” task within two large (201 vm × 120 vm) virtual reality cities (city 1A for 144 min and city 1B for 48 min). The cities had identical spatial layouts and buildings, but different surface textures on the buildings and roads. On each trial, the participant searched for a passenger at a random location and took him to an indicated destination. fMRI responses during navigation were regressed against predictor variables generated from a variety of navigation-related feature spaces, corresponding to location within the city, virtual head direction, egocentric distances to boundaries, and allocentric distances to boundaries. Thus, each feature space quantified a specific hypothesis about how navigation-related information might be represented in the brain, and the resulting beta weights revealed how specific feature spaces were represented in each voxel. To validate the encoding models estimated from city 1A data, we examined model predictions against held-out brain activity recorded during navigation in both cities 1A and 1B. The encoding models significantly predicted the activity of voxels distributed across a wide range of brain regions, with consistent networks of significantly predicted voxels in both cities. These results suggest that these networks encode spatial information that is at least partially invariant to the visual appearance of the environment. More generally, our results suggest that voxel-wise encoding models can be used to investigate the neural basis of spatial coding during unconstrained, dynamic navigation.
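The following is a minimal sketch of the kind of voxel-wise encoding analysis the abstract describes: navigation variables are expanded into feature-space regressors, per-voxel weights are estimated on training runs, and prediction accuracy is computed on held-out runs. The feature constructions, the use of ridge regression, the regularization value, and the synthetic data are all illustrative assumptions, not the authors' actual pipeline or parameters.

```python
# Illustrative voxel-wise encoding model, assuming ridge regression and
# synthetic navigation/fMRI data (not the authors' exact methods).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# --- Synthetic "navigation" time series (stand-ins for the VR logs) ---
n_tr = 600                            # fMRI time points (TRs)
x = rng.uniform(0, 201, n_tr)         # virtual x position (vm)
y = rng.uniform(0, 120, n_tr)         # virtual y position (vm)
hd = rng.uniform(0, 2 * np.pi, n_tr)  # virtual head direction (radians)

# --- Feature spaces: each encodes a hypothesis about spatial coding ---
def location_features(x, y, n_bins=6):
    """One-hot occupancy of a coarse spatial grid over the city."""
    xb = np.digitize(x, np.linspace(0, 201, n_bins + 1)[1:-1])
    yb = np.digitize(y, np.linspace(0, 120, n_bins + 1)[1:-1])
    feats = np.zeros((len(x), n_bins * n_bins))
    feats[np.arange(len(x)), xb * n_bins + yb] = 1.0
    return feats

def head_direction_features(hd, n_basis=8):
    """Circular (von Mises-like) basis over virtual heading."""
    centers = np.linspace(0, 2 * np.pi, n_basis, endpoint=False)
    return np.exp(np.cos(hd[:, None] - centers[None, :]) * 4.0)

def boundary_distance_features(x, y):
    """Allocentric distances to the four city boundaries."""
    return np.column_stack([x, 201 - x, y, 120 - y])

X = np.column_stack([
    location_features(x, y),
    head_direction_features(hd),
    boundary_distance_features(x, y),
])

# Synthetic voxel responses: features times weights plus noise.
n_vox = 50
true_w = rng.normal(0, 1, (X.shape[1], n_vox))
Y = X @ true_w + rng.normal(0, 5, (n_tr, n_vox))

# --- Fit per-voxel models on training runs, test on held-out runs ---
train, test = np.arange(0, 450), np.arange(450, n_tr)
model = Ridge(alpha=10.0)             # one ridge fit per voxel column
model.fit(X[train], Y[train])
pred = model.predict(X[test])

# Prediction accuracy: correlation of predicted vs. observed responses.
r = np.array([np.corrcoef(pred[:, v], Y[test][:, v])[0, 1]
              for v in range(n_vox)])
print(f"median held-out prediction r = {np.median(r):.2f}")
```

In a cross-environment test like the one reported, the same fitted weights would also be used to predict responses recorded in the visually distinct city (city 1B), so that above-chance prediction indicates spatial coding that generalizes across surface appearance.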

Acknowledgement: NIH R21 EY022751 