Abstract
Head direction cells, which code facing direction in the environment, have been extensively studied in freely moving rodents. fMRI studies have identified heading codes in humans as well; however, these are typically assessed during tasks that involve spatial memory recall in response to static stimuli. A recent neuroimaging study showed head direction codes in visual, retrosplenial, parahippocampal, and medial temporal cortices during dynamic navigation in a small circular arena (Nau et al., 2020). However, it is unclear whether the head direction codes observed in such a small-scale, simple environment would extend to more realistic, large-scale spaces, or whether they would be tolerant to changes in visual cues. Here we address these issues using voxel-wise encoding models of fMRI data obtained during dynamic navigation in two complex virtual environments. Fifteen participants performed a “taxi-cab” task in two large virtual reality cities (201 vm × 120 vm), which had identical spatial layouts and buildings but different surface textures on the buildings and roads. We modeled participants’ virtual head direction using circular Gaussian functions with a width of 8°, centered on preferred head directions sampled at 8° intervals across the full 360° range. Using model weights estimated from data in one version of the city, we predicted fMRI responses in the other version (cross-city validation). Our head direction encoding model significantly predicted the activity of voxels in early visual cortex (EVC) and retrosplenial cortex (RSC), with significantly predictive voxels in all participants. The presence of head direction codes in EVC and RSC suggests that the visual system could encode heading information that is invariant to changes in the environment’s appearance during dynamic, visually guided navigation.
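To make the modeling approach concrete, the following is a minimal sketch (Python/NumPy, not the authors’ code) of a circular Gaussian head-direction encoding model with cross-city validation. The ridge regularization, the reading of “width” as the Gaussian σ, the simulated data, and all names are illustrative assumptions; a real fMRI pipeline would additionally convolve the basis with a hemodynamic response function and include nuisance regressors.

```python
import numpy as np

def circ_dist_deg(a, b):
    """Signed circular distance in degrees, wrapped to [-180, 180)."""
    return (a - b + 180.0) % 360.0 - 180.0

def hd_basis(theta_deg, centers_deg, sigma_deg=8.0):
    """Circular Gaussian head-direction basis.

    theta_deg   : (n_samples,) virtual head direction at each fMRI time point
    centers_deg : (n_basis,) preferred directions (here, every 8 degrees)
    Returns an (n_samples, n_basis) design matrix.
    """
    d = circ_dist_deg(theta_deg[:, None], centers_deg[None, :])
    return np.exp(-0.5 * (d / sigma_deg) ** 2)

def fit_ridge(X, Y, lam=1.0):
    """Closed-form ridge regression: one weight vector per voxel (column of Y)."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)

# --- toy usage: estimate weights in city A, validate in city B (assumed setup) ---
rng = np.random.default_rng(0)
centers = np.arange(0.0, 360.0, 8.0)           # 45 preferred directions
theta_A = rng.uniform(0.0, 360.0, size=500)    # head direction samples, city A
theta_B = rng.uniform(0.0, 360.0, size=500)    # head direction samples, city B

X_A, X_B = hd_basis(theta_A, centers), hd_basis(theta_B, centers)
true_W = rng.normal(size=(len(centers), 100))  # simulated tuning for 100 voxels
Y_A = X_A @ true_W + rng.normal(scale=0.5, size=(500, 100))
Y_B = X_B @ true_W + rng.normal(scale=0.5, size=(500, 100))

W = fit_ridge(X_A, Y_A)                        # weights from one city version
Y_hat = X_B @ W                                # predicted responses in the other

# Per-voxel prediction accuracy: correlate predicted with observed time series.
r = np.array([np.corrcoef(Y_hat[:, v], Y_B[:, v])[0, 1] for v in range(100)])
print(f"median cross-city prediction r = {np.median(r):.2f}")
```

In this scheme, a voxel counts as carrying head direction information when the weights learned in one city predict its responses in the differently textured city better than chance, which is what makes the cross-city validation a test of tolerance to visual appearance changes.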