Abstract
As we navigate our world, we create a representation of the spatial structure of our environment that we store as visual memories. Yet, how visual experience is integrated across multiple views of an environment to create continuous, spatial representations is not well understood. Here, we created an experimental paradigm using head-mounted virtual reality (VR) to study how humans build visuospatial representations of large-scale, real-world environments. Specifically, we recreated a photorealistic town in VR that people could navigate using a controller, and we developed a series of tasks that assess various aspects of visuospatial memory. To validate this paradigm, participants (N=13) navigated through a novel real-world town in VR across three study sessions and were then tested on their memory of the town in four tasks: scene recognition, panoramic memory, judging-relative-direction (JRD), and cognitive map. During the scene recognition task, participants performed an old/new judgment on scene views from the studied town (vs. visually similar scenes from elsewhere in the world). In the panoramic memory task, participants were tested on their memory of the panoramic structure of a local environment. In the JRD task, participants were tested on their ability to point across town to different buildings. Finally, in the map task, participants matched scene views from the town to their corresponding locations on a map of the town. Participants’ accuracy was significantly above chance across all four memory tasks (all p<0.001), which suggests that participants were able to build a representation of the spatial structure of a newly learned environment. Significant across-subject variability in each task suggests that these tasks may be productive tools for studying individual differences in visuospatial memory. Overall, our paradigm offers an opportunity to assess the components of the visuospatial memories humans form when encoding novel real-world environments.