Abstract
This study tested the effect of providing a vista view of an environment on spatial learning. During normal navigation, we can perceive only limited regions at a time; to learn a spatial layout, we must integrate information across multiple views. If an unoccluded view of the full environment were displayed (a vista view), would this improve spatial learning? We tested this experimentally using virtual reality (VR). In the normal condition, subjects navigated through a virtual urban environment to learn the locations of eight target buildings. In the vista condition, the task was the same except that most non-target buildings were compressed in height, allowing subjects to see the whole space and the configuration of target buildings. We tested these conditions in two preregistered experiments. In Experiment 1, the vista view was presented during travel to a target; in Experiment 2, it was presented for 15 s before each trial. After the learning phase, subjects were tested on judgments of relative direction (JRD) to measure survey knowledge and on wayfinding to measure route knowledge. We hypothesized that vista views would improve survey knowledge by allowing subjects to directly observe the locations of targets relative to each other and to global landmarks, but that there might be little or no benefit for route knowledge. Surprisingly, we observed no detectable benefit of the vista view in either experiment or task. Even though the vista view allowed subjects to see the configuration of targets at once, there was no improvement in JRD accuracy or wayfinding performance compared to the normal condition, in which visibility was limited to local regions. Subjects with high and low sense of direction showed comparable results. Our results suggest that a street-level unoccluded view may have limited benefit for learning an environment through navigation.