Abstract
When navigating through large-scale spaces, visual cues play an important role in localization and path selection. For these cues to be useful, the navigator must associate them with specific states (positions and orientations) within the environment. The current studies investigate the channel capacity of the association between visual information and the states within the environment.
To investigate the issue of spatial channel capacity, we trained and tested subjects in six computer-generated environments containing different amounts of information. This manipulation was achieved by varying the number of states that had unique views within an environment. Visual landmarks were placed within each environment such that every state generated a unique visual signature. The smallest environment consisted of two hallways with 12 states (3.58 bits of information); the largest consisted of 40 hallways with 132 states (7.04 bits of information). To measure the mutual information between the visual input and the spatial layout, participants were shown a single view from one of the states within the environment. We calculated the mutual information for each participant in each environment as a function of the participant's state response and the view presented. We found no evidence of a capacity limitation for up to 7.04 bits of information. However, we did find that humans consistently lose about 1.25 bits of information regardless of the size of the environment. We have replicated this study with naïve participants who were not trained in any of the environments and found similar results.
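The quantities reported above follow from standard information-theoretic definitions; the specific estimator is not stated in this abstract, so the formulation below is an illustrative sketch. An environment with $N$ uniquely identifiable states carries

$H(S) = \log_2 N$ bits (e.g., $\log_2 12 \approx 3.58$, $\log_2 132 \approx 7.04$),

and the information transmitted by a participant is the mutual information between the presented view $V$ and the state response $R$,

$I(V;R) = H(V) - H(V \mid R)$,

so the reported loss of about 1.25 bits corresponds to $H(V) - I(V;R)$.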
AFOSR FA9550, DOD 5710001865, NIH EY 016089