Sahar Nadeem, Brian Stankiewicz; How much can vision tell us about where we are? Measuring the channel capacity between visual perception and spatial layout. Journal of Vision 2007;7(9):285. doi: https://doi.org/10.1167/7.9.285.
When navigating through large-scale spaces, visual cues play an important role in localization and path selection. For these cues to be useful, the navigator must associate them with specific states (positions and orientations) within the environment. The current studies investigate the channel capacity of the association between the visual information and the states within the environment.
To investigate the issue of spatial channel capacity, we trained and tested subjects in six computer-generated environments with different amounts of information. This manipulation was achieved by varying the number of states that had unique views within an environment. Visual landmarks were placed within the environment such that each state generated a unique visual signature. The smallest environment consisted of two hallways with 12 states (3.5 bits of information). The largest environment consisted of 40 hallways with 132 states (7.04 bits of information). To measure the mutual information between the visual input and spatial layout, participants were shown a single view from one of the states within the environment. We calculated the mutual information for each participant in each environment as a function of the participant's state response and the view presented. We found no evidence of a capacity limitation for up to 7.04 bits of information. However, we did find that humans consistently lose about 1.25 bits of information regardless of the size of the environment. We have replicated this study with naïve participants who were not trained in any of the environments and found similar results.