Abstract
The puzzle of how we achieve a sense of space has a long history. Lotze (1884) addressed how a sense of location (local sign) is translated into a sense of a spatial dimension, within which geometric properties such as distance and angle can be expressed. Retinal receptors are laid out in an orderly pattern that is subsequently reflected in spatial maps in the brain. However, distance on the retina, or on a cortical topographic map, is extrinsic to the visual system and must be mapped onto an intrinsic neural representation of perceptual space for spatial extent to be experienced. The key question is how we achieve a sense of spatial dimension from a sense of location, which is taken as a given. There are three predominant ideas about how this is achieved: spatial isomorphism, in which what we see reflects differences in distance or size in the brain; the proposal that spatial extent depends upon motor sensations or intentions related to eye movements; and the proposal that distance is computed from the correlation in cell activity between adjacent locations, with distance inversely proportional to the correlation. Each of these approaches has problems; for example, neural correlation may depend more on image structure than on adjacency, as in images containing repeating lines or sine gratings. Here a new computational strategy is outlined and assessed. An image-brightness-gradient approach to computing retinal disparity, which estimates image displacement from brightness differences and local image gradients at the same point in the left and right eyes' images, is re-purposed to compute the separation of points within a single image. This strategy allows a spatial separation to be computed from a non-spatial measure, the image brightness difference, together with a local spatial brightness gradient.
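The core computation described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a discrete 1-D luminance profile, and the function name and sampling choices (a central-difference gradient at the first point) are illustrative. The separation of two points is estimated from two non-spatial quantities, their brightness difference and the local brightness gradient, under the usual first-order assumption that the gradient is approximately constant over the separation.

```python
import numpy as np

def estimate_separation(image, x1, x2):
    """Estimate the spatial separation x2 - x1 of two points from
    non-spatial measurements: the brightness difference between the
    points and the local spatial brightness gradient at x1.

    Illustrative sketch only. The first-order approximation
        I(x2) - I(x1) ~ (x2 - x1) * dI/dx
    holds only for small separations over a smooth, non-zero gradient,
    the same assumption used in gradient-based disparity computation.
    """
    brightness_diff = image[x2] - image[x1]            # non-spatial measure
    gradient = (image[x1 + 1] - image[x1 - 1]) / 2.0   # local spatial gradient
    return brightness_diff / gradient                  # recovered separation

# Example: on a linear luminance ramp the estimate is exact.
x = np.arange(100, dtype=float)
ramp = 0.5 * x + 10.0
print(estimate_separation(ramp, 40, 43))  # → 3.0
```

On a linear ramp the estimate recovers the true separation exactly; on a curved luminance profile it degrades as the separation grows, which is why gradient-based schemes are typically restricted to small displacements.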