Abstract
The early stages of vision are often represented as parallel arrays of filters tuned to different attributes, such as orientation, spatial frequency or colour. The receptive fields of different filters vary in size, but are typically localised to a region of the photoreceptor matrix. How, then, should we account for the human ability to compare two objects with respect to a particular attribute when the discriminanda fall at arbitrary positions in the visual field? For some attributes (e.g. luminance, binocular disparity, temporal phase) discrimination is known to deteriorate as spatial separation increases; these discriminations may depend on dedicated local comparator neurons. But for other attributes (e.g. spatial frequency, orientation, hue, saturation) we have found that thresholds remain constant as spatial separation increases, even when the stimuli fall in opposite hemifields. We can now add speed to this latter group of attributes.
Our observers discriminated the speeds of two brief patches of moving random dots that were either juxtaposed or spatially separated by up to 10° (while eccentricity was held constant).
Thresholds were measured with a two-alternative forced-choice (2AFC) procedure, and the reference speed was jittered from trial to trial. Thresholds varied little with spatial separation, whether the two patches moved in the same direction, in opposite directions, or in orthogonal directions. Dedicated comparators for every pair of retinal positions and every combination of directions would entail a combinatorial explosion. Rather, we propose that information from local regions of primary visual cortex reaches the decision stage via a shared ‘cerebral bus’ that carries different information from moment to moment.
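
For illustration only, the sketch below (Python) makes the two quantitative points of the final paragraph concrete: how the number of dedicated pairwise comparators grows with the number of retinal positions and directions, and how a 2AFC speed-discrimination experiment with a jittered reference can be simulated. It is not part of the study; all numerical values (the counts of positions and directions, the jitter range, the observer's noise level) are invented assumptions rather than parameters of the experiment.

import math
import random

# 1. Cost of dedicated pairwise comparators (hypothetical counts).
n_positions = 1000        # distinct retinal positions to be compared
n_directions = 8          # motion directions distinguished at each position

position_pairs = math.comb(n_positions, 2)       # every pair of retinal positions
direction_combinations = n_directions ** 2       # every pairing of directions
dedicated_comparators = position_pairs * direction_combinations
print(f"Dedicated comparators required: {dedicated_comparators:,}")  # grows roughly quadratically with positions

# 2. Toy 2AFC speed-discrimination trial with a jittered reference.
# Assumes a Weber-like observer whose internal speed estimates carry
# multiplicative Gaussian noise; the noise level is invented for illustration.
def proportion_correct(delta, base_speed=8.0, jitter=0.2,
                       weber_noise=0.06, n_trials=5000):
    """Proportion of correct 'which patch is faster?' judgements for a
    fractional speed increment `delta` of the test over the reference."""
    correct = 0
    for _ in range(n_trials):
        reference = base_speed * (1.0 + random.uniform(-jitter, jitter))  # jittered reference speed
        test = reference * (1.0 + delta)                                  # faster test patch
        est_ref = reference * (1.0 + random.gauss(0.0, weber_noise))      # noisy internal estimates
        est_test = test * (1.0 + random.gauss(0.0, weber_noise))
        correct += est_test > est_ref
    return correct / n_trials

for delta in (0.02, 0.05, 0.10, 0.20):
    print(f"speed increment {delta:.0%}: P(correct) = {proportion_correct(delta):.2f}")

Jittering the reference in the simulation, as in the experiment, prevents the simulated observer from succeeding by comparing the test against a fixed remembered speed; the judgement must rest on a comparison of the two patches presented on that trial.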