Abstract
To examine whether the visual system performs motion computations after extracting shape, we measured accuracy for detecting the direction of rotation defined by shape axes. Random shapes consisting of 20 dots along evenly spaced radial spokes extending from a center were generated by distorting a 4 deg circle with Gaussian noise of mean 0.0 and SD 3.8 or 12.7 min along the spokes. Half of the shapes were bilaterally symmetric. Shapes were rotated at 12.5, 5.6, or 3.6 Hz in ten sequential 18° turns, so the dots always fell on the same spokes, with each dot moving inward or outward. The observers' task was to indicate the direction of rotation. To quantify sensitivity to global motion in the presence of noisy local motion, Gaussian perturbations of mean 0.0 and SD 0.0, 2.5, 5, 10, 20, or 40 min were added after each turn to each dot's eccentricity along its spoke. On half of the trials, the dynamic noise was constrained to be bilaterally symmetric. At low noise levels, discrimination of rotation direction was almost perfect, ruling out decisions based on pooling local motion signals extracted from nearest correspondences. With increasing asymmetric noise, accuracy decreased monotonically for all shapes and speeds. This performance could be explained by a winner-take-all choice among templates for global rotation, expansion, contraction, and translation applied in parallel to the candidate local motions of all dots. This model, however, could not account for the effect of symmetric noise on accuracy, which followed a U-shaped function: accuracy decreased at small noise levels but increased at larger levels. Since a different symmetric shape was presented on each frame, the advantage for symmetry could only arise if the observer extracted each axis of symmetry and perceived the direction of rotation of the inferred axes.
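The stimulus construction described above (20 dots on evenly spaced spokes, radial Gaussian distortion, 18° turns that re-map radii onto neighbouring spokes, and per-turn dynamic radial noise) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the base radius of 120 arcmin (a 4 deg = 240 min circle read as diameter), the choice of spoke 0 as the symmetry axis, and the mirroring construction for symmetric shapes and noise are all assumptions.

```python
import numpy as np

N_SPOKES = 20          # dots, one per radial spoke
STEP_DEG = 360 / N_SPOKES  # 18 deg: one turn advances the shape by exactly one spoke

def make_shape(rng, base_radius_min=120.0, sd_min=3.8, symmetric=False):
    """Radial dot eccentricities (arcmin) along the 20 spokes.

    base_radius_min assumes the '4 deg circle' names the diameter;
    the symmetry axis along spoke 0 is an illustrative assumption.
    """
    r = base_radius_min + rng.normal(0.0, sd_min, N_SPOKES)
    if symmetric:
        # mirror spokes 1..9 onto spokes 19..11 about the spoke-0 axis
        r[N_SPOKES - 1:N_SPOKES // 2:-1] = r[1:N_SPOKES // 2]
    return r

def rotate_one_turn(radii, direction=+1):
    """An 18 deg turn re-assigns each radius to the neighbouring spoke,
    so every dot moves purely inward or outward along its own spoke."""
    return np.roll(radii, direction)

def add_dynamic_noise(radii, rng, sd_min, symmetric=False):
    """Per-turn Gaussian perturbation of each dot's eccentricity
    (SD 0-40 min in the experiment), optionally mirror-symmetric."""
    noise = rng.normal(0.0, sd_min, N_SPOKES)
    if symmetric:
        noise[N_SPOKES - 1:N_SPOKES // 2:-1] = noise[1:N_SPOKES // 2]
    return radii + noise
```

Because rotation only permutes the radii among spokes, the nearest physical displacement of each dot is radial, which is why pooling nearest-correspondence local motions cannot recover the rotation direction.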
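One way to read the winner-take-all model is as a set of unit-vector template fields (tangential for rotation, radial for expansion/contraction, uniform for translation) correlated in parallel with the candidate local motion vectors, with the largest response winning. The normalization by mean dot product and the exact set of five templates below are assumptions made for this sketch, not details given in the abstract.

```python
import numpy as np

def template_responses(pos, vel):
    """pos: (N, 2) dot positions; vel: (N, 2) candidate local motion vectors.

    Each global-motion template is a unit vector field evaluated at the dot
    positions; its response is the mean dot product with the local motions.
    The winner-take-all decision returns the template with the largest response.
    """
    r = np.linalg.norm(pos, axis=1, keepdims=True)
    radial = pos / r                                            # expansion field
    tangential = np.stack([-pos[:, 1], pos[:, 0]], axis=1) / r  # CCW rotation field
    ccw = np.mean(np.sum(vel * tangential, axis=1))
    out = np.mean(np.sum(vel * radial, axis=1))
    resp = {
        "rot_ccw": ccw,
        "rot_cw": -ccw,
        "expand": out,
        "contract": -out,
        "translate": float(np.linalg.norm(vel.mean(axis=0))),
    }
    return max(resp, key=resp.get), resp
```

Under pure rotation the tangential template dominates, while symmetric radial noise drives the expansion/contraction templates; a rule of this form predicts the monotonic loss of accuracy with asymmetric noise but not the U-shaped effect of symmetric noise.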