Abstract
Purpose: To characterize the degree to which the visual system regularizes stimulus inputs during encoding. We show that this regularization can be so strong that identical inputs do not yield the best recognition performance, contrary to nearly all findings in the literature.

Method: We used face images in a same-different matching task, a task that strongly encourages template matching, i.e., identical stimuli yielding the best recognition. No trials were repeated in any experiment.

Exp. 1: Stimuli were created by randomly occluding 60%, 50%, …, or 10% of the pixels of a grayscale face image with red pixels. We found that, after presentation of a 60%-occluded face, a 50%-occluded version of the same face yielded the highest hit rate (p = 0.02, n = 25). In contrast, in the control condition with inverted faces, a 60%-occluded face was best recognized when the second image was identical.

Exp. 2: We parametrically distorted a face sideways to create stimuli with different degrees of asymmetry, and analyzed trials in which the first image was the most distorted. When the faces were inverted, the results showed a strong pattern of template matching: recognition was best (97% hits) when the two images were identical, and worst (50% hits) when the second image was the least distorted. In contrast, when the faces were upright, the results suggest that the visual system reduces the distortion during encoding: when the two images were identical, the hit rate was reduced to 77%, and when the second image was the least distorted, the hit rate was increased to 67%.

Discussion: In the literature, the more similar two images are, the better recognition performance is, regardless of the task. Here we demonstrate the opposite with the task that least invites it. We suggest that it is more fruitful to consider the degree to which the visual system abstracts away from a stimulus input than to assume that the internal representation of a stimulus is stored as-is, as an image-based template.
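As an illustration of the occlusion manipulation in Exp. 1, the sketch below generates a stimulus in which a given fraction of a grayscale image's pixels is painted red. This is a minimal sketch under stated assumptions (image size, occluder sampling without replacement, and RGB encoding are illustrative choices); it is not the authors' actual stimulus code.

```python
import numpy as np

def occlude(image, fraction, rng=None):
    """Occlude a random `fraction` of pixels in a 2-D grayscale image.

    Returns an RGB uint8 array in which occluded pixels are pure red,
    mimicking the red-pixel occlusion described for Exp. 1, plus the
    boolean occlusion mask. (Illustrative sketch, not the authors' code.)
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    n_occlude = int(round(fraction * h * w))
    # Choose pixel indices to occlude, without replacement.
    idx = rng.choice(h * w, size=n_occlude, replace=False)
    mask = np.zeros(h * w, dtype=bool)
    mask[idx] = True
    mask = mask.reshape(h, w)
    # Replicate the gray image into three channels, then paint occluded pixels red.
    rgb = np.repeat(image[:, :, None], 3, axis=2).astype(np.uint8)
    rgb[mask] = (255, 0, 0)
    return rgb, mask

# Example: a 60%-occluded version of a uniform gray 128x128 placeholder "face".
face = np.full((128, 128), 128, dtype=np.uint8)
stim, mask = occlude(face, 0.60, rng=np.random.default_rng(0))
```

Because no trials were repeated, each trial would call such a generator with a fresh random seed, so that every occlusion pattern is unique.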