Abstract
Our ability to identify a visual object in clutter is far worse than predicted by the eyes' optics and nerve fiber density. Although the ubiquity of this visual impairment, referred to as crowding, is generally well accepted, the appearance of crowded stimuli remains debated, in part because the pattern of perceptual errors made under crowded conditions depends on the specific task. For example, with stimuli that do not easily combine to form a unique symbol (e.g. letters or objects), observers typically confuse the source of objects and report either the target or a distractor. Alternatively, when continuous features are used (e.g. oriented gratings or line positions), observers often report a feature matching the average of the target and distractor features. To help reconcile these empirical differences, we developed a method of adjustment that allows detailed analysis of multiple error categories within a single task. We presented a randomly oriented Landolt C target at 10° eccentricity in the right peripheral visual field, in one of several distractor conditions. To report the target orientation, observers adjusted an identical foveal target. We converted each perceptual report into angular distances from the target orientation and from the orientations of the various distractor elements. We applied new analyses and modelling to these data to quantify whether perceptual reports show evidence of positional uncertainty, source confusion, and featural averaging on a trial-by-trial basis. Our results show that, in some conditions, observers reported a distractor orientation instead of the target in more than 50% of trials. Our data also reveal a heterogeneous distribution of perceptual reports that depends on target-distractor distance. We conclude that aggregate performance in visual crowding cannot be neatly labelled, and that the appearance of a crowded display is probabilistic.
Meeting abstract presented at VSS 2017
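
The conversion of each adjustment response into angular distances can be illustrated with a minimal sketch. This is not the authors' analysis code: the 360° circular period of the Landolt C gap, the variable names, and the example orientations below are all illustrative assumptions.

```python
import numpy as np

def angular_error(reported, reference, period=360.0):
    """Signed angular distance from a reference orientation to the reported
    orientation, wrapped into (-period/2, period/2]."""
    d = (reported - reference) % period
    return d - period if d > period / 2 else d

# Hypothetical single trial: target gap at 20 deg, two distractor gaps at
# 80 and 310 deg, and the observer's foveal adjustment set to 75 deg.
target, distractors, report = 20.0, [80.0, 310.0], 75.0

err_from_target = angular_error(report, target)                      # +55 deg
err_from_distractors = [angular_error(report, d) for d in distractors]  # -5, +125 deg

print(err_from_target, err_from_distractors)
```

In this hypothetical trial the report lies much closer to one distractor (−5°) than to the target (+55°), the kind of trial-by-trial evidence that could be read as source confusion rather than featural averaging; how such distances are modelled and categorised is the subject of the analyses described in the abstract.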