David Remus, Kalanit Grill-Spector; Discrimination training builds position tolerant object representations. Journal of Vision 2010;10(7):953. doi: https://doi.org/10.1167/10.7.953.
© ARVO (1962-2015); The Authors (2016-present)
Studies of perceptual learning have demonstrated that when observers are trained to discriminate low-level image features, such as orientation or contrast, in a single retinal position, performance improvements are specific to the trained stimuli and position. However, it is unknown whether perceptual learning of objects is similarly specific to both the trained stimuli and position. If perceptual learning of objects occurs at lower-level stages of visual processing, it may display position sensitivity. However, if learning of objects occurs in higher-level visual regions, which show decreased retinotopic sensitivity, learning effects may generalize across retinal positions. We investigated whether learning to discriminate among novel objects in a single retinal position improves performance in the trained position, in untrained positions, or in cases where the objects to be discriminated appear in two separate positions (swap). Fourteen observers were trained with feedback to discriminate among 24 exemplars from a single category of novel objects, each of which was shown in one of two possible retinal positions over the course of 5 days (8640 total exposures per observer). After training, observers' discrimination performance significantly increased (mean d′ increase = 1.1 ± 0.12 SEM) for the trained but not untrained objects. Training improvements were not significantly different across the trained-position, untrained-position, or swap conditions. Generalization across positions occurred despite the fact that a given object was observed in only one retinal position during training. Seventeen additional observers participated in an identical experiment but were not given feedback during training. Learning improvements were smaller without feedback (mean d′ increase = 0.70 ± 0.13 SEM) but showed the same category-specific, position-general profile.
Our results suggest that discrimination training on objects is mediated by high-level visual regions with large receptive fields, and that building position-invariant representations of objects does not require experience with these objects in many retinal positions.
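For readers unfamiliar with the sensitivity measure reported above: in a discrimination task, d′ is conventionally computed as the difference between the z-transformed hit rate and false-alarm rate. The following is an illustrative sketch only; the function name and the example rates are hypothetical and not taken from this study.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index d': difference of z-transformed hit and
    false-alarm rates (standard signal-detection formula)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical example: 84% hits, 16% false alarms
print(d_prime(0.84, 0.16))  # roughly 2.0
```

In practice, hit and false-alarm rates of exactly 0 or 1 are usually adjusted (e.g., by a small correction) before the z-transform, since the inverse normal CDF is undefined at those extremes.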