P. Zaenen, J. Wagemans, R. Vogels; Learning to discriminate highly similar three-dimensional objects: Qualitative versus quantitative differences and viewpoint (in)dependency. Journal of Vision 2001;1(3):101. doi: 10.1167/1.3.101.
The two aims of this study were (1) to assess whether discriminating between three-dimensional (3-D) objects that differed in a qualitative way was easier than discriminating between objects that differed in a quantitative way, even in a difficult subordinate-level classification task, and (2) to examine viewpoint dependency in both conditions. Four sets of simple 3-D objects consisting of two connected parts were constructed in the following way. Each original object was deformed slightly (on average about 14.5 luminance values per pixel) in either a qualitative (L) or a quantitative (N) way. For example, the distinction in the L-condition was between a straight and a curved object, whereas two different degrees of curvature were used in the N-condition. The parametric variation used to deform the original objects was equated in both conditions, and the resulting pixel-wise differences were identical as well. The experiment consisted of two phases. In the first phase, participants learned to classify three highly similar objects into three distinct categories (left, middle, and right response buttons) while feedback was provided. Four different 3-D orientations (0, 45, 90, and 135 deg) were trained separately for each object. In the test phase, eight new rotations (differing by 9 or 18 deg from the learned views) were added and feedback was no longer provided. Performance was always higher in the L-condition than in the N-condition. In the test phase, many of the novel views yielded better performance than the learned views, especially in the L-condition. These results conflict with the suggestion that within-category classification should be highly viewpoint-dependent. Moreover, qualitative differences seem to play a role in such tasks as well. Importantly, several simulations based on pixel-wise differences between stimuli were unable to explain the effects.