Yena Han, Gemma Roig, Gadi Geiger, Tomaso Poggio; On the Human Visual System Invariance to Translation and Scale. Journal of Vision 2017;17(10):471. doi: 10.1167/17.10.471.
Humans are able to recognize objects presented at different scales and positions. Numerous behavioral studies on the recognition of translated objects have provided inconsistent results, and it has been argued that this may be due to differences in the nature of the stimuli used, such as their spatial frequency or shape. Recognition of objects at different scales and positions can arise trivially from previous experience and memorization of several transformed images of an object. It is likely, however, that we can also recognize specific objects seen only once at different positions and scales. To characterize this "single-shot" invariance, we use letter-like stimuli that are unknown to the tested human subjects, along with known letters for comparison. We analyze recognition performance in a same/different task, presenting the characters on a display for 33 ms at different scales and positions. This allows us to compare recognition performance with familiar and unfamiliar letters that are of similar nature in terms of spatial frequency and shape. Our data suggest that the feedforward path of the human visual system computes a representation of objects that is scale invariant. We also observe limited position invariance, the extent of which increases linearly with scale. Recognition accuracy is higher when the unknown characters are first shown at the fovea and tested at the periphery, or shown and tested on opposite sides of the visual field, than when first shown at the periphery and tested at the fovea.
Meeting abstract presented at VSS 2017