September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2017
On the Human Visual System Invariance to Translation and Scale
Author Affiliations
  • Yena Han
    Center for Brains, Minds and Machines, Massachusetts Institute of Technology
  • Gemma Roig
    Center for Brains, Minds and Machines, Massachusetts Institute of Technology
    LCSL, Istituto Italiano di Tecnologia at MIT
  • Gadi Geiger
    Center for Brains, Minds and Machines, Massachusetts Institute of Technology
  • Tomaso Poggio
    Center for Brains, Minds and Machines, Massachusetts Institute of Technology
Journal of Vision August 2017, Vol.17, 471. doi:https://doi.org/10.1167/17.10.471

Citation: Yena Han, Gemma Roig, Gadi Geiger, Tomaso Poggio; On the Human Visual System Invariance to Translation and Scale. Journal of Vision 2017;17(10):471. https://doi.org/10.1167/17.10.471.

Abstract

Humans are able to recognize objects presented at different scales and positions. Previous behavioral studies of object recognition under translation have reported inconsistent results, and it has been argued that these discrepancies may stem from differences in the stimuli used, such as their spatial frequency content or shape. Recognition of objects at different scales and positions can occur trivially through previous experience with, and memorization of, several transformed images of the object. It is likely, however, that we can also recognize specific objects seen only once at new positions and scales. To characterize this "single-shot" invariance, we use letter-like stimuli that are unfamiliar to the tested subjects, together with known letters for comparison. We measure recognition performance in a same/different task in which the characters are presented on a display for 33 ms at different scales and positions. This allows us to compare recognition performance for familiar and unfamiliar letters that are matched in spatial frequency and shape. Our data suggest that the feedforward path of the human visual system computes a representation of objects that is scale invariant. We also observe limited position invariance, whose extent increases linearly with scale. Recognition accuracy is higher when the unknown characters are first shown at the fovea and tested at the periphery, or shown and tested at opposite sides of the visual field, than when they are first shown at the periphery and tested at the fovea.
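
A minimal sketch, in Python, of the same/different trial structure described in the abstract. The 33 ms presentation time is taken from the abstract; the eccentricity and scale values, the same/different balance, and all names defined here (Trial, make_trials) are hypothetical illustrations, not the authors' actual protocol.

# Hypothetical sketch of a same/different trial sequence. Only the 33 ms
# presentation duration comes from the abstract; all other values are
# illustrative assumptions.
import random
from dataclasses import dataclass

PRESENTATION_MS = 33  # per-character display time reported in the abstract


@dataclass
class Trial:
    reference_ecc_deg: float  # eccentricity at which the reference character appears
    test_ecc_deg: float       # eccentricity at which the test character appears
    reference_scale: float    # relative size of the reference character
    test_scale: float         # relative size of the test character
    same: bool                # whether the two characters are identical


def make_trials(n_trials, eccentricities=(0.0, 5.0, 10.0), scales=(1.0, 2.0, 4.0)):
    """Generate trials crossing position and scale (illustrative values only)."""
    return [
        Trial(
            reference_ecc_deg=random.choice(eccentricities),
            test_ecc_deg=random.choice(eccentricities),
            reference_scale=random.choice(scales),
            test_scale=random.choice(scales),
            same=random.random() < 0.5,
        )
        for _ in range(n_trials)
    ]


if __name__ == "__main__":
    for trial in make_trials(3):
        print(trial, f"(each character shown for {PRESENTATION_MS} ms)")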

Meeting abstract presented at VSS 2017
