September 2018
Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
One shot learning of novel object classes
Author Affiliations
  • Yaniv Morgenstern
    Justus-Liebig-University Giessen
  • Filipp Schmidt
    Justus-Liebig-University Giessen
  • Roland Fleming
    Justus-Liebig-University Giessen
Journal of Vision, September 2018, Vol. 18, 556. https://doi.org/10.1167/18.10.556
Abstract

One of our most remarkable visual abilities is the capacity to learn novel object classes from very little data. Given just a single novel object, we usually have certain intuitions about what other class members are likely to look like. Such 'one-shot learning' presumably leverages knowledge from previously learned objects, specifically: (1) by providing a feature space for representing shapes and their relationships, and (2) by learning how classes are typically distributed in this space. To test this, we synthesized 20 shape classes based on unique, unfamiliar 2D base shapes. Novel exemplars were created by transforming each base shape's skeletal representation to produce new shapes with limbs varying in length, width, position, and orientation. Using crowdsourcing, we then obtained responses from 500 human observers on 20 trials (1 response for each base shape). On each trial, observers judged whether a target shape was in the same class as 1 or 16 context shape(s) (transformed samples with similar characteristics). Targets came from the same class as the context shape(s) but varied in their similarity to them. The results reveal that participants perceived objects as belonging to the same class only when they differed from one another by a limited amount, confirming that observers have restricted generalization gradients around completely novel stimuli. We then compared human responses to a computational model in which the similarity between target and context shapes was computed from >100 image-computable shape descriptors (e.g., area, compactness, shape context, Fourier descriptors). The findings reveal a surprisingly consistent distance around each base shape in the feature space, beyond which objects are deemed to belong to different classes. Thus, the model predicts one-shot learning remarkably well with only one free parameter, describing how different objects in the same class tend to be from one another.

Meeting abstract presented at VSS 2018
