August 2014
Volume 14, Issue 10
Vision Sciences Society Annual Meeting Abstract  |   August 2014
Selectivity for non-accidental properties emerges from learning object transformation sequences
Author Affiliations
  • Sarah Parker
    Department of Cognitive, Linguistic and Psychological Sciences, Brown University
  • David Reichert
    Department of Cognitive, Linguistic and Psychological Sciences, Brown University
  • Thomas Serre
    Department of Cognitive, Linguistic and Psychological Sciences, Brown University
Journal of Vision August 2014, Vol.14, 910. doi:10.1167/14.10.910

Citation: Sarah Parker, David Reichert, Thomas Serre; Selectivity for non-accidental properties emerges from learning object transformation sequences. Journal of Vision 2014;14(10):910. doi:10.1167/14.10.910.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Behavioral and electrophysiological studies of shape processing have demonstrated greater sensitivity to differences in non-accidental properties (NAPs) than in metric properties (MPs; see Biederman, 2007, for a review). NAPs correspond to image properties that are invariant to changes in out-of-plane rotation (e.g., straight vs. curved contours) and are distinguished from MPs, which can change continuously with variations in depth orientation (e.g., aspect ratio, degree of curvature, etc.). Previous work has shown that such sensitivity is incompatible with hierarchical models of object recognition such as HMAX (Riesenhuber & Poggio, 1999; Serre et al., 2007), which assume that shape processing is based on broadly tuned neuronal populations with distributed, symmetric, bell-shaped tuning: shape-tuned units in these models are modulated at least as much by differences in MPs as in NAPs (Amir, Biederman & Hayworth, 2012). Here we test the hypothesis that simple mechanisms for learning transformation sequences may increase sensitivity to differences in NAPs vs. MPs in HMAX. We created a database of video sequences of objects rotated in depth in an attempt to mimic the sequences viewed during object manipulation by infants at early developmental stages. We adapted a version of slow feature analysis (Wiskott & Sejnowski, 2002) to learning in HMAX: unit responses in intermediate processing stages were scaled according to how stable they remained during the presentation of common objects undergoing various transformations. We show that this simple learning rule leads to shape tuning in higher stages with greater sensitivity to differences in NAPs vs. MPs, consistent with monkey IT data (Kayaert et al., 2003). Overall, we propose a simple learning mechanism that extends hierarchical models of object recognition to exhibit greater sensitivity for NAPs than MPs, as observed both behaviorally and electrophysiologically.
Our results suggest that greater sensitivity for NAPs may arise from unsupervised learning over transformation sequences of common objects.
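The core learning rule described above — scaling intermediate unit responses by how stable they remain across frames of an object transformation sequence — can be sketched as follows. This is a minimal illustration of a slowness-based reweighting in the spirit of slow feature analysis, not the authors' implementation; the function names, the variance normalization, and the exact form of the weighting are assumptions.

```python
import numpy as np

def slowness_weights(responses, eps=1e-8):
    """Compute one stability weight per unit.

    responses: (T, N) array of N unit activations over T frames of a
    transformation sequence. Units whose responses change slowly across
    frames receive larger weights; fast-varying units are suppressed.
    """
    # Mean squared temporal derivative per unit: the SFA "slowness" measure
    delta = np.diff(responses, axis=0)
    slowness = np.mean(delta ** 2, axis=0)
    # Normalize by response variance so near-constant units are not
    # trivially favored over informative but slowly varying ones
    var = np.var(responses, axis=0) + eps
    return 1.0 / (slowness / var + eps)

def reweight(responses, weights):
    """Scale each unit's response by its (normalized) stability weight."""
    return responses * (weights / weights.max())

# Toy usage: a slowly varying unit vs. a rapidly varying one
T = 100
t = np.linspace(0.0, 1.0, T)
resp = np.stack([np.sin(2 * np.pi * t),        # slow unit
                 np.sin(40 * np.pi * t)], axis=1)  # fast unit
w = slowness_weights(resp)
# The slow unit receives the larger weight, so after reweighting it
# dominates the population response passed to the next stage.
scaled = reweight(resp, w)
```

In a hierarchical model, weights of this kind would be learned from sequences of common objects rotating in depth and then applied multiplicatively to the intermediate-stage responses before pooling, biasing higher stages toward features that are stable under the transformation (i.e., NAP-like features).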

Meeting abstract presented at VSS 2014
