July 2013
Volume 13, Issue 9
Vision Sciences Society Annual Meeting Abstract  |   July 2013
Combining form and motion - an integrated approach for learning biological motion representations
Author Affiliations
  • Georg Layher
    Institute of Neural Information Processing, Ulm University
  • Heiko Neumann
    Institute of Neural Information Processing, Ulm University
Journal of Vision July 2013, Vol. 13, 194. doi:10.1167/13.9.194
Abstract

Recognition of biological motion in primates appears to be effortless even in the case of impoverished input, such as point-light stimuli (PLS; no form; Johansson, Perc. & Psych., 1971) or static implied-motion stimuli (no motion; Kourtzi & Kanwisher, J. Cogn. Neurosci., 2000). Recent investigations (Peuskens et al., Eur. J. Neurosci., 2005; Giese & Poggio, Nat. Rev. Neurosci., 2003) support the notion that biological motion is processed in parallel, largely independent form and motion pathways, which are integrated at the intermediate cortical level of STS. None of these models, however, explains how missing information in one processing channel can be substituted by the complementary channel and enhance neural activation. We propose a single integrated model that consists of parallel form and motion processing pathways but incorporates an activity transfer between them. Prototypical form and motion pattern representations (IT/MST) are established using a competitive Hebbian learning scheme. Motion-form interaction during learning enables an automatic selection of articulated postures. Convergent, temporally correlated input to sequence-selective cells in STS is learned through combined bottom-up and top-down learning (Layher et al., LNCS 7552, 2012). The top-down weights strengthen feedback prediction signals that allow STS neurons to prime afferent cells with expected spatio-temporal signatures. Simulation results obtained with the same previously learned representations are shown for both implied motion and PLS. Responses of form prototypes to static articulated images drive STS cells which, in turn, send feedback signals to the corresponding motion prototype representations, offering a possible explanation for the increased fMRI responses in human MT+ to implied-motion displays. Likewise, motion pattern prototypes probed with PLS reach activation levels comparable to fully textured animated motion sequences. Here, feedback from STS enhances the activities of corresponding prototype cells in area IT, possibly explaining the activation of form templates during PLS presentation (Lange & Lappe, J. Neurosci., 2006).
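The two mechanisms central to the abstract can be illustrated with a minimal, hypothetical sketch (not the authors' implementation): a winner-take-all competitive Hebbian update, in which only the best-matching prototype is adapted toward the current input, and a multiplicative feedback rule, in which a higher-level (STS-like) signal enhances only those lower-level cells that already receive feedforward drive. All function names, the learning rate, and the feedback gain are illustrative assumptions.

```python
import numpy as np

def competitive_hebbian_step(W, x, lr=0.1):
    # Winner-take-all competitive Hebbian update: only the row of W
    # (one prototype per row) best matching input x moves toward x,
    # so prototypes gradually specialize on distinct input patterns.
    x = x / (np.linalg.norm(x) + 1e-12)              # normalize input
    winner = int(np.argmax(W @ x))                   # competition stage
    W[winner] += lr * (x - W[winner])                # Hebbian adaptation
    W[winner] /= np.linalg.norm(W[winner]) + 1e-12   # keep weights bounded
    return winner

def feedback_enhanced_response(feedforward, feedback, gain=2.0):
    # Modulatory (multiplicative) top-down feedback: feedback can only
    # enhance cells with nonzero feedforward drive; with zero feedforward
    # input the response stays zero, so feedback primes but never creates
    # activity on its own.
    return feedforward * (1.0 + gain * feedback)
```

In this toy scheme, the feedback rule captures the qualitative behavior described in the abstract: a prototype cell driven weakly by impoverished input (e.g. a PLS frame) can reach a substantially higher activation when a matching STS-level prediction feeds back onto it.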

Meeting abstract presented at VSS 2013
