Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2018
Neural model for the recognition of agency and interaction from motion
Author Affiliations
  • Mohammad Hovaidi Adestani
    Section Computational Sensomotorics, Department of Cognitive Neurology, CIN&HIH, University Clinic Tuebingen, Germany; IMPRS for Cognitive and Systems Neuroscience, Univ. of Tuebingen, Germany
  • Nitin Saini
    Section Computational Sensomotorics, Department of Cognitive Neurology, CIN&HIH, University Clinic Tuebingen, Germany; IMPRS for Cognitive and Systems Neuroscience, Univ. of Tuebingen, Germany
  • Martin Giese
    Section Computational Sensomotorics, Department of Cognitive Neurology, CIN&HIH, University Clinic Tuebingen, Germany
Journal of Vision September 2018, Vol.18, 430. doi:https://doi.org/10.1167/18.10.430

Citation: Mohammad Hovaidi Adestani, Nitin Saini, Martin Giese; Neural model for the recognition of agency and interaction from motion. Journal of Vision 2018;18(10):430. https://doi.org/10.1167/18.10.430.

Abstract

INTRODUCTION: Humans are highly skilled at interpreting intent or social behavior from strongly impoverished stimuli (Heider & Simmel, 1944). It has been hypothesized that such functions might be based on high-level cognitive processes, such as probabilistic reasoning. We demonstrate that several classical observations on animacy and interaction perception can be accounted for by simple, physiologically plausible neural mechanisms, using an appropriately extended hierarchical (deep) model of the visual pathway.

METHODS: Building on classical biologically inspired models for object and action perception (Riesenhuber & Poggio, 1999; Giese & Poggio, 2003), we propose a learning-based hierarchical neural network model that analyzes shape and motion features from video sequences. The model has a largely simple feed-forward architecture and comprises two processing streams, for form and for object motion, in a retinal frame of reference. With this model we aim to account simultaneously for a number of experimental observations on the perception of animacy and social interaction.

RESULTS: Given input video sequences, the model reproduces the results of Tremoulet and Feldman (2000) on the dependence of perceived animacy on changes in the speed and direction of moving objects and on the alignment of motion and body axis, as well as on the influence of contact with static barriers along the motion path (Hernik et al., 2013). In addition, it accounts for results on the detection of chasing behavior (Scholl & McCarthy, 2012) and of fighting (Heider & Simmel, 1944).

CONCLUSION: Since the model accounts simultaneously for a variety of effects related to animacy and interaction perception using physiologically plausible mechanisms, without requiring complex computational inference or optimization processes, it might serve as a starting point for the search for neurons that form the core circuit of the perceptual processing of animacy and interaction.
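
The two-stream, largely feed-forward architecture described in METHODS can be illustrated with a minimal sketch. The code below is not the authors' implementation: the oriented filters, pooling stages, and frame-difference motion signal are illustrative stand-ins for the model's learned features, loosely in the spirit of HMAX-style hierarchies (Riesenhuber & Poggio, 1999; Giese & Poggio, 2003), and all function and parameter names are hypothetical.

```python
# Schematic sketch of a two-stream (form / object-motion) feed-forward hierarchy.
# NOT the published model: filters, pooling, and readout are placeholder assumptions.
import numpy as np

def gabor_bank(size=9, n_orient=4):
    """Oriented Gabor-like filters as a stand-in for learned form features."""
    ys, xs = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    filters = []
    for k in range(n_orient):
        theta = np.pi * k / n_orient
        xr = xs * np.cos(theta) + ys * np.sin(theta)
        g = np.exp(-(xs**2 + ys**2) / (2 * 2.0**2)) * np.cos(2 * np.pi * xr / 4.0)
        filters.append(g - g.mean())
    return filters

def filter2d(img, kern):
    """Valid-mode 2-D filtering (cross-correlation), kept dependency-free."""
    kh, kw = kern.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def max_pool(x, s=4):
    """Max pooling, providing position tolerance as in HMAX-style models."""
    h, w = (x.shape[0] // s) * s, (x.shape[1] // s) * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def form_stream(frame, filters):
    """Form pathway: oriented filtering plus pooling, in a retinal frame."""
    return np.concatenate([max_pool(np.abs(filter2d(frame, f))).ravel()
                           for f in filters])

def motion_stream(prev_frame, frame):
    """Motion pathway: frame difference as a crude object-motion signal."""
    return max_pool(frame - prev_frame).ravel()

def model_features(video, filters):
    """Per-frame feature vectors that downstream units (e.g. detectors for
    speed/direction changes or agent-agent relations) could read out."""
    feats = []
    for t in range(1, len(video)):
        feats.append(np.concatenate([form_stream(video[t], filters),
                                     motion_stream(video[t - 1], video[t])]))
    return np.stack(feats)

# Toy usage: a small moving square, as in Heider-Simmel-like displays.
video = np.zeros((10, 32, 32))
for t in range(10):
    video[t, 10:14, 2 + 2 * t:6 + 2 * t] = 1.0
features = model_features(video, gabor_bank())
print(features.shape)  # (9, n_features): one feature vector per frame pair
```

In a sketch of this kind, detectors for the animacy and interaction effects listed under RESULTS would be trained on top of such per-frame feature vectors; the abstract itself does not specify those stages, so this example only illustrates the general two-stream feed-forward layout.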

Meeting abstract presented at VSS 2018
