Émilien Tlapale, Guillaume S. Masson, Pierre Kornprobst; A dynamical neural model of motion integration. Journal of Vision 2010;10(7):843. doi: https://doi.org/10.1167/10.7.843.
We propose a dynamical model of 2D motion integration in which the diffusion of motion information is modulated by luminance information. The model incorporates feedforward, feedback, and inhibitory lateral connections and is inspired by the neural architecture and dynamics of motion-processing cortical areas in the primate (V1, V2, and MT). The first aspect of our contribution is a new anisotropic integration model in which motion diffusion, implemented through recurrent connectivity between cortical areas operating at different spatial scales, is gated by the luminance distribution in the image. This simple model offers a competitive alternative to less parsimonious models based on a large set of cortical layers implementing detectors for specific form or motion features. A second aspect, often ignored by 2D motion integration models, is that the biological computation of global motion is highly dynamical: when presented with simple lines, plaids, or barber-pole stimuli, the perceived direction reported by human observers, as well as the response of motion-sensitive neurons, shifts over time. We demonstrate that the proposed approach produces results compatible with several psychophysical experiments, concerning not only the resulting global motion percept but also the oculomotor dynamics. Our model also explains several properties of MT neurons regarding the dynamics of selective motion integration, a fundamental property of object motion disambiguation and segmentation. As a whole, we present an improved motion integration model that is numerically tractable and reproduces key aspects of cortical motion integration in primates.
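The core idea of luminance-gated diffusion can be illustrated with a minimal sketch: a velocity field is smoothed iteratively, but the coupling between neighbouring locations is attenuated wherever luminance changes sharply, so motion information spreads within surfaces of similar luminance and is blocked at luminance edges. This is a simplified illustration of the general principle only, not the paper's actual model; all function names, parameters (`steps`, `rate`, `sigma`), and the Gaussian gating rule are hypothetical choices.

```python
import numpy as np

def luminance_gated_diffusion(v, lum, steps=50, rate=0.2, sigma=0.1):
    """Illustrative luminance-gated diffusion of a velocity field.

    v   : (H, W, 2) array of local motion estimates (vx, vy)
    lum : (H, W) luminance image that gates the diffusion
    All parameters are hypothetical, chosen for this sketch only.
    """
    for _ in range(steps):
        acc = np.zeros_like(v)            # gated sum of neighbour motions
        wsum = np.zeros(v.shape[:2])      # sum of gating weights
        # 4-connected neighbourhood via array shifts
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            v_n = np.roll(v, shift, axis=axis)
            lum_n = np.roll(lum, shift, axis=axis)
            # gate: weight drops to ~0 across a luminance edge
            w = np.exp(-((lum - lum_n) ** 2) / (2 * sigma ** 2))
            acc += w[..., None] * v_n
            wsum += w
        # relax toward the gated neighbourhood average
        v = (1 - rate) * v + rate * acc / np.maximum(wsum, 1e-9)[..., None]
    return v
```

With two regions of different luminance moving in different directions, the gate keeps each region's motion coherent instead of blending them across the edge, which is the qualitative behaviour the abstract attributes to luminance-modulated diffusion.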