Abstract
We propose a dynamical model of 2D motion integration in which the diffusion of motion information is modulated by luminance information. The model incorporates feedforward, feedback, and inhibitory lateral connections and is inspired by the neural architecture and dynamics of motion-processing cortical areas in the primate (V1, V2, and MT). The first aspect of our contribution is a new anisotropic integration model in which motion diffusion, carried by recurrent connectivity between cortical areas operating at different spatial scales, is gated by the luminance distribution in the image. This simple model offers a competitive alternative to less parsimonious models built on large sets of cortical layers implementing specific form or motion feature detectors. A second aspect, often ignored by 2D motion integration models, is that the biological computation of global motion is highly dynamic. When presented with simple lines, plaids, or barber-pole stimuli, the perceived direction reported by human observers, as well as the response of motion-sensitive neurons, shifts over time. We demonstrate that the proposed approach produces results compatible with several psychophysical experiments, concerning not only the resulting global motion percept but also the oculomotor dynamics. Our model also explains several properties of MT neurons regarding the dynamics of selective motion integration, a fundamental property of object motion disambiguation and segmentation. As a whole, we present an improved motion integration model that is numerically tractable and reproduces key aspects of cortical motion integration in the primate.
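To make the luminance-gating idea concrete, the sketch below is a minimal, hypothetical illustration in the spirit of Perona-Malik anisotropic diffusion: motion information spreads between neighbouring locations, with local conductance suppressed across luminance edges so that diffusion is confined within regions of similar luminance. All function names, parameters, and values here are assumptions for illustration only; the model described above additionally involves recurrent connectivity between cortical areas at different spatial scales, which this sketch omits.

```python
import numpy as np

def luminance_gated_diffusion(v, lum, n_iter=50, dt=0.2, kappa=0.1):
    """Illustrative luminance-gated diffusion of one component of a
    motion field `v` (2D array), gated by a luminance image `lum`
    of the same shape. Apply separately to each motion component.
    All parameter values are hypothetical."""
    for _ in range(n_iter):
        # Forward differences of the motion field toward the four neighbours.
        dN = np.roll(v, -1, axis=0) - v
        dS = np.roll(v,  1, axis=0) - v
        dE = np.roll(v, -1, axis=1) - v
        dW = np.roll(v,  1, axis=1) - v
        # Conductance gates: close to 0 where the luminance difference is
        # large, so motion information does not diffuse across luminance edges.
        gN = np.exp(-((np.roll(lum, -1, axis=0) - lum) / kappa) ** 2)
        gS = np.exp(-((np.roll(lum,  1, axis=0) - lum) / kappa) ** 2)
        gE = np.exp(-((np.roll(lum, -1, axis=1) - lum) / kappa) ** 2)
        gW = np.exp(-((np.roll(lum,  1, axis=1) - lum) / kappa) ** 2)
        # Explicit update: gated spread of motion information.
        v = v + dt * (gN * dN + gS * dS + gE * dE + gW * dW)
    return v
```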
This research work has received funding from the European Community's Seventh Framework Program under grant agreement N°215866, project SEARISE, and the Région Provence-Alpes-Côte d'Azur. GSM was supported by the CNRS, the European Community (FACETS, IST-FET, VIth Framework, N°025213), and the Agence Nationale de la Recherche (ANR, NATSTATS).