September 2011, Volume 11, Issue 11
Vision Sciences Society Annual Meeting Abstract
Motion transparency and spatial integration size – a modeling study
Author Affiliations
  • Florian Raudies
    Department of Cognitive and Neural Systems, Boston University, USA
  • Ennio Mingolla
    Department of Cognitive and Neural Systems, Boston University, USA
  • Heiko Neumann
    Institute of Neural Information Processing, Ulm University, Germany
Journal of Vision, September 2011, Vol. 11, 751.
Problem. Motion transparency is the perception of motion in more than one direction at the same spatial location, as in random dot kinematograms (RDKs) whose dot groups move differently. Direction or speed differences between the dot groups control the appearance of transparency. How do factors such as stimulus size or motion coherence (via spatial integration) affect the perception of transparency?

Methods. We propose a computational model of visual motion detection and integration. Model V1 detects local motions, which are spatially integrated in model MT. We define velocity-sensitive MT cells with local on-center/off-surround selectivity in direction and speed, with parameters fitted to monkey MT data (Treue et al., Nature Neurosci., 3, 2000). Model MST integrates signals from model MT to achieve selectivity for motion patterns. Top-down signals from MST to MT and from MT to V1 disambiguate and stabilize local motion estimates. Model area LIP temporally integrates motion signals and applies thresholds to simulate a decision.

Results. In computer simulations of two 2AFC experiments, we measured the model's perceptual thresholds for transparency. Two "bull's eye" configurations of RDKs appear on either side of a simulated fixation point. One central disk contains transparent motion (an overlay of clockwise and counterclockwise rotations) and is surrounded by an annulus of random flicker; the other contains only opaque motion, with a comparable annulus. In one set of simulations, the radii of the central disks were varied and detection rates were calculated. A second experiment varied the motion coherence of dots in disks of constant radius. Results from both experiments indicate that a minimum spatial integration of motion directions and speeds is necessary to distinguish motion transparency from opaque motion, by disambiguating local motions in MT and global motion patterns in MST.
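As a loose illustration of the kind of MT-stage computation described above (not the authors' fitted implementation), an on-center/off-surround interaction in direction space can be sketched as follows. The tuning widths, surround weight, and the peak-counting readout are all illustrative assumptions; the point is only that a bimodal population profile distinguishes transparent from opaque motion:

```python
import numpy as np

def mt_population(direction_hist, centers, sigma_c=30.0, sigma_s=90.0, w_s=0.5):
    """Responses of direction-tuned units with a center-surround
    interaction in direction space (all parameters illustrative).

    direction_hist: (dirs, weights) arrays of local motion directions
                    in degrees and their weights.
    centers:        preferred directions of the units (degrees).
    """
    dirs, weights = direction_hist
    # circular distance between each preference and each input direction
    d = np.abs(((centers[:, None] - dirs[None, :]) + 180) % 360 - 180)
    center = np.exp(-d**2 / (2 * sigma_c**2))      # narrow excitatory center
    surround = np.exp(-d**2 / (2 * sigma_s**2))    # broad inhibitory surround
    resp = (center - w_s * surround) @ weights
    return np.maximum(resp, 0.0)                   # half-wave rectification

def n_peaks(r):
    """Count local maxima of the (circular) population profile."""
    return int(np.sum((r > np.roll(r, 1)) & (r > np.roll(r, -1)) & (r > 0)))

centers = np.arange(0, 360, 15.0)

# Opaque motion: one shared direction; transparent: two opposed directions.
opaque = (np.array([90.0]), np.array([1.0]))
transparent = (np.array([90.0, 270.0]), np.array([0.5, 0.5]))

r_opq = mt_population(opaque, centers)      # unimodal profile: one peak
r_trn = mt_population(transparent, centers) # bimodal profile: two peaks
```

The surround suppresses broadly distributed (incoherent) input while leaving well-separated direction peaks intact, which is one way a downstream stage could read out transparency.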
The model's predictions are testable by human psychophysics.
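The LIP stage (temporal integration plus a threshold) can likewise be sketched as a leaky accumulator. This is a generic evidence-accumulation sketch, not the model's actual readout; the leak, threshold, and evidence levels are assumed values:

```python
import numpy as np

def lip_decide(evidence, leak=0.1, threshold=5.0):
    """Leaky temporal integration of a motion-evidence signal.

    Returns the frame index at which the accumulator crosses the
    decision threshold, or None if no decision is reached.
    (leak and threshold are illustrative, not fitted parameters.)
    """
    a = 0.0
    for t, e in enumerate(evidence):
        a = (1 - leak) * a + e      # leaky accumulation over frames
        if a >= threshold:
            return t                # decision reached at frame t
    return None                     # no decision within the trial

rng = np.random.default_rng(0)

# Strong transparency evidence crosses the bound; weak evidence,
# whose accumulator asymptote (mean/leak = 2) stays below threshold, does not.
strong = 1.0 + 0.1 * rng.standard_normal(100)
weak = 0.2 + 0.1 * rng.standard_normal(100)
```

In a 2AFC setting, the interval whose accumulator crosses the bound first (or reaches the higher value) would determine the simulated response.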

Supported in part by CELEST (NSF SBE-0354378 and OMA-0835976) and the EU 7th Framework Programme, ICT project no. 215866 (SEARISE).
