Vision Sciences Society Annual Meeting Abstract  |  October 2003, Volume 3, Issue 9
Object motion from cortical form-motion interaction between V1, V2, MT and MST
Authors
  • Julia Berzhanskaya, Department of Cognitive and Neural Systems, Boston University
  • Stephen Grossberg
  • Ennio Mingolla
Journal of Vision October 2003, Vol.3, 796. doi:https://doi.org/10.1167/3.9.796
      Julia Berzhanskaya, Stephen Grossberg, Ennio Mingolla; Object motion from cortical form-motion interaction between V1, V2, MT and MST. Journal of Vision 2003;3(9):796. https://doi.org/10.1167/3.9.796.

To process object motion in cluttered environments, visual cortex has to solve multiple problems, such as aperture ambiguity, integration across apertures, and motion segmentation. A cortical model of motion integration and segmentation suggests that form and motion processing streams must interact to generate coherent percepts of object motion from spatially distributed and ambiguous visual information. An earlier model (Chey et al., 1997; Grossberg et al., 2001) based on properties of V1, V2, MT, and MST was used to solve both the motion aperture and correspondence problems and to explain motion capture, the barber-pole illusion, plaid motion, and motion transparency. Here the model is further developed to explain more complex percepts, such as the motion of rotating shapes observed through apertures (Lorenceau and Alais, 2001), the chopsticks illusion (Anstis, 1990), and the formation of illusory form boundaries from motion signals. First, form-based figure-ground properties, such as occlusion, influence which motion signals determine the percept. For invisible apertures, a line's intrinsic terminators create veridical but sparse feature-tracking signals. These signals can be amplified before they propagate across positions. For visible apertures, motion of extrinsic line terminators provides weak competition to ambiguous motion signals within line interiors. Spatially anisotropic directional grouping filters integrate motion signals over space and determine the global motion percept. The model thereby explains the Lorenceau and Alais results without appealing to a “veto” on motion integration. Second, top-down MT-to-V1 signals initiate the separation of ambiguous overlapping moving forms when static information required for separation in depth is not available. Finally, we suggest how mechanisms of motion segmentation can lead to kinetic boundary sensitivity in V2 but not MT (Marcar et al., 2000).
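The interplay of aperture ambiguity, sparse veridical terminator signals, and spatial pooling described above can be illustrated with a minimal numerical sketch. This is not the authors' model: the geometry, detector counts, and weights below are illustrative assumptions, intended only to show why amplifying sparse feature-tracking signals lets them capture the many ambiguous interior signals.

```python
import math

# Illustrative sketch (not the published model; all parameters are assumptions).
# A line oriented at 45 degrees translates rightward. Detectors viewing the
# line's interior through an aperture sense only the motion component normal
# to the line (aperture ambiguity); the line's intrinsic terminators signal
# the true (veridical) direction, but there are few of them.

true_motion = (1.0, 0.0)  # rightward translation, 1 unit/frame
# Unit normal to a 45-degree line (points at 135 degrees)
normal = (math.cos(math.radians(135)), math.sin(math.radians(135)))

def normal_component(v, n):
    """Project motion v onto the line's unit normal n: the aperture-limited signal."""
    dot = v[0] * n[0] + v[1] * n[1]
    return (dot * n[0], dot * n[1])

interior = normal_component(true_motion, normal)  # ambiguous signal, (0.5, -0.5)
terminator = true_motion                          # veridical but sparse

# Pool signals over space, amplifying the sparse terminator signal so it can
# capture the many ambiguous interior signals (cf. feature tracking).
w_term, w_int, n_int = 20.0, 1.0, 10  # illustrative weights and detector count
pooled = tuple(
    (w_term * t + w_int * n_int * i) / (w_term + w_int * n_int)
    for t, i in zip(terminator, interior)
)

print("interior direction: %.1f deg" % math.degrees(math.atan2(interior[1], interior[0])))
print("pooled direction:   %.1f deg (true: 0.0 deg)" % math.degrees(math.atan2(pooled[1], pooled[0])))
```

With these weights, the interior signal alone points at -45 degrees (perpendicular to the line), while the pooled estimate is pulled close to the true 0-degree direction; reducing the terminator weight lets the ambiguous interior signals dominate instead, loosely analogous to the visible-aperture case in which extrinsic terminators compete only weakly.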

Berzhanskaya, J., Grossberg, S., & Mingolla, E. (2003). Object motion from cortical form-motion interaction between V1, V2, MT and MST [Abstract]. Journal of Vision, 3(9):796, 796a, http://journalofvision.org/3/9/796/, doi:10.1167/3.9.796. [CrossRef]
