August 2010
Volume 10, Issue 7
Vision Sciences Society Annual Meeting Abstract | August 2010
From Motion to Object: How Visual Cortex Does Motion Vector Decomposition to Create Object-Centered Reference Frames
Author Affiliations
  • Jasmin Leveille
    Department of Cognitive and Neural Systems, Center for Adaptive Systems, and Center of Excellence for Learning in Education, Science, and Technology, Boston University
  • Stephen Grossberg
    Department of Cognitive and Neural Systems, Center for Adaptive Systems, and Center of Excellence for Learning in Education, Science, and Technology, Boston University
  • Massimiliano Versace
    Department of Cognitive and Neural Systems, Center for Adaptive Systems, and Center of Excellence for Learning in Education, Science, and Technology, Boston University
Journal of Vision August 2010, Vol.10, 803. doi:https://doi.org/10.1167/10.7.803
Jasmin Leveille, Stephen Grossberg, Massimiliano Versace; From Motion to Object: How Visual Cortex Does Motion Vector Decomposition to Create Object-Centered Reference Frames. Journal of Vision 2010;10(7):803. https://doi.org/10.1167/10.7.803.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

How do spatially disjoint and ambiguous local motion signals in multiple directions generate coherent and unambiguous representations of object motion? Various motion percepts have been shown to obey a rule of vector decomposition, in which global motion appears to be subtracted from the true motion path of localized stimulus components (Johansson, 1950). This results in striking percepts wherein objects and their parts are seen as moving relative to a common reference frame. While vector decomposition has been amply confirmed in a variety of experiments, no neural model has explained how it may occur in neural circuits. The current model shows how vector decomposition results from multiple-scale and multiple-depth interactions within and between the form and motion processing streams in V1-V2 and V1-MT. These interactions include form-to-motion interactions from V2 to MT that ensure that precise representations of object motion-in-depth can be computed, as demonstrated by the 3D Formotion model (e.g., Grossberg, Mingolla, and Viswanathan, 2001, Vis. Res.; Berzhanskaya, Grossberg, and Mingolla, 2007, Vis. Res.) and supported by recent neurophysiological data of Ponce, Lomber, and Born (2008, Nat. Neurosci.). The present work shows how these interactions also cause vector decomposition of moving targets; in particular, it shows how form grouping, form-to-motion capture, and figure-ground separation mechanisms may work together to simulate the classical Duncker (1929) and Johansson (1950) percepts of vector decomposition and coherent object motion in a frame of reference. Supported in part by CELEST, an NSF Science of Learning Center (SBE-0354378), and the SyNAPSE program of DARPA (HR001109-03-0001, HR001-09-C-0011).
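The vector-decomposition rule can be illustrated numerically with Johansson's classic rolling-wheel example (this sketch is not part of the neural model; the trajectory, point names, and step size are illustrative assumptions): a point on the rim of a rolling wheel traces a cycloid in retinal coordinates, yet observers see pure rotation about the hub once the common translatory motion is subtracted.

```python
import math

def cycloid_point(t, r=1.0, omega=1.0):
    """Retinal (world) position of a rim point on a rolling wheel:
    the hub's common translation plus rotation about the hub."""
    hub = (r * omega * t, r)  # hub translates at speed r*omega
    rel = (-r * math.sin(omega * t), -r * math.cos(omega * t))
    return (hub[0] + rel[0], hub[1] + rel[1]), hub

def decompose(t, dt=1e-3, r=1.0, omega=1.0):
    """Numerical motion vectors at time t: full point motion, common
    (hub) motion, and residual part motion = point - common."""
    p1, h1 = cycloid_point(t, r, omega)
    p2, h2 = cycloid_point(t + dt, r, omega)
    point_v = ((p2[0] - p1[0]) / dt, (p2[1] - p1[1]) / dt)
    common_v = ((h2[0] - h1[0]) / dt, (h2[1] - h1[1]) / dt)
    part_v = (point_v[0] - common_v[0], point_v[1] - common_v[1])
    return point_v, common_v, part_v

point_v, common_v, part_v = decompose(0.0)
# The residual ("perceived part") motion has constant speed r*omega,
# i.e., uniform rotation about the hub, even though the full retinal
# trajectory is a cycloid with time-varying speed.
print(point_v, common_v, part_v, math.hypot(*part_v))
```

Subtracting the common vector from each local motion vector is exactly the operation Johansson's displays suggest the visual system performs; the model summarized above proposes the cortical circuitry by which such a subtraction could emerge.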
