Vision Sciences Society Annual Meeting Abstract  |   September 2011
Intelligent motion integration across multiple stimulus dimensions
Author Affiliations
  • Shin'ya Nishida
    NTT Communication Science Laboratories
  • Kaoru Amano
    The University of Tokyo
  • Kazushi Maruya
    NTT
  • Mark Edwards
    Australian National University
  • David R. Badcock
    University of Western Australia
Journal of Vision September 2011, Vol. 11, 34. https://doi.org/10.1167/11.11.34
Abstract

In human visual motion processing, image motion is first detected by one-dimensional (1D), spatially local, direction-selective neural sensors. Each sensor is tuned to a given combination of position, orientation, spatial frequency, and feature type (e.g., first-order or second-order). To recover the true two-dimensional (2D), global direction of moving objects (i.e., to solve the aperture problem), the visual system integrates motion signals across orientation, across space, and possibly across the other dimensions. We investigated this multi-dimensional motion integration process using global motion stimuli composed of numerous randomly oriented Gabor (1D) or plaid (2D) elements (to examine integration across space, orientation, and spatial frequency), as well as diamond-shaped Gabor quartets that underwent rigid global circular translation (to examine integration across spatial frequency and signal type). We found that the visual system adaptively switches between two spatial integration strategies, spatial pooling of 1D motion signals and spatial pooling of 2D motion signals, depending on the ambiguity of the local motion signals. Magnetoencephalography (MEG) showed correlated neural activity in hMT+ for both 1D pooling and 2D pooling. Our data also suggest that the visual system can integrate 1D motion signals of different spatial frequencies and different feature types, but only when form conditions (e.g., contour continuity) support grouping of the local motions. These findings indicate that motion integration is a complex and intelligent computation, which is presumably why we can properly estimate motion flow in a wide variety of natural scenes.
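
The aperture problem and the 1D-pooling strategy described above can be made concrete with a small numerical sketch. The Python snippet below is not from the study; the sensor count, global velocity, and random orientations are illustrative assumptions. It shows that each randomly oriented 1D sensor measures only the velocity component normal to its carrier orientation, that pooling the constraints n_i · v = s_i across orientations in a least-squares (intersection-of-constraints) sense recovers the true global 2D velocity, and that a naive vector average of the component motions is biased (roughly half the true speed for uniformly distributed orientations).

```python
import numpy as np

def ioc_pool(normals, speeds):
    """Recover a global 2D velocity from pooled 1D motion constraints.

    Each 1D sensor with unit normal n_i only measures the normal speed
    s_i = n_i . v of the true velocity v (the aperture problem). Solving
    the stacked constraints in the least-squares sense gives the
    intersection-of-constraints (IOC) estimate of v.
    """
    A = np.asarray(normals, dtype=float)   # (k, 2) unit normals of 1D sensors
    b = np.asarray(speeds, dtype=float)    # (k,) measured normal speeds
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v

def vector_average(normals, speeds):
    """Naive pooling: average the component-motion vectors s_i * n_i.

    For uniformly distributed orientations this underestimates speed by
    about a factor of two, illustrating why simple averaging of 1D
    signals cannot by itself solve the aperture problem.
    """
    A = np.asarray(normals, dtype=float)
    b = np.asarray(speeds, dtype=float)
    return (A * b[:, None]).mean(axis=0)

# Simulate many randomly oriented 1D (Gabor-like) elements all carried
# by one rigid global translation (hypothetical values for illustration).
rng = np.random.default_rng(0)
v_true = np.array([1.5, -0.5])                 # assumed global 2D velocity
phi = rng.uniform(0.0, np.pi, size=50)         # random carrier orientations
normals = np.stack([np.cos(phi), np.sin(phi)], axis=1)
speeds = normals @ v_true                      # what each 1D sensor can see

print("IOC estimate  :", ioc_pool(normals, speeds))        # ~ [1.5, -0.5]
print("Vector average:", vector_average(normals, speeds))  # biased, ~ v_true / 2
```

With two or more distinct orientations the IOC solution is unique, which is why pooling across orientation (and space) suffices to disambiguate the local 1D signals in the noiseless case sketched here.
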
