Duje Tadin; Symposium Summary. Journal of Vision 2011;11(11):29. doi: https://doi.org/10.1167/11.11.29.
© ARVO (1962-2015); The Authors (2016-present)
Since Adelson and Movshon's seminal 1982 paper on the phenomenal coherence of moving patterns, a large literature has accumulated on how the visual system integrates local motion estimates to represent true object motion. Although this research topic can be traced back to the early 20th century, a number of key questions remain unanswered. Specifically, we still have an incomplete understanding of how ambiguous and unambiguous motions are integrated and how local motion estimates are grouped and segmented to represent global object motions. A key problem for motion perception involves establishing the appropriate balance between integration and segmentation of local motions. Local ambiguities require motion integration, while perception of moving objects requires motion segregation. These questions form the core theme for this workshop that includes both psychophysical (Tadin, Nishida, Badcock and Johnston) and neurophysiological research (Pack and Huang).
Presentations by Huang and Tadin will show that center-surround mechanisms play an important role in adaptively adjusting the balance between integration and segmentation. Huang reached this conclusion by studying area MT and the effects of unambiguous motion presented to the receptive field surround on the neural response to an ambiguous motion in the receptive field. Tadin reports that the degree of center-surround suppression increases with stimulus visibility, promoting motion segregation at high contrast and spatial summation at low contrast. More recently, Tadin investigated the neural correlates of center-surround interactions and their role in figure-ground segregation.
Understanding how we perceive natural motion stimuli requires an understanding of how the brain solves the aperture problem. Badcock showed that spatial vision plays an important role in solving this motion processing problem. Specifically, he showed that oriented motion streaks and textural cues play a role in early motion processing. Pack approached this question by recording single-cell responses at various stages along the dorsal pathway. Results with plaid stimuli show a tendency for increased motion integration that does not necessarily correlate with the perception of the stimulus. Data from local field potentials recorded simultaneously suggest that the visual system solves the aperture problem multiple times at different hierarchical stages, rather than serially.
Finally, Nishida and Johnston will report new insights into the integration of local motion estimates over space. Nishida developed a global Gabor array stimulus, which appears to cohere when the local speeds and orientations of the Gabors are consistent with a single global translation. He found that the visual system adopts different strategies for spatial pooling over ambiguous (Gabor) and unambiguous (plaid) array elements. Johnston investigated new strategies for combining local estimates, including the harmonic vector average, and has demonstrated coherence in expanding and rotating Gabor array displays, implying that only a few local interactions may be all that is required to solve the aperture problem in complex arrays.
The symposium will be of interest to faculty and students working on motion, who will benefit from an integrated survey of new approaches to a central current question in motion processing, and to a general audience interested in linking local and global processing in perceptual organization.