Abstract
The perceived position of objects is often distorted in the presence of motion. Examples include the flash-lag (Nijhawan, 1994) and flash-drag (Whitney and Cavanagh, 2000) effects and illusory boundary distortion (Anderson and Barth, 1999). A unified cortical model of motion integration and segmentation has been developed to clarify how brain mechanisms of form and motion processing interact to generate coherent percepts of object motion from spatially distributed and ambiguous visual information (Berzhanskaya, Grossberg and Mingolla, 2003). This model uses feedforward and feedback circuits involving areas V1, V2, MT and MST to solve both the motion aperture and correspondence problems and to explain data such as motion capture, the barberpole illusion, plaid motion, and the integration of object motion across apertures. Here, model mechanisms that previously explained the separation of ambiguous boundaries in depth (the chopsticks illusion) and motion transparency are extended to explain data on boundary distortion. Interactions between motion and form can occur at many levels. The model clarifies how MT-to-V1 feedback, as well as local V1 excitatory and inhibitory connections, interacts with other model mechanisms to explain these positional effects. We show that weak boundaries (briefly flashed, illusory, or low contrast) are susceptible to shifts caused by the motion projection to the form stream. Asymmetric and opponent-direction inhibition, used by visual circuits to enhance direction selectivity and feature tracking, can help explain the difference between leading- and trailing-edge position shifts (Whitney et al., 2003). Similar mechanisms can explain not only position shifts of stimuli perceived as static, but also illusory motion of static stimuli (motion capture and motion induction).
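To illustrate the qualitative idea that a motion signal fed back onto a weak form boundary can displace its apparent position, the following minimal one-dimensional sketch may help. It is not the authors' model: the Gaussian boundary, the spatially offset excitatory and opponent-direction inhibitory feedback kernels, and all gains and widths are illustrative assumptions chosen only to show that an asymmetric feedback profile shifts the peak of a weak boundary in the motion direction.

```python
import numpy as np

# Illustrative sketch (assumed parameters, not the published model):
# a weak 1-D boundary whose peak is displaced by asymmetric motion feedback.
x = np.arange(200.0)
sigma = 4.0
boundary = 0.2 * np.exp(-(x - 100.0) ** 2 / (2 * sigma ** 2))  # weak boundary at x = 100

# Hypothetical MT-to-V1 feedback for rightward motion: excitation slightly
# ahead of the boundary, opponent-direction inhibition slightly behind it.
excitation = 0.1 * np.exp(-(x - 105.0) ** 2 / (2 * (2 * sigma) ** 2))
inhibition = 0.1 * np.exp(-(x - 95.0) ** 2 / (2 * (2 * sigma) ** 2))

# Half-wave-rectified combination, as in standard firing-rate models.
response = np.maximum(boundary + excitation - inhibition, 0.0)

print("boundary peak :", x[np.argmax(boundary)])   # 100.0
print("shifted peak  :", x[np.argmax(response)])   # displaced in the motion direction
```

Under these assumptions the response peak moves a few positions toward the direction of motion, which is the sense in which weak (briefly flashed, illusory, or low-contrast) boundaries are described above as susceptible to motion-induced position shifts.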
Supported in part by AFOSR and ONR.