Abstract
The perception of visual motion plays a pivotal role in interpreting the world around us. To interpret visual scenes, local motion features need to be selectively integrated and segmented into distinct moving objects. Integration helps to overcome motion ambiguity in the visual image through spatial pooling, whereas segmentation identifies differences between adjacent moving objects. In this talk we will summarize our recent findings on how motion integration and segmentation may be achieved via “surround modulation” in visual cortex and will discuss the remaining challenges. Neuronal responses to stimuli within the classical receptive field (CRF) of neurons in area MT (V5) can be modulated by stimuli in the CRF surround. Previous investigations have reported that the directional tuning of surround modulation in area MT is mainly antagonistic and hence consistent with segmentation. We have found that surround modulation in area MT can be either antagonistic or integrative depending upon the visual stimulus. Furthermore, we have found that the directional tuning of the surround modulation is related to response magnitude: stimuli eliciting the largest responses yield the strongest antagonism, and those eliciting the smallest responses yield the strongest integration. We speculate that input strength is, in turn, linked with the ambiguity of the motion present within the CRF: unambiguously moving features usually evoke stronger neuronal responses than do ambiguously moving features. Our modeling study suggests that changes in MT surround modulation result from shifts in the balance between directionally tuned excitation and inhibition mediated by changes in input strength.
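As a loose illustration of the proposed mechanism (not the authors' actual model), the toy rate model below assumes that a same-direction surround stimulus supplies a fixed excitatory drive plus an inhibitory drive that is recruited only when the CRF input is strong, so weak (ambiguous) center input yields net facilitation and strong (unambiguous) input yields net suppression. All parameter names, weights, and the thresholding rule are hypothetical.

```python
def mt_response(center_drive, surround_drive,
                w_exc=0.3, w_inh=1.0, inh_threshold=0.4):
    """Toy rate model (illustrative sketch only).

    The surround contributes directionally tuned excitation plus
    inhibition that engages only once the center drive exceeds a
    threshold, shifting the excitation/inhibition balance with
    input strength.
    """
    excitation = w_exc * surround_drive
    # Inhibition is gated by the strength of the center (CRF) input.
    inhibition = w_inh * surround_drive * max(center_drive - inh_threshold, 0.0)
    return max(center_drive + excitation - inhibition, 0.0)


for drive in (0.2, 0.5, 1.0):          # weak -> strong CRF input
    alone = mt_response(drive, 0.0)    # CRF stimulus alone
    paired = mt_response(drive, 1.0)   # same-direction surround stimulus added
    effect = "integration" if paired > alone else "antagonism"
    print(f"center drive {drive:.1f}: {alone:.2f} -> {paired:.2f}  ({effect})")
```

Running the sketch, the weak-drive case shows surround facilitation and the strong-drive case shows surround suppression, mirroring the qualitative claim that response (input) strength determines whether surround modulation is integrative or antagonistic.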