Abstract
Identical local motion signals can arise from an infinite number of object motions in the world. Here we address the question of how the visual system constructs percepts of moving objects in the face of this fundamental ambiguity. To do so, the visual system must integrate motion signals arising from different locations along an object's contour. Difficulties arise, however, because the contours present in a visual scene can derive from multiple objects and from occlusion. Correctly integrating each object's motion signals therefore presupposes a specification of what counts as an object. Depending on how this form-analysis problem is solved, dramatically different object-motion percepts can be constructed from the same set of local motion signals. In the present study, we used fMRI to investigate the mechanisms underlying the segmentation and integration of motion signals that are critical to motion perception in general.

Methods: In a block-design fMRI experiment, we held the number of image objects constant but varied whether these objects were perceived to move independently.

Results: The BOLD signal in V3v, V4v, V3A, V3B, and MT varied with the perceived number of distinct sources of motion information present in the visual scene.

Conclusion: These data support the hypothesis that these areas integrate form and motion information to segment motion into independent sources (i.e., objects), thereby overcoming ambiguities that arise at the earliest stages of motion processing.