Finnegan Calabro, Lucia-Maria Vaina; Detection of object motion during self-motion: psychophysics and neuronal substrate. Journal of Vision 2011;11(11):722. doi: 10.1167/11.11.722.
© 2017 Association for Research in Vision and Ophthalmology.
The extraction of object motion from a visual scene is critical for planning direct interactions with one's surroundings, and is of particular interest and difficulty when the observer is moving. To investigate the visual processes underlying object motion detection during self-motion, we presented observers (n = 23) with a stimulus containing nine objects, eight of which moved consistently with forward observer translation, and one of which (the target) had independent motion within the scene. Results showed that observers' ability to detect the target depended significantly on the speed of the object within the scene (Exp 1), but that performance was independent of observer speed, and therefore of retinal velocity (Exp 2, n = 7). Results were compared to predicted performance for target selection based on relative differences in speed and direction among the objects, and were not consistent with either strategy. Instead, these data suggest that observers used a flow parsing mechanism in which self-motion is estimated and subtracted from the flow field. In an event-related fMRI paradigm using the task from Exp 1, we found a distributed pattern of activations of occipito-temporal, posterior parietal and parieto-frontal areas. Granger causality analysis among these activated regions revealed two major highly connected networks. One network involved a set of interconnected early, bilateral, visually responsive areas (including KO, hMT+ and VIPS). We posit that these regions underlie the perception and formation of a visual representation of the stimulus. The second network comprised primarily higher-level, left hemisphere areas (including DIPSM, FEF, subcentral sulcus and postcentral gyrus) that have been reported to be involved in the use of sensory inputs for preparing motor commands. We suggest that these networks provide a link between the perceptual representation of the visual stimulus and its interpretation for action.
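The flow-parsing account described above can be illustrated with a minimal computational sketch. Everything here (the object positions, the purely radial self-motion flow, the least-squares estimator, and the numeric values) is an illustrative assumption for exposition, not the authors' stimulus or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: 9 objects at random image positions (degrees).
positions = rng.uniform(-10, 10, size=(9, 2))

# Forward observer translation yields radial flow away from the focus
# of expansion (here the image centre); speed grows with eccentricity.
self_motion_gain = 0.5
flow = self_motion_gain * positions  # (deg/s) purely expansive field

# One target object carries independent motion within the scene.
target = 3
flow[target] += np.array([1.2, -0.8])

# Flow parsing: estimate the self-motion component and subtract it,
# leaving only scene-relative object motion. The expansion gain is
# recovered here by a least-squares fit of observed flow onto the
# radial pattern (an assumed estimator, chosen for simplicity).
gain_est = np.sum(flow * positions) / np.sum(positions * positions)
residual = flow - gain_est * positions

# The object with the largest residual motion is flagged as the target.
detected = int(np.argmax(np.linalg.norm(residual, axis=1)))
print(detected)
```

Because the single independently moving object biases the fitted gain only slightly, its residual dominates after subtraction, which is why detection in this scheme depends on the object's speed within the scene rather than on raw retinal velocity.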