Philip N. Sabes; Sensory integration during motor planning. Journal of Vision 2003;3(12):5. doi: https://doi.org/10.1167/3.12.5.
When planning visually guided reaches, we must estimate the position of our arm by integrating visual and proprioceptive signals from the sensory periphery. These integrated position estimates are required at two stages of motor planning: first to determine the desired movement vector, and second to transform the movement vector into a joint-based motor command. We quantified the contributions of each sensory modality to the position estimate formed at each planning stage. Subjects made reaches in a virtual reality environment in which vision and proprioception were dissociated by displacing the location of visual feedback. The relative weighting of vision and proprioception at each stage was then determined using computational models of feedforward motor control. We found that the position estimate used for movement vector planning relies mostly on visual input, whereas the estimate used to compute the joint-based motor command relies more on proprioceptive signals. This suggests that when estimating the position of the arm, the brain selects different combinations of sensory input based on the computation in which the resulting estimate will be used. In order to further test this idea, we have quantified the effects on multisensory integration of altering the computational demands of the task by having subjects reach to visually- versus proprioceptively-defined targets.
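The two-stage weighting scheme described above can be sketched as a minimal linear cue-combination model. The weights (`w_vis_vector`, `w_vis_joint`) and function names below are illustrative assumptions, not values or notation from the article; the sketch only shows the structure of the idea: a mostly-visual estimate for computing the movement vector, and a mostly-proprioceptive estimate for converting that vector into an endpoint goal for the motor command.

```python
import numpy as np

def integrate(visual, proprioceptive, w_visual):
    """Linear cue combination: weighted average of two position signals.

    w_visual is the weight on vision; (1 - w_visual) falls on proprioception.
    """
    return w_visual * np.asarray(visual, float) + \
        (1.0 - w_visual) * np.asarray(proprioceptive, float)

def plan_reach(target, visual_hand, proprio_hand,
               w_vis_vector=0.8, w_vis_joint=0.3):
    """Two-stage feedforward plan with stage-specific sensory weights.

    The specific weights here are hypothetical placeholders chosen to
    reflect the qualitative finding (vision dominates vector planning,
    proprioception dominates the motor-command stage).
    """
    # Stage 1: movement-vector planning uses a mostly visual hand estimate.
    hand_for_vector = integrate(visual_hand, proprio_hand, w_vis_vector)
    movement_vector = np.asarray(target, float) - hand_for_vector

    # Stage 2: the vector is applied to a mostly proprioceptive hand
    # estimate, yielding the endpoint goal passed to the motor command.
    hand_for_command = integrate(visual_hand, proprio_hand, w_vis_joint)
    endpoint_goal = hand_for_command + movement_vector
    return movement_vector, endpoint_goal
```

Note the design consequence the model makes explicit: when visual feedback is displaced so the two signals disagree, the two stages form different hand estimates, and the planned endpoint deviates from the target in a way that depends on both weights.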