Erik J Schlicht, Paul R Schrater; Bayesian model for reaching and grasping peripheral and occluded targets. Journal of Vision 2003;3(9):261. doi: https://doi.org/10.1167/3.9.261.
To make a successful reach, the visual system must take into account the accuracy of its knowledge of the location and size of an object. The spatial certainty of a target's location with respect to the hand is limited by the eccentricity of viewing (Hess & Hayes, 1994) and by the ability to convert a target from retinal coordinates to arm-centered coordinates in the presence of noisy transformations. In previous work (Schlicht et al., VSS 2001), we found that for visible targets, maximum grip aperture was at a minimum near the target location and increased linearly with distance from the target. More surprisingly, maximum grip apertures for reaches to occluded targets show a dependence on eye position, even though there is no visual information specifying target location. This dependence takes the form of a U-shaped grip aperture function centered on the forward-view eye position, irrespective of the target location. We developed a Bayesian model that can account for these changes in grip aperture. We assume that maximum grip aperture is a measure of spatial uncertainty in the observer's estimate of the target location (Paulignan, 1991). Spatial uncertainty is modeled as increasing with target eccentricity and with the amount of noise in the eye-to-arm coordinate transformations. We postulate that eye-position noise increases with deviations from the forward-view direction and that target locations are computed in eye-centered coordinates. Under both of these assumptions, the model accounts for both the visual and the eye-position effects observed in our previous findings. This modeling effort suggests that both the quality of visual information and the noise in sensorimotor transformations are taken into account when planning reaching movements. In addition, we suggest that target locations are converted to and stored in eye-centered coordinates, even when the information about target location is not visual.
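The qualitative structure of the model can be illustrated with a minimal sketch. Assuming (hypothetically; these parameters and functional forms are illustrative, not the authors' fitted model) that visual noise grows linearly with retinal eccentricity, that eye-to-arm transformation noise grows with deviation from the forward view, and that the independent noise sources combine in quadrature, grip aperture scales with the resulting positional uncertainty:

```python
import math

# Hypothetical, illustrative parameters -- not fitted values from the study.
SIGMA_VIS_0 = 0.5    # baseline visual noise at the fovea (cm)
K_ECC = 0.10         # growth of visual noise with retinal eccentricity (cm/deg)
SIGMA_XFORM_0 = 0.3  # baseline eye-to-arm transformation noise (cm)
K_EYE = 0.05         # growth of transformation noise with eye deviation (cm/deg)
BASE_GRIP = 6.0      # aperture for a perfectly localized target (cm)
K_GRIP = 2.0         # extra aperture per cm of positional uncertainty

def grip_aperture(target_deg, eye_deg, visible=True):
    """Predicted maximum grip aperture (cm) from spatial uncertainty."""
    # Transformation noise grows with deviation from forward view (eye_deg = 0).
    sigma_xform = SIGMA_XFORM_0 + K_EYE * abs(eye_deg)
    if visible:
        # Visual noise grows with retinal eccentricity (target relative to gaze).
        eccentricity = abs(target_deg - eye_deg)
        sigma_vis = SIGMA_VIS_0 + K_ECC * eccentricity
        sigma = math.hypot(sigma_vis, sigma_xform)  # independent noise in quadrature
    else:
        # Occluded target: no visual term, so only transformation noise remains,
        # yielding a U-shaped aperture curve centered on forward view,
        # independent of where the target actually is.
        sigma = sigma_xform
    return BASE_GRIP + K_GRIP * sigma
```

In this sketch, visible-target apertures are smallest when gaze lands on the target and grow with eccentricity, while occluded-target apertures depend only on eye position, reproducing the two qualitative effects the abstract describes.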