October 2003
Volume 3, Issue 9
Vision Sciences Society Annual Meeting Abstract
Bayesian model for reaching and grasping peripheral and occluded targets
Author Affiliations
  • Erik J Schlicht
    University of Minnesota, USA
  • Paul R Schrater
    University of Minnesota, USA
Journal of Vision October 2003, Vol.3, 261. doi:https://doi.org/10.1167/3.9.261
Abstract

To make a successful reach, the visual system must take into account the accuracy of its knowledge of the location and size of an object. The spatial certainty of a target's location with respect to the hand is limited by the eccentricity of viewing (Hess & Hayes, 1994), and by the need to convert the target's retinal coordinates to arm-centered coordinates through noisy transformations. In previous work (Schlicht et al., VSS 2001), we found that for visible targets, maximum grip aperture was at a minimum near the target location and increased linearly with distance from the target. More surprisingly, maximum grip apertures for reaches to occluded targets show a dependence on eye position, even though there is no visual information specifying target location. This dependence takes the form of a U-shaped grip aperture function centered on the forward-view eye position, irrespective of the target location. We developed a Bayesian model that can account for these changes in grip aperture. We assume that maximum grip aperture is a measure of spatial uncertainty in the observer's estimate of the target location (Paulignan, 1991). Spatial uncertainty is modeled as increasing with target eccentricity and with the amount of noise in the eye-to-arm coordinate transformations. We postulate that eye-position noise increases with deviations from the forward-view direction and that target locations are computed in eye-centered coordinates. With these two assumptions, the model accounts for both the visual and eye-position effects observed in our previous findings. This modeling effort suggests that both the quality of visual information and the noise in sensorimotor transformations are taken into account when planning reaching movements. In addition, we suggest that target locations are converted and stored in eye-centered coordinates, even when the information about target location is not visual.
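
As a rough illustration only (not the authors' fitted model), the qualitative scheme described above can be sketched as independent Gaussian noise sources that add in variance: retinal localization noise grows with target eccentricity, eye-to-arm coordinate-transformation noise grows with the eye's deviation from the forward view, and the planned grip aperture scales with the resulting positional standard deviation. All function names, parameter values, and the linear noise forms below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of the qualitative model in the abstract.
# Parameter values and functional forms are assumptions, not fitted values.

def retinal_noise_sd(target_ecc_deg, base_sd=0.3, slope=0.05):
    """Visual localization noise (cm) grows with target eccentricity (deg).
    For occluded targets there is no visual estimate, so this term is
    replaced by a broad prior uncertainty."""
    return base_sd + slope * np.abs(target_ecc_deg)

def transform_noise_sd(eye_dev_deg, base_sd=0.2, slope=0.03):
    """Eye-to-arm coordinate-transformation noise (cm) grows with the
    deviation of eye position from the forward view (deg)."""
    return base_sd + slope * np.abs(eye_dev_deg)

def grip_aperture(target_ecc_deg, eye_dev_deg, occluded=False,
                  object_size=4.0, safety_gain=2.0, prior_sd=1.5):
    """Maximum grip aperture (cm) = object size plus a safety margin
    proportional to the standard deviation of the target-location estimate.
    Independent Gaussian noise sources add in variance."""
    visual_var = prior_sd**2 if occluded else retinal_noise_sd(target_ecc_deg)**2
    total_sd = np.sqrt(visual_var + transform_noise_sd(eye_dev_deg)**2)
    return object_size + safety_gain * total_sd

# Visible target: aperture is smallest when gaze is near the target and
# grows roughly linearly as the target becomes more eccentric.
for ecc in [0, 10, 20]:
    print("visible, eccentricity %2d deg: %.2f cm" % (ecc, grip_aperture(ecc, eye_dev_deg=ecc)))

# Occluded target: with no visual term, aperture depends only on eye
# deviation from forward view, giving a U-shaped function centered on it.
for dev in [-20, 0, 20]:
    print("occluded, eye deviation %3d deg: %.2f cm" % (dev, grip_aperture(0, dev, occluded=True)))
```

Under these assumptions, the visible-target apertures track eccentricity while the occluded-target apertures are minimal at the forward view, matching the pattern the abstract reports.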

Schlicht, E. J., & Schrater, P. R. (2003). Bayesian model for reaching and grasping peripheral and occluded targets [Abstract]. Journal of Vision, 3(9):261, 261a, http://journalofvision.org/3/9/261/, doi:10.1167/3.9.261.