December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract  |   December 2022
Sensorimotor learning of depth cue combination for reach-to-grasp actions
Author Affiliations
  • James Wilmott
    Brown University
  • Fulvio Domini
    Brown University
Journal of Vision December 2022, Vol.22, 4264. doi:
      James Wilmott, Fulvio Domini; Sensorimotor learning of depth cue combination for reach-to-grasp actions. Journal of Vision 2022;22(14):4264.

      © ARVO (1962-2015); The Authors (2016-present)

An observer’s planned grip size during reach-to-grasp actions is a combination of the estimates derived from visual depth cues like binocular disparity and texture gradients. Here, we investigate how sensorimotor learning mechanisms dynamically adjust cue combination based on experience with haptically derived estimates of object shape, termed 3D cue remapping. Classic models of depth perception predict that 3D cue remapping occurs when the visual system detects a mismatch between visual and haptic information (i.e., ‘cue reweighting’). According to these models, a cue conflict between visual cue estimates is required to determine which estimator should be adjusted so that the combined-cue estimate aligns with haptic feedback. A recently developed alternative model, named Intrinsic Constraint, proposes that cue combination is approximated as a vector sum, resulting in larger depth estimates for stimuli with more cues (e.g., disparity only vs. disparity and texture). Accordingly, we reasoned that if an observer repeatedly grasps an unchanging shape while viewing cue-consistent virtual renderings containing alternately fewer or more cues, the visuomotor system should detect sensorimotor errors for stimuli with additional cues and attenuate the influence of those cues in subsequent depth judgements. We tested this prediction psychophysically. Participants repeatedly grasped a physical object of unchanging size while viewing virtual objects with one, two, or three cues, which were confirmed to produce different depth estimates in a pre-test. Importantly, the physical object and all cues were rendered to specify consistent depth. Across the experiment, we observed a change in motor planning consistent with 3D cue remapping: planned grip size was initially larger for stimuli with more cues, but these differences decreased with repeated grasps. This learning is predicted by a computational model that dynamically adjusts cue combination based on the pattern of sensorimotor error signals obtained across grasps.
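The two key ideas of the abstract — vector-sum cue combination yielding larger depth estimates with more cues, and error-driven attenuation of cue influence across grasps — can be sketched numerically. This is an illustrative toy model, not the authors' implementation: the 40 mm object depth, the specific cue gains, the learning rate, and the gradient-descent update rule are all assumptions chosen for demonstration.

```python
import numpy as np

def combined_depth(cue_signals, gains):
    """Vector-sum combination (Intrinsic Constraint style): the combined
    estimate is the norm of the gain-weighted single-cue depth signals,
    so adding consistent cues increases the estimated depth."""
    return float(np.linalg.norm(np.asarray(gains) * np.asarray(cue_signals)))

# Hypothetical single-cue depth signals for a 40 mm object; all cues are
# rendered consistent, as in the experiment described above.
cues = np.array([40.0, 40.0, 40.0])  # e.g., disparity, texture, a third cue
gains = np.ones(3)

one_cue = combined_depth(cues[:1], gains[:1])   # 40.0
three_cue = combined_depth(cues, gains)         # 40*sqrt(3), about 69.3

# Toy sensorimotor learning: after each grasp, haptic feedback reports the
# true depth; cue gains are reduced by gradient descent on the squared
# sensorimotor error, attenuating the over-estimation for multi-cue stimuli.
true_depth = 40.0
lr = 0.0005  # assumed learning rate
for _ in range(200):
    est = combined_depth(cues, gains)
    error = est - true_depth               # signed sensorimotor error
    grad = gains * cues**2 / est           # d(est)/d(gains) for the norm
    gains -= lr * error * grad

final = combined_depth(cues, gains)  # converges toward the true 40 mm
```

The sketch reproduces the qualitative pattern reported in the abstract: planned grip size (the combined estimate) starts larger for multi-cue stimuli and shrinks toward the haptically specified depth with repeated grasps.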

