September 2016
Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract
Conscious perception and grasping rely on a shared depth encoding
Author Affiliations
  • Carlo Campagnoli
    Department of Cognitive, Linguistic & Psychological Sciences, Brown University
  • Fulvio Domini
    Department of Cognitive, Linguistic & Psychological Sciences, Brown University
Journal of Vision September 2016, Vol.16, 449. doi:https://doi.org/10.1167/16.12.449
      Carlo Campagnoli, Fulvio Domini; Conscious perception and grasping rely on a shared depth encoding. Journal of Vision 2016;16(12):449. https://doi.org/10.1167/16.12.449.

      © ARVO (1962-2015); The Authors (2016-present)
Abstract

Accurate estimation of object depth is widely believed to underlie the successful execution of grasping movements. Specifically, it has been proposed that retinal disparities play a critical role in grip pre-shaping and stable finger placement on the object surface, under the assumption that stereo information produces veridical depth estimates. If this were the case, then grasp kinematics should be accurately attuned to the veridical 3D structure of objects. Moreover, they should not be affected by the systematic biases that are known to distort perceptual judgements of object depth. Here we tested these predictions by presenting participants with 3D virtual objects defined by various combinations of stereo, motion and texture information at two egocentric distances. In separate sessions, participants were asked to (a) manually estimate the perceived front-to-back extent of the targets or (b) naturally reach-to-grasp the targets. In the perceptual task, we found that stereo-only objects presented at the far distance (45 cm) appeared most shallow; perceptual depth increased (1) by bringing the objects closer to the observer (30 cm) and (2) by adding either motion or texture information to the baseline stereo objects. These findings are consistent with the idea that visual space is compressed and also that depth estimation is non-veridical. Remarkably, the results of the grasping task revealed that the anticipatory opening of the hand followed the same patterns as the manual depth estimates – the grip aperture was smallest when grasping the distant, stereo-only objects and largest when grasping near, multiple-cue objects. These findings show that visual processing of shape for action control shares the same intrinsic biases known to influence depth perception, providing further evidence for common coding of object properties for perception and action.

Meeting abstract presented at VSS 2016
