September 2018
Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2018
Predicting how we grasp arbitrary objects
Author Affiliations
  • Lina Klein
    Department of Psychology, Justus-Liebig University Giessen
  • Guido Maiello
    Department of Psychology, Justus-Liebig University Giessen
  • Daria Proklova
    Department of Psychology, Brain and Mind Institute, University of Western Ontario
  • Juan Chen
    Department of Psychology, Brain and Mind Institute, University of Western Ontario
  • Vivian Paulun
    Department of Psychology, Justus-Liebig University Giessen
  • Jody Culham
    Department of Psychology, Brain and Mind Institute, University of Western Ontario
  • Roland Fleming
    Department of Psychology, Justus-Liebig University Giessen
Journal of Vision September 2018, Vol.18, 179. doi:10.1167/18.10.179

      Lina Klein, Guido Maiello, Daria Proklova, Juan Chen, Vivian Paulun, Jody Culham, Roland Fleming; Predicting how we grasp arbitrary objects. Journal of Vision 2018;18(10):179. doi: 10.1167/18.10.179.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

We investigated the computations underlying visually guided grasp selection for three-dimensional objects of non-uniform materials, and the brain areas involved in this process. In behavioral experiments, 26 participants picked up objects composed of 10 cubes (each cube made of wood or brass, side length 2.5 cm) in various configurations to test how an object's visually perceived three-dimensional shape and material properties affect grasp locations. We built 16 objects (4 shapes × 4 material configurations), which we presented to participants in 2 orientations. The results reveal that grasping is highly regular, constrained, and consistent across participants. Specifically, grasp locations are systematically affected by overall weight, mass distribution, the length of the reach trajectory, the subject's natural grip axis, and shape properties such as the presence of obvious handles. From these findings we developed a generalized grasp selection model that predicts human grasp locations strikingly well (essentially as well as individuals predict one another). Based on the model's predictions, we created a new set of shapes and a pre-selected subset of grasp positions designed to tease apart the different components of visual grasp selection. For example, some grasps were optimal with respect to the human natural grip axis but suboptimal with respect to minimizing the net torque acting on the object, and vice versa. In a functional magnetic resonance imaging (fMRI) experiment, we recorded BOLD activity while participants planned and executed grasps of the new objects at the pre-selected grasp points. We used representational similarity analysis on the voxel activation patterns to test how well different model components accounted for the activation patterns in various brain regions. Thus, by combining behavioral data, computational modelling, and fMRI, we can predict how humans grasp objects.
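The cue-combination idea described in the abstract can be illustrated as a cost function over candidate grasps that trades off net torque, alignment with the natural grip axis, and reach length. This is a minimal sketch under stated assumptions, not the authors' actual model: the densities, weights, and all function names (`center_of_mass`, `grasp_cost`, `best_grasp`) are hypothetical, and the weights would in practice be fitted to behavioral data.

```python
import numpy as np

# Illustrative sketch only -- densities (g/cm^3) and weights are assumptions.
# Cube side length 2.5 cm matches the stimuli described in the abstract.
DENSITY = {"wood": 0.7, "brass": 8.5}
SIDE = 2.5
CUBE_VOLUME = SIDE ** 3

def center_of_mass(cube_centers, materials):
    """Mass-weighted centroid of the cube assembly.

    cube_centers -- N x 3 array of cube center coordinates (cm)
    materials    -- list of N strings, "wood" or "brass"
    Returns (com, total_mass).
    """
    masses = np.array([DENSITY[m] * CUBE_VOLUME for m in materials])
    com = masses @ np.asarray(cube_centers, float) / masses.sum()
    return com, masses.sum()

def grasp_cost(p1, p2, com, hand_start, natural_axis, w=(1.0, 1.0, 1.0)):
    """Combine three of the cues named in the abstract into one scalar cost.

    p1, p2       -- candidate thumb/finger contact points (3-vectors)
    com          -- object's center of mass
    hand_start   -- starting hand position (for the reach-length term)
    natural_axis -- unit vector of the subject's natural grip axis
    w            -- illustrative (unfitted) weights for the three terms
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    natural_axis = np.asarray(natural_axis, float)
    grip_center = 0.5 * (p1 + p2)
    grip_axis = (p2 - p1) / np.linalg.norm(p2 - p1)

    # 1. Torque term: distance of the COM from the grip line
    #    (gravity rotates the object about the line between the contacts).
    torque = np.linalg.norm(np.cross(com - p1, grip_axis))
    # 2. Grip-axis term: misalignment with the natural grip axis.
    misalignment = 1.0 - abs(grip_axis @ natural_axis)
    # 3. Reach term: length of the transport movement.
    reach = np.linalg.norm(grip_center - np.asarray(hand_start, float))

    return w[0] * torque + w[1] * misalignment + w[2] * reach

def best_grasp(candidates, com, hand_start, natural_axis):
    """Pick the minimum-cost grasp among candidate contact-point pairs."""
    return min(candidates,
               key=lambda g: grasp_cost(g[0], g[1], com, hand_start, natural_axis))
```

On this toy account, a brass-heavy end pulls the center of mass toward itself, so a grasp through the COM beats a geometrically centered grasp on the torque term, while the grip-axis and reach terms can push the choice elsewhere; the abstract's second stimulus set was designed to put exactly these terms in conflict.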

Meeting abstract presented at VSS 2018
