Abstract
We investigated the computations underlying visually guided grasp selection for three-dimensional objects of non-uniform materials, and the brain areas involved in this process. In behavioral experiments, 26 participants picked up objects composed of 10 cubes (each cube made of wood or brass, side length 2.5 cm) in various configurations to test how an object's visually perceived three-dimensional shape and material properties affect grasp locations. We built 16 objects (4 shapes × 4 material configurations), which we presented to participants in 2 orientations. The results reveal that grasping is highly regular, constrained, and consistent across participants. Specifically, grasp locations are systematically affected by overall weight and mass distribution, the length of the reach trajectory, the participant's natural grip axis, and shape properties such as the presence of obvious handles. We employed these findings to develop a generalized grasp selection model that predicts human grasp locations strikingly well (essentially as well as individuals predict one another's grasps). Based on the model's predictions, we created a new set of shapes and a pre-selected subset of grasp positions designed to tease apart the different components of visual grasp selection. For example, some grasps were optimal with respect to the natural grip axis but suboptimal with respect to minimizing the net torque acting on the object, and vice versa. In a functional magnetic resonance imaging (fMRI) experiment, we recorded BOLD activity while participants planned and executed grasps of the new objects at the pre-selected grasp points. We used representational similarity analysis on the voxel activation patterns to test how well the different model components accounted for the activation patterns in various brain regions. Thus, by combining behavioral data, computational modeling, and fMRI, we can predict how humans grasp objects.
Meeting abstract presented at VSS 2018
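To make the model components named in the abstract concrete, the following is a minimal sketch (Python/NumPy) of how a per-grasp cost could combine net torque, grip-axis alignment, and reach length for a two-contact grasp on a cube-based object. This is not the authors' implementation: the linear combination, the function and parameter names (`grasp_cost`, `natural_axis`, `weights`), and the unweighted mixing of units are illustrative assumptions.

```python
import numpy as np

def grasp_cost(p1, p2, cube_centers, cube_masses,
               hand_start, natural_axis, weights=(1.0, 1.0, 1.0)):
    """Illustrative cost of a two-contact grasp at points p1 and p2 (3-vectors).

    Hypothetical stand-ins for the components named in the abstract:
      torque   - gravitational torque about the grasp axis; grows when the
                 grasp sits far from the object's (non-uniform) center of mass
      grip_dev - angular deviation of the grasp axis from the participant's
                 natural grip axis
      reach    - distance from the hand's start position to the grasp midpoint
    """
    cube_centers = np.asarray(cube_centers, float)   # (n_cubes, 3)
    cube_masses = np.asarray(cube_masses, float)     # (n_cubes,) wood vs. brass
    total_mass = cube_masses.sum()
    com = (cube_masses[:, None] * cube_centers).sum(axis=0) / total_mass

    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    midpoint = (p1 + p2) / 2.0
    axis = (p2 - p1) / np.linalg.norm(p2 - p1)

    # Torque of the object's weight about the grasp axis (gravity along -z).
    gravity_force = np.array([0.0, 0.0, -9.81]) * total_mass
    torque = abs(np.dot(np.cross(com - midpoint, gravity_force), axis))

    # Misalignment between the grasp axis and the natural grip axis (radians).
    nat = np.asarray(natural_axis, float)
    nat /= np.linalg.norm(nat)
    grip_dev = np.arccos(np.clip(abs(np.dot(axis, nat)), 0.0, 1.0))

    # Reach cost: straight-line distance to the grasp midpoint.
    reach = np.linalg.norm(midpoint - np.asarray(hand_start, float))

    w_t, w_a, w_r = weights
    return w_t * torque + w_a * grip_dev + w_r * reach
```

In practice each term would be normalized and the weights fitted to the behavioral grasp data; the abstract's example of grasps that are optimal for the natural grip axis but suboptimal for torque (and vice versa) corresponds to comparing the individual terms rather than the weighted sum.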
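Likewise, a bare-bones sketch of the representational similarity analysis step is given below. It assumes correlation-distance neural RDMs, one scalar model predictor per grasp condition (e.g., predicted net torque or grip-axis deviation), and Spearman rank correlation between RDMs; the function names and these specific metric choices are assumptions, not details stated in the abstract.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def neural_rdm(patterns):
    """Condensed RDM: correlation distance (1 - Pearson r) between every pair
    of condition patterns. `patterns` is (n_conditions, n_voxels), e.g. one
    activation pattern per planned/executed grasp condition in a region."""
    return pdist(np.asarray(patterns, float), metric="correlation")

def model_rdm(values):
    """Condensed RDM from one scalar model component per condition (e.g. net
    torque per grasp): dissimilarity = absolute difference of the values."""
    return pdist(np.asarray(values, float).reshape(-1, 1), metric="euclidean")

def rsa_correlation(patterns, values):
    """Rank correlation between a region's neural RDM and a model-component RDM:
    higher values mean the component better accounts for the pattern geometry."""
    rho, _ = spearmanr(neural_rdm(patterns), model_rdm(values))
    return rho
```

Comparing `rsa_correlation` across regions and across model components is one standard way to ask which component best accounts for the activation patterns in each brain area.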