Abstract
Successfully interacting with objects requires choosing appropriate grasp locations. By taking into account an object's shape, material properties, and the desired action, humans identify stable, comfortable grasp points that minimize slippage and torsion. The "contact point selection model" (Kleinholdermann et al., 2013) successfully predicts precision grip grasping for a range of two-dimensional objects. However, the rules determining grasp point selection for three-dimensional objects remain unclear. In this study, we tested how an object's visually perceived shape affects precision grip grasp locations. We created four differently shaped objects, each out of 10 wooden cubes (2.5 cm³), which were presented to right-handed participants in either of two orientations. Starting at one of two different start locations, the participants' task was to pick each object up with finger and thumb and place it on an elevated plate. An Optotrak system allowed us to track participants' fingertips as they reached for, grasped, and handled the objects. Results showed that grasp locations were influenced by the object's center of mass (COM), partial occlusions of the object by the participant's hand, and shape properties such as the presence of handles: where a handle (cubes stacked on top of each other) was conveniently placed, it was grasped despite large deviations from the COM, whereas objects without such handles were grasped at points located closer to the COM. The timing of movements toward the objects, and while holding them, was also influenced by the objects' particular shape. These findings provide initial constraints on a generalized contact point selection model for 3D objects. In turn, such a model also has implications for the visual representation of 3D shape.
Meeting abstract presented at VSS 2017