Abstract
When we see objects, we immediately know how to interact with them. Yet little research has examined what information people glean from objects about the interactions they support. Here, we first used language to tap into humans’ knowledge of what actions can be performed with an object. Using a large database of ~1850 object categories (THINGS database) and ~5000 verbs, we identified occurrences of each verb applied to each object in a large text corpus. We then used these data to embed each object in a space whose dimensions correspond to verbs that apply to similar objects. In behavioral experiments, we showed that these extracted embedding dimensions are meaningful to human observers. Next, to reveal people’s understanding of potential actions towards objects, we conducted online behavioral experiments in which we presented images of individual objects from the THINGS database and asked people which actions they associate with the objects and which body parts they use to interact with them. Many objects, including both tool and non-tool items, had strong action associations. Although the hand was the most commonly implicated body part, other body parts were also reported to be heavily involved in interacting with objects. Together, these results indicate strong object-action associations, evident both in text corpora and in people’s reports from viewing pictures of objects. They uncover the richness of object interactions and argue for moving beyond simple hand grasps and beyond the specific category of tools in future behavioral and neuroscientific experiments.
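As a rough illustration of the corpus-based embedding step summarized above, the sketch below shows how verb-object co-occurrence counts can be turned into object embeddings whose dimensions reflect verbs that apply to similar objects. The object names, verb list, and counts are toy placeholders, and the PPMI weighting plus SVD factorization is one plausible choice, not necessarily the exact method used in the paper.

```python
import numpy as np

# Placeholder object categories and verbs; the actual data comprise
# ~1850 THINGS objects and ~5000 verbs mined from a text corpus.
objects = ["hammer", "apple", "chair"]
verbs = ["grasp", "swing", "eat", "sit"]

# counts[i, j] = how often verb j is applied to object i (toy values)
counts = np.array([
    [30, 25,  0,  1],   # hammer
    [20,  0, 40,  0],   # apple
    [ 5,  0,  0, 50],   # chair
], dtype=float)

# Positive pointwise mutual information (PPMI) weighting, a common choice
# for count-based embeddings (an assumption here, not confirmed by the abstract).
total = counts.sum()
p_obj = counts.sum(axis=1, keepdims=True) / total
p_verb = counts.sum(axis=0, keepdims=True) / total
p_joint = counts / total
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log(p_joint / (p_obj * p_verb))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)

# Truncated SVD yields low-dimensional object embeddings; each latent
# dimension is a weighted combination of verbs that apply to similar objects.
U, S, Vt = np.linalg.svd(ppmi, full_matrices=False)
k = 2
object_embeddings = U[:, :k] * S[:k]

for name, vec in zip(objects, object_embeddings):
    print(name, np.round(vec, 3))
```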