September 2024, Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Object Affordances through the window of Verb Usage Patterns and Behavior
Author Affiliations
  • Maryam Vaziri-Pashkam
    Department of Psychological and Brain Sciences, University of Delaware
  • Ka-Chin Lam
    National Institute of Mental Health
  • Natalia Pallis-hassani
    National Institute of Mental Health
  • Aida Mirebrahimi
    Carnegie Mellon University
  • Aryan Zoroufi
    Massachusetts Institute of Technology
  • Francisco Pereira
    National Institute of Mental Health
  • Chris Baker
    National Institute of Mental Health
Journal of Vision September 2024, Vol. 24, 1462. https://doi.org/10.1167/jov.24.10.1462
Citation: Maryam Vaziri-Pashkam, Ka-Chin Lam, Natalia Pallis-hassani, Aida Mirebrahimi, Aryan Zoroufi, Francisco Pereira, Chris Baker; Object Affordances through the window of Verb Usage Patterns and Behavior. Journal of Vision 2024;24(10):1462. https://doi.org/10.1167/jov.24.10.1462.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

When we see objects, we immediately know how to interact with them. Yet little research has examined what information people glean from objects about the interactions they support. Here, we first used language as a means to tap into humans’ knowledge of what actions can be performed with an object. Using a large database of ~1850 object categories (the THINGS database) and ~5000 verbs, we identified applications of each verb to each object in a large text corpus. We then used these data to embed each object in a space whose dimensions correspond to verbs that apply to similar objects. We showed, in behavioral experiments, that these extracted embedding dimensions are meaningful to human observers. Next, to reveal people’s understanding of potential actions towards objects, we conducted online behavioral experiments in which we presented images of individual objects from the THINGS database and asked people which actions they associate with the objects and which body parts they use to interact with them. Many objects, including both tool and non-tool items, had strong action associations. Although the hand was the most common body part implicated, other body parts were also reported to be heavily involved in interacting with objects. Together, these results indicate strong object-action associations evident both in text corpora and in people’s reports from viewing pictures of objects. They uncover the richness of object interactions and argue for moving beyond simple hand grasps and beyond the specific category of tools in future behavioral and neuroscientific experiments.
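
The abstract does not specify how the verb-based object embedding was computed. The following is a minimal illustrative sketch of one plausible pipeline, assuming a precomputed object-by-verb co-occurrence count matrix, PPMI weighting, and non-negative matrix factorization so that each embedding dimension loads on verbs that apply to similar objects; the object names, verbs, and counts below are hypothetical toy values, not the authors’ data or method.

```python
# Hypothetical sketch: embedding objects by verb usage patterns.
# Assumes counts[i, j] = number of times verb j was applied to
# object i in a text corpus. The study used ~1850 THINGS object
# categories and ~5000 verbs; this toy example is far smaller.
import numpy as np
from sklearn.decomposition import NMF

objects = ["hammer", "cup", "ball"]            # hypothetical
verbs = ["grasp", "swing", "drink", "throw"]   # hypothetical
counts = np.array([
    [50, 40,  0,  5],   # hammer: mostly grasped and swung
    [60,  0, 45,  2],   # cup: grasped and drunk from
    [30, 10,  0, 55],   # ball: grasped and thrown
], dtype=float)

# PPMI weighting downweights verbs that are frequent everywhere.
total = counts.sum()
p_ij = counts / total
p_i = counts.sum(axis=1, keepdims=True) / total
p_j = counts.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log(p_ij / (p_i * p_j))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)

# Non-negative factorization gives interpretable dimensions: each
# component loads on verbs that tend to apply to the same objects.
model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
object_embedding = model.fit_transform(ppmi)   # objects x dimensions
verb_loadings = model.components_              # dimensions x verbs

for d, loading in enumerate(verb_loadings):
    top = [verbs[j] for j in np.argsort(loading)[::-1][:2]]
    print(f"dimension {d}: top verbs {top}")
```

NMF is used here only because its non-negative components are easy to read as groups of co-applicable verbs; SVD or word-embedding approaches would be equally plausible substitutes for the unspecified method.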
