Vision Sciences Society Annual Meeting Abstract  |  September 2021
Journal of Vision, Volume 21, Issue 9
Open Access
Perceiving social events in a physical world
Author Affiliations & Notes
  • Tianmin Shu
    Massachusetts Institute of Technology
  • Aviv Netanyahu
    Massachusetts Institute of Technology
  • Marta Kryven
    Massachusetts Institute of Technology
  • John Muchovej
    Massachusetts Institute of Technology
    Harvard University
  • Nakul Shenoy
    Massachusetts Institute of Technology
  • Boris Katz
    Massachusetts Institute of Technology
  • Andrei Barbu
    Massachusetts Institute of Technology
  • Tomer Ullman
    Harvard University
  • Josh Tenenbaum
    Massachusetts Institute of Technology
  • Footnotes
    Acknowledgements  This work was supported by NSF STC award CCF-1231216 (the Center for Brains, Minds and Machines), ONR MURI N00014-13-1-0333, the MIT-Air Force AI Accelerator, Toyota Research Institute, the DARPA GAILA program, and the ONR Science of Artificial Intelligence program.
Journal of Vision September 2021, Vol.21, 2463. doi:https://doi.org/10.1167/jov.21.9.2463
Abstract

How can we tell that a falling leaf is an object? That a ball hitting a basket was likely launched by an intelligent mind? The spontaneity of social perception hides a rich complexity of inferred agency: agents' intentions and plans, states of mind, and relationships, as well as reasoning about physical forces and constraints (Heider and Simmel, 1944). Many studies have examined aspects of perceived agency and proposed models of joint belief-desire inference, but most are limited to simple displays. Here, we introduce a system for generating Heider-Simmel-like animations of social interactions in a physical world. The animations can be synthesized automatically, using a hierarchical planner and a physics engine, or via an online interface in which humans control geometric shapes to enact social interactions. The resulting animations depict agents and objects in a continuous physical world with landmarks and obstacles. Agents have a limited field of view and can interact in ways such as helping, fighting, chasing, cooperating, and carrying. Our system enables procedural generation of hundreds of unique animations, which can be used for human studies or for benchmarking machine perception. The system records the trajectories of all entities, the forces exerted by agents, and the agents' goals, relationships, and strengths. Experimental evaluation shows that humans describe the depicted scenarios as a wide range of real-life social interactions, rate the simulated agent behaviors as highly human-like, and infer the agents' goals and relationships accurately. While human inferences of the agents' goals and relationships are predicted with high accuracy by a method based on Bayesian inverse planning, state-of-the-art DNN models fail to achieve similar results. In addition, we train a DNN to detect animacy using the synthesized stimuli, and probe which visual cues of animacy it learns and whether they match the well-known cues used by humans.
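To make the inverse-planning idea concrete, the sketch below shows one minimal way goal inference over an observed trajectory could be set up: assume a noisily rational agent whose actions are softmax-distributed around progress toward a goal, and invert that model with Bayes' rule to get a posterior over candidate goals. The 2D world, the discrete action set, the softmax agent model, and all names (ACTIONS, action_likelihoods, infer_goals) are illustrative assumptions, not the authors' implementation.

import numpy as np

# Candidate unit moves available to the agent at each step.
ACTIONS = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

def action_likelihoods(pos, goal, beta=2.0):
    """P(action | position, goal) for a noisily rational agent.

    Actions that reduce distance to the goal receive exponentially
    more probability mass; beta controls how deterministic
    (rational) the agent is assumed to be.
    """
    next_dists = np.linalg.norm(pos + ACTIONS - goal, axis=1)
    logits = -beta * next_dists
    probs = np.exp(logits - logits.max())  # subtract max for stability
    return probs / probs.sum()

def infer_goals(trajectory, goals, beta=2.0):
    """Posterior P(goal | trajectory), starting from a uniform prior.

    `trajectory` is a sequence of 2D positions; each observed step is
    matched to the nearest discrete action and scored under each goal.
    """
    log_post = np.zeros(len(goals))  # uniform prior in log space
    for pos, nxt in zip(trajectory[:-1], trajectory[1:]):
        step = np.array(nxt) - np.array(pos)
        a = int(np.argmin(np.linalg.norm(ACTIONS - step, axis=1)))
        for g, goal in enumerate(goals):
            p = action_likelihoods(np.array(pos, dtype=float),
                                   np.array(goal, dtype=float), beta)[a]
            log_post[g] += np.log(p)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# Example: an agent moving up and to the right is most likely
# heading toward the goal at (5, 5).
goals = [(5, 5), (-5, 5), (0, -5)]
trajectory = [(0, 0), (1, 0), (1, 1), (2, 1), (3, 1), (3, 2)]
print(infer_goals(trajectory, goals))  # highest mass on goal (5, 5)

The same likelihood model that generates rational behavior is reused for inference, which is what lets this family of models predict human goal and relationship judgments; richer versions would plan in a continuous physical world with occlusion and multiple interacting agents, as the abstract describes.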
