Vision Sciences Society Annual Meeting Abstract | August 2012
Neural Representation of Human-Object Interactions
Author Affiliations
  • Christopher Baldassano
    Computer Science Department, Stanford University
  • Diane M. Beck
    Psychology Department and Beckman Institute, University of Illinois Urbana-Champaign
  • Li Fei-Fei
    Computer Science Department, Stanford University
Journal of Vision August 2012, Vol.12, 1111. doi:https://doi.org/10.1167/12.9.1111
Abstract

Identifying the relationships between objects in a scene is a fundamental goal in scene understanding. It is known that two interacting objects are perceptually grouped (Green & Hummel, 2006), and that interacting objects evoke greater activity in the lateral occipital complex (LOC) compared to noninteracting objects (Kim & Biederman, 2011). However, critical questions remain about the neural representations of interacting objects. Do object-sensitive regions such as LOC actually encode interactions between objects? How do scene-sensitive regions such as the parahippocampal place area (PPA) respond to simple two-object interactions?

In our fMRI experiment, subjects performed a one-back task while viewing three types of images: a human or an object in isolation, a human and an object overlapping without a meaningful interaction, and a human interacting with an object in a familiar way. We then used multivariate pattern analysis (MVPA) to determine which regions are sensitive to the meaningful interactions.

We have obtained preliminary results from an experiment investigating two types of human-object interactions: riding a horse and playing a guitar. For each subject, we trained a classifier to discriminate between the responses to the object pairs person+horse and person+guitar, and then tested the classifier on a held-out run. We found qualitatively different results in object-sensitive and scene-sensitive areas. In LOC, both interacting and noninteracting object pairs could be decoded with similar accuracy. In PPA, however, decoding was better for interacting than for noninteracting objects, even though both conditions contained the same human/object pairs. These results suggest that PPA's sensitivity to scenes extends to a simple interaction of a single human and object, whereas LOC can represent object pairs regardless of whether the objects are interacting in a meaningful way or not.
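
As a rough illustration of the run-wise decoding analysis described above, the sketch below trains a linear classifier on voxel patterns from all runs but one and tests it on the held-out run, repeating over runs. It assumes scikit-learn and already-extracted ROI patterns; the synthetic data, array shapes, and choice of a linear SVM are illustrative assumptions, not the authors' actual pipeline.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical design: 8 runs, 20 trials per run, 500 voxels in the ROI.
n_runs, trials_per_run, n_voxels = 8, 20, 500
X = rng.standard_normal((n_runs * trials_per_run, n_voxels))  # trial-by-voxel patterns (synthetic)
y = np.tile([0, 1], n_runs * trials_per_run // 2)             # 0 = person+horse, 1 = person+guitar
runs = np.repeat(np.arange(n_runs), trials_per_run)           # run label for each trial

# Leave-one-run-out cross-validation: train on all runs but one,
# test on the held-out run, and average accuracy across folds.
clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"Mean decoding accuracy: {scores.mean():.2f}")

In a real analysis, X would hold ROI-masked response estimates for each trial or block, and decoding accuracy would be compared between the interacting and noninteracting conditions within each region of interest (LOC, PPA).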

Meeting abstract presented at VSS 2012
