September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2019
How are spatial relations among object parts represented? Evidence from a shape recall experiment
Author Affiliations & Notes
  • Thitaporn Chaisilprungraung
    Cognitive Science Department, Johns Hopkins University
  • Gillian Miller
    Cognitive Science Department, Johns Hopkins University
  • Michael McCloskey
    Cognitive Science Department, Johns Hopkins University
Journal of Vision September 2019, Vol.19, 30b. doi:https://doi.org/10.1167/19.10.30b
      © ARVO (1962-2015); The Authors (2016-present)
Abstract

Successful recognition of and interaction with objects requires the ability to perceive and represent how object parts are spatially related to one another (e.g., a teapot’s handle is attached to its body at a location opposite the spout). Despite an abundance of research on object cognition, little is understood about how the brain represents spatial relations among object parts. The types of information required for representing how two object parts are related (i.e., the relative locations and orientations at which the parts are connected) are the same as those required for representing how an entire object is related to its environment (e.g., the location and orientation of a pen on a table). We investigated the representation of relations among object parts by extending a theoretical framework developed to explain how locations and orientations are represented for whole objects (e.g., McCloskey, 2009). We analyzed the patterns of errors participants made when recalling the arrangement of parts in artificial objects. Each object consisted of a large part and a small part that could be joined in different ways to create multiple part configurations (Fig. 1). On each trial, participants viewed a target object at three different orientations and then attempted to reproduce the arrangement of parts within the object (Fig. 2). We observed a striking pattern of co-occurrence between certain types of orientation and location errors. In particular, when participants reflected the location of the smaller part across the elongation axis of the larger part, they also tended to reflect its orientation across the same axis (Fig. 3). This error pattern is readily explained by a theoretical framework that assumes the locations and orientations of parts are represented in a unified manner. On this basis, we propose a new model of object shape representation, adapted from the framework for representing the locations and orientations of whole objects.
