Abstract
Interacting with objects is a complex task that requires us to mentally represent spatial configurations in our environment. It is well established that we encode objects for action in both egocentric and allocentric reference frames, i.e., relative to our own body or to other objects, respectively. Known factors that determine the influence of allocentric information include scene configuration (e.g., spatial object clusters) and task relevance. While the former concerns aspects of the spatial layout, the latter concerns higher-level factors. In this study, we investigated whether objects that are clustered not spatially but semantically also influence the extent to which humans rely on allocentric information. To this end, we conducted a memory-guided reaching task in virtual reality. We placed six objects from two different semantic clusters on a table. Participants encoded the objects while freely exploring the scene. After a brief mask and a delay, the scene was shown again (test scene) for a short duration with one object missing. Participants were asked to reach from memory to the location of the missing object (reaching target) on an empty table. In the test scene, two objects belonging either to the same semantic cluster as the reaching target (congruent) or to a different one (incongruent) were shifted horizontally. In the baseline condition, no object shift occurred. The results show that reaching endpoints deviated in the direction of the object shifts. More importantly, these errors were larger when semantically congruent rather than incongruent objects were shifted. We argue that humans integrate higher-level information when interacting with objects. Semantic clustering could be an efficient mechanism for representing objects for action. Further experiments should investigate different levels of semantics and similarities between objects.
Meeting abstract presented at VSS 2018
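The abstract does not specify how the deviation of reaching endpoints toward the shifted objects was quantified. Below is a minimal, purely illustrative sketch of one common way such an effect could be expressed, namely as an "allocentric weight" (baseline-corrected horizontal reach error divided by the shift magnitude). All variable names, values, and the 50 mm shift size are assumptions for demonstration, not the authors' data or analysis pipeline.

```python
import numpy as np

# Hypothetical per-condition horizontal reach errors (endpoint minus true
# target location, in mm) for one participant; positive values indicate a
# deviation in the direction of the object shift. Values are invented.
errors = {
    "baseline":    np.array([ 2.0, -1.5,  3.0, -2.0,  1.0]),
    "congruent":   np.array([12.0, 15.5,  9.0, 14.0, 11.5]),
    "incongruent": np.array([ 5.0,  7.5,  4.0,  6.0,  5.5]),
}

shift_mm = 50.0  # assumed horizontal object shift in the test scene (mm)

for condition, err in errors.items():
    # Correct for any baseline bias, then normalise by the shift magnitude
    # to obtain a unitless allocentric weight (0 = no influence, 1 = full shift).
    weight = (err.mean() - errors["baseline"].mean()) / shift_mm
    print(f"{condition:>11s}: mean error {err.mean():5.1f} mm, "
          f"allocentric weight {weight:4.2f}")
```

Under this sketch, a larger allocentric weight in the congruent than in the incongruent condition would correspond to the pattern of results described in the abstract.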