September 2018, Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Semantics determine the influence of allocentric information in memory-guided reaching
Author Affiliations
  • Harun Karimpur
    Experimental Psychology, Justus Liebig University Giessen
  • Katja Fiehler
    Experimental Psychology, Justus Liebig University Giessen
Journal of Vision, September 2018, Vol. 18(10), 67.
Interacting with objects is a complex task that requires us to mentally represent spatial configurations in our environment. It is well established that we encode objects for action in both egocentric and allocentric reference frames, i.e., relative to our own body or to other objects, respectively. Known factors that determine the influence of allocentric information are contextual factors such as scene configuration (e.g., spatial object clusters) and task relevance. While the former concerns aspects of the spatial layout, the latter concerns higher-level factors. In this study, we investigated whether objects that are clustered not spatially but semantically also influence the extent to which humans rely on allocentric information. To this end, we conducted a memory-guided reaching task in virtual reality. We placed six objects from two different semantic clusters on a table. Participants encoded the objects while freely exploring the scene. After a brief mask and a delay, the scene was shown again (test scene) for a short duration with one object missing. Participants were then asked to reach from memory to the location of the missing object (the reaching target) on an empty table. In the test scene, two objects belonging either to the same semantic cluster as the reaching target (congruent) or to a different one (incongruent) were shifted horizontally. In the baseline condition, no object shift occurred. The results show that reaching endpoints deviated in the direction of the object shifts; more importantly, these errors were larger when semantically congruent rather than incongruent objects were shifted. We argue that humans integrate higher-level information when interacting with objects. Semantic clustering could be an efficient mechanism for representing objects for action. Further experiments should investigate different levels of semantics and similarity between objects.

Meeting abstract presented at VSS 2018
