Vision Sciences Society Annual Meeting Abstract  |   October 2020
Open Access
Semantic and syntactic anchor object information interact to make visual search in immersive scenes efficient
Author Affiliations & Notes
  • Jason Helbing
    Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
  • Dejan Draschkow
    Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford
  • Melissa L.-H. Võ
    Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
  • Footnotes
    Acknowledgements  This work was supported by SFB/TRR 135 project C7 to Melissa L.-H. Võ.
Journal of Vision October 2020, Vol. 20, 573. https://doi.org/10.1167/jov.20.11.573
Jason Helbing, Dejan Draschkow, Melissa L.-H. Võ; Semantic and syntactic anchor object information interact to make visual search in immersive scenes efficient. Journal of Vision 2020;20(11):573. https://doi.org/10.1167/jov.20.11.573.
Abstract

Visual search in naturalistic scenes is highly efficient. One crucial reason for this is attentional guidance by our knowledge of the regularities that govern those scenes, their "scene grammar". Our study investigated the hierarchical organization of this scene grammar, focusing on "anchor objects", which we hypothesize are essential building blocks of scenes that predict the locations of other objects (e.g., the sink predicts the soap on top of it, the shower predicts the shampoo inside it). In a virtual reality eye-tracking study, 24 participants searched for targets in navigable, immersive scene environments that were manipulated with respect to both the presence of anchor objects and their syntactic composition. We found that concealing the semantic identity of anchors (by replacing them with grey cuboids of matching dimensions) slowed the localization of targets in syntactically consistent scenes (i.e., scenes with objects placed in expected locations). This was apparent in the time to first target fixation, the number of fixations, and the scanpath length of search trials. Furthermore, our motion-tracking data show that participants moved more extensively in consistent scenes without intact anchors, suggesting a greater need for costly body movements in the absence of anchor context. In scenes with inconsistent syntax (i.e., object arrangements that violate expectations), where search was much slower overall, manipulating anchors had the opposite effect: replacing anchors with cuboids speeded search and decreased movement, indicating that anchors lost their guiding function and became useless clutter in such inconsistent scenes. This shows that semantic and syntactic anchor object information are vital components of a scene's grammar that interact to efficiently guide both eye and body movements when we search for objects, bringing us a step closer to uncovering the hierarchical nature of scene priors and their role in efficient real-world search.
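For concreteness, the scanpath-length measure reported above is conventionally the summed Euclidean distance between consecutive fixation positions. A minimal sketch of that computation follows (this is not the authors' analysis code, and the fixation coordinates are hypothetical):

    import math

    def scanpath_length(fixations):
        # Sum of Euclidean distances between consecutive fixation
        # positions; in an immersive (VR) setting these would be
        # 3D gaze points in scene coordinates.
        return sum(math.dist(a, b) for a, b in zip(fixations, fixations[1:]))

    # Hypothetical fixations (x, y, z) in metres for one search trial:
    trial = [(0.0, 1.6, 0.0), (0.4, 1.5, 0.2), (0.9, 1.4, 0.5)]
    print(round(scanpath_length(trial), 2))  # 1.05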
