September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract | September 2021
Scene grammar guidance affects both visual search and incidental object memory
Author Affiliations & Notes
  • Julia Beitner
    Department of Psychology, Scene Grammar Lab, Goethe University Frankfurt, Germany
  • Jason Helbing
    Department of Psychology, Scene Grammar Lab, Goethe University Frankfurt, Germany
  • Dejan Draschkow
    Department of Psychiatry, Brain & Cognition Laboratory, University of Oxford, UK
  • Melissa Le-Hoa Vo
    Department of Psychology, Scene Grammar Lab, Goethe University Frankfurt, Germany
  • Footnotes
    Acknowledgements: This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – project number 222641018 – SFB/TRR 135, sub-project C7 to MLV, and by the Main-Campus-doctus scholarship of the Stiftung Polytechnische Gesellschaft Frankfurt a. M. to JB.
Journal of Vision September 2021, Vol.21, 2150. doi:https://doi.org/10.1167/jov.21.9.2150
Abstract

Previously acquired semantic and syntactic knowledge about scenes – so-called scene grammar – guides visual search and supports incidental encoding of objects. However, it is still unclear how scene grammar shapes our interactions with the environment and influences the resulting representations during natural behavior. To investigate this question, participants performed a repeated visual search task in a 3D virtual environment. They successively searched for ten out of twenty possible target objects in ten realistic scenes. Crucially, half of the scenes were inverted to impede access to scene grammar guidance. Upright and inverted scenes were randomly interleaved. After searching, participants engaged in a surprise old/new object recognition task to assess incidental object memory. Initial results show that, although participants searched descriptively longer in inverted scenes, learning did not differ between conditions. Using eye-tracking, we found that search initiation time was unaffected by scene inversion, but time to first target fixation and decision time were longer during search through inverted scenes. Importantly, time to first target fixation was affected by incidental gaze duration on the target object, but decision time was not. In the subsequent recognition task, we replicated previous findings from 2D studies: target objects were remembered substantially better than distractors. We found no main effect of search condition, but an interaction: targets searched for in the upright condition were remembered better than targets from the inverted condition. Conversely, distractors were remembered better when they appeared in an inverted than in an upright scene. Moreover, decision time and incidental gaze durations on objects during search – but not search time – predicted memory performance in both conditions. Our findings demonstrate that, during natural behavior, scene grammar interacts with task relevance to guide search but results in a trade-off by affecting incidentally emerging object memories.
