August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Investigating the effects of a virtual reality vs. screen-based testing setup on incidental memory after visual search through scenes
Author Affiliations & Notes
  • Julia Beitner
    Scene Grammar Lab, Goethe University Frankfurt
  • Jason Helbing
    Scene Grammar Lab, Goethe University Frankfurt
  • Erwan J. David
    Scene Grammar Lab, Goethe University Frankfurt
  • Melissa L.-H. Vo
    Scene Grammar Lab, Goethe University Frankfurt
  • Footnotes
    Acknowledgements  This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – project number 222641018 – SFB/TRR 135, sub-project C7 to MLV., and by the Main-Campus-doctus scholarship of the Stiftung Polytechnische Gesellschaft Frankfurt a. M. to JB.
Journal of Vision August 2023, Vol.23, 5126. doi:https://doi.org/10.1167/jov.23.9.5126

      Julia Beitner, Jason Helbing, Erwan J. David, Melissa L.-H. Vo; Investigating the effects of a virtual reality vs. screen-based testing setup on incidental memory after visual search through scenes. Journal of Vision 2023;23(9):5126. https://doi.org/10.1167/jov.23.9.5126.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Many experiments investigating visual search are performed on a computer screen under highly controlled conditions. However, it is unclear how well the search mechanisms investigated there generalize to the real world. Here, we tested whether the formation of incidental object memories during visual search follows similar patterns in virtual reality (VR) and on a computer screen. We present two studies that were identical in the administered tasks but differed in the testing setup. In both experiments, participants searched for 10 out of 20 objects in indoor scenes, either under full illumination or with visual input constrained to a controller-contingent 8-degree window (flashlight condition). After the search task, participants’ incidental memory was probed with two surprise tasks: an object recognition task and a location memory task (scene rebuilding). Critically, the first experiment took place in an immersive virtual environment, while the second displayed the same interactive 3D environment on a computer screen. In VR, participants relied more on memory when searching through scenes with a flashlight: search times decreased more strongly over trials, and non-targets were recognized better than those from fully illuminated scenes. Both results indicate stronger memorization of objects when only limited visual input was available. Additionally, participants relocated target and non-target objects from the flashlight condition just as accurately as those from illuminated scenes, despite never having seen the scenes in full view, indicating the formation of a similarly holistic scene representation in both conditions. In contrast, when the tasks were administered on a computer screen, preliminary analyses suggest no recognition benefit in the flashlight condition and a detrimental effect on location memory. Our results indicate that visual search on a computer screen elicits weaker scene memory than search in VR. Sensorimotor signals engaged during the search process might play a role in strengthening scene memory.
