August 2014
Volume 14, Issue 10
Vision Sciences Society Annual Meeting Abstract  |   August 2014
Facilitation of visual search from object-to-scene binding in an immersive virtual environment
Author Affiliations
  • Chia-Ling Li
    The Institute of Neuroscience, University of Texas at Austin
  • M Pilar Aivar
    Department of Psychology, Universidad Autónoma de Madrid
  • Dmitry M Kit
    Department of Computer Science, University of Bath
  • Matthew H Tong
    Center for Perceptual Systems, University of Texas at Austin
  • Mary M Hayhoe
    Center for Perceptual Systems, University of Texas at Austin
Journal of Vision August 2014, Vol.14, 709. doi:https://doi.org/10.1167/14.10.709

      Chia-Ling Li, M Pilar Aivar, Dmitry M Kit, Matthew H Tong, Mary M Hayhoe; Facilitation of visual search from object-to-scene binding in an immersive virtual environment. Journal of Vision 2014;14(10):709. https://doi.org/10.1167/14.10.709.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

In a familiar environment, memory representations may reduce the attentional demands of gathering visual information to guide behavior. Despite earlier evidence for sparse memory representations of complex scenes, more recent work has revealed more extensive representations (Castelhano & Henderson, 2007; Hollingworth, 2009). Experiments using 2D images have shown that visual search is facilitated by a scene preview, whether or not future search targets are present, suggesting that both memory for context and object-to-scene binding facilitate visual search in 2D images. However, the memory representations formed during ongoing daily behavior in the real world arise from a very different exposure history than in typical experiments. In this study we examined visual search in an immersive virtual environment to determine (1) whether pre-exposure to the scene context and targets plays a role in visual search, and (2) which aspects of the context benefit search. The virtual environment was composed of two rooms; one room was explored for 1 minute before the search trials, while the other was not. Early search targets were geometric objects, while later trials used realistic apartment objects that had been present in the environment previously. Pre-exposure to one of the rooms facilitated search for geometric objects (by approximately 25%), but only when the targets were present during exploration and only during early searches. Because the apartment objects were continuously visible, with opportunities for incidental fixations, further experience in the environment led to a small search facilitation for those objects located near previous targets. However, there was no facilitation for more distant objects, despite their having been present in the environment for over 5 minutes of immersive experience.
Together, the results suggest that prior exposure to immersive 3D environments confers a small advantage in subsequent search, but only when future search targets are also present, and that the primary determinant of search time is context-relevant experience.

Meeting abstract presented at VSS 2014
