October 2020 | Volume 20, Issue 11 | Open Access
Vision Sciences Society Annual Meeting Abstract
Modeling visual search in naturalistic virtual reality environments
Author Affiliations
  • Angela Radulescu
    Facebook Reality Labs
    Princeton University Psychology
  • Bas van Opheusden
    Facebook Reality Labs
    Princeton University Computer Science
  • Fred Callaway
    Princeton University Psychology
  • Thomas Griffiths
    Princeton University Psychology
    Princeton University Computer Science
  • James Hillis
    Facebook Reality Labs
Journal of Vision October 2020, Vol. 20, 1401. doi: https://doi.org/10.1167/jov.20.11.1401
Abstract

Visual search is a ubiquitous human behavior and a canonical example of selectively sampling sensory information to attain a goal. Previous research has studied optimality in visual search with artificial laboratory tasks (Najemnik & Geisler, 2005; Yang et al., 2016). To understand how people search in naturalistic environments, we conducted a study of visual search in virtual reality. Participants (N=21) viewed scenes generated with the Unity game engine through a head-mounted display equipped with an eye tracker. On each of 300 trials, participants were shown a target object and teleported into a cluttered virtual room, where they searched for the target from a fixed viewpoint. They had 8 seconds to identify the target object among 60-100 distractors. Participants found the target on 76% of trials, with a median response time on successful trials of 2.89 s (IQR: 1.99-4.44 s). To understand which features drive people's search, we annotated gaze samples with semantic scene information such as the identity, shape, color, and texture of the object at the center of gaze. Concretely, we used each object's asset (3D mesh and texture) to compute low-dimensional shape and color representations of that object. We found that people's gaze is primarily directed to task-relevant objects (i.e., targets or distractors), and that the distractors people look at are close to the target in representational space. Furthermore, this distance decreased over time, suggesting that representational similarity to the target guides eye movements. We discuss these results in the context of a meta-level Markov Decision Process model (Callaway et al., 2018), which frames visual search as optimal information sampling under computational constraints.
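The abstract does not detail how the shape and color representations or the gaze-to-target distances were computed. The sketch below shows one minimal, illustrative way such an analysis could be set up; the feature choices (mean texture color, principal-axis extents of the mesh), the Euclidean distance metric, and all function names are assumptions made here for concreteness, not the authors' actual pipeline.

```python
# Illustrative sketch only: feature definitions and names are assumptions,
# not the analysis pipeline used in the study.
import numpy as np

def color_feature(texture_rgb):
    """Low-dimensional color representation: mean RGB of the object's texture.
    texture_rgb: (H, W, 3) array with values in [0, 1]."""
    return texture_rgb.reshape(-1, 3).mean(axis=0)

def shape_feature(vertices, k=3):
    """Low-dimensional shape representation: extents of the mesh along its
    top-k principal axes (a crude proxy for object shape).
    vertices: (N, 3) array of mesh vertex positions."""
    centered = vertices - vertices.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # principal axes
    projected = centered @ vt[:k].T
    return projected.max(axis=0) - projected.min(axis=0)

def object_embedding(texture_rgb, vertices):
    """Concatenate color and shape features into one representation."""
    return np.concatenate([color_feature(texture_rgb), shape_feature(vertices)])

def gaze_target_distances(fixated_embeddings, target_embedding):
    """Euclidean distance between each fixated object and the target,
    in order of fixation within a trial."""
    fixated = np.asarray(fixated_embeddings)
    return np.linalg.norm(fixated - target_embedding, axis=1)

def distance_trend(distances):
    """Slope of a least-squares line through distance vs. fixation index;
    a negative slope means fixated objects get closer to the target over time."""
    t = np.arange(len(distances))
    slope, _ = np.polyfit(t, distances, deg=1)
    return slope

# Toy usage with random placeholder assets (purely illustrative):
rng = np.random.default_rng(0)
target = object_embedding(rng.random((8, 8, 3)), rng.standard_normal((200, 3)))
fixations = [object_embedding(rng.random((8, 8, 3)), rng.standard_normal((200, 3)))
             for _ in range(10)]
print(distance_trend(gaze_target_distances(fixations, target)))
```

Under these assumptions, a negative slope from distance_trend on real gaze data would correspond to the reported pattern of fixated distractors becoming more target-like over the course of a trial.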
