September 2015
Volume 15, Issue 12
Vision Sciences Society Annual Meeting Abstract  |   September 2015
When is stereopsis useful in visual search?
Author Affiliations
  • Emilie Josephs
    Visual Attention Lab, Brigham and Women's Hospital
  • Matthew Cain
    Visual Attention Lab, Brigham and Women's Hospital; US Army Natick Soldier Research, Development & Engineering Center
  • Barbara Hidalgo-Sotelo
    University of Texas, Dept of Communication Sciences and Disorders; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
  • Gregory Cook
    Hewlett-Packard Labs
  • Nelson Chang
    Hewlett-Packard Labs
  • Krista Ehinger
    Visual Attention Lab, Brigham and Women's Hospital
  • Aude Oliva
    Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
  • Jeremy Wolfe
    Visual Attention Lab, Brigham and Women's Hospital; Departments of Ophthalmology & Radiology, Harvard Medical School
Journal of Vision September 2015, Vol.15, 1361. doi:https://doi.org/10.1167/15.12.1361
Emilie Josephs, Matthew Cain, Barbara Hidalgo-Sotelo, Gregory Cook, Nelson Chang, Krista Ehinger, Aude Oliva, Jeremy Wolfe; When is stereopsis useful in visual search? Journal of Vision 2015;15(12):1361. https://doi.org/10.1167/15.12.1361.

Abstract

Does stereoscopic information improve visual search? We know that attention can be guided efficiently by stereopsis, for example, to the near target among far distractors (Nakayama and Silverman, 1986), but, when searching through a real scene, does it help if that scene is presented stereoscopically? Certainly, scenes appear more vividly real in 3D. However, we present three experiments in which the addition of stereo did little to alter scene search. In Experiment 1, 12 observers searched twice for 10 target objects in each of 18 photographic scenes (a ‘repeated’ search task). Note that observers were not cued to search for a target at a specific depth; stereopsis simply added to the vividness of the scene. Reaction time and accuracy did not differ significantly between stereoscopic and monoscopic conditions. In Experiment 2, using similar images, we found no differences between 2D and 3D conditions in time to first fixation on the target or in average saccade length. However, gaze durations were significantly shorter in 3D scenes. Since gaze durations are typically taken to measure processing time, this may suggest that it was easier to disambiguate surfaces and/or objects in 3D, although this advantage did not translate into faster search times. In a final experiment, we reduced the stimulus array to a set of colored rendered objects distributed in depth against a plain background. In this task, the addition of stereo produced shorter reaction times, even though stereo information was not predictive of target location. Our real scenes may have contained such a rich array of cues to target location that the addition of stereo contributed little further information. It may be in more difficult searches, including more challenging real-world tasks, that stereopsis will be an asset.

Meeting abstract presented at VSS 2015
