Bochao Zou, Yue Liu, Jeremy Wolfe; Effects of prior knowledge on visual search in depth. Journal of Vision 2017;17(10):84. doi: 10.1167/17.10.84.
© ARVO (1962-2015); The Authors (2016-present)
In visual search, if observers know in advance that the target has a certain feature, they can restrict search to the items that share this feature. Only a limited set of features (e.g., color and motion) supports this guidance. What about knowledge of target depth, signaled by binocular disparity? Can observers restrict search to disparity-defined depth planes? Previous studies have yielded somewhat different results for reaction time and search efficiency. We hypothesized that more stable guidance by disparity might be seen if observers were given more time to resolve depth in the stereoscopic displays. In Experiment 1, observers searched for a 2 among 5s. To provide that time before search began, figure-eight placeholders were used: they were presented in depth from the start of each trial and changed into 2s or 5s after a 1500 ms delay. Three conditions were compared: one depth plane, or two depth planes with the target cued to be in the front or the back plane. Searches in both two-depth conditions were significantly more efficient than in the one-depth condition (34 vs. 49 ms/item). Perfect guidance would predict two-depth RT × set size slopes half as steep as the one-depth slopes; the observed slopes were not that shallow. In Experiment 2, placeholders were presented in a cloud of 12 possible depths. Observers were cued toward the front, back, or middle of the depth cloud, or no cue was given. The cue was probabilistic: a front cue, for example, indicated that the target was most likely in the closest depth plane, with lower probabilities at farther planes. With these more heterogeneous displays, no benefit of prior knowledge of depth was observed. A control experiment confirmed that guidance by color works with our paradigm. Depth guidance works, but may be limited to near versus far.
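The "half-slope" prediction can be illustrated with a minimal sketch. Assuming a standard linear model, RT = intercept + slope × set size, perfectly restricting search to one of two depth planes halves the effective set size and therefore halves the measured slope. The set sizes and intercept below are hypothetical; only the 49 ms/item one-depth slope comes from the abstract:

```python
# Illustrative sketch (not the authors' analysis): linear RT model with
# guidance modeled as a reduction in effective set size.

def predicted_rt(set_size, intercept_ms=500.0, slope_ms_per_item=49.0,
                 guided_fraction=1.0):
    """RT predicted when search covers guided_fraction of the items."""
    return intercept_ms + slope_ms_per_item * set_size * guided_fraction

set_sizes = (6, 12, 18)  # hypothetical set sizes
one_depth = [predicted_rt(n) for n in set_sizes]
perfect_two_depth = [predicted_rt(n, guided_fraction=0.5) for n in set_sizes]

# Slope recovered from each condition's predictions (ms/item)
slope_one = (one_depth[-1] - one_depth[0]) / (set_sizes[-1] - set_sizes[0])
slope_two = (perfect_two_depth[-1] - perfect_two_depth[0]) / (set_sizes[-1] - set_sizes[0])
print(slope_one, slope_two)  # 49.0 vs. 24.5 under perfect guidance
```

The observed two-depth slope of 34 ms/item falls between the one-depth slope (49 ms/item) and the perfect-guidance prediction (24.5 ms/item), consistent with partial rather than perfect guidance by disparity.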
Meeting abstract presented at VSS 2017