Volume 16, Issue 12 | Open Access
Vision Sciences Society Annual Meeting Abstract | September 2016
Precise Guided Search
Author Affiliations
  • Matthew Cain
    U.S. Army Natick Soldier RD&E Center
  • Jeremy Wolfe
    Brigham & Women's Hospital
Journal of Vision September 2016, Vol.16, 1284. doi:https://doi.org/10.1167/16.12.1284
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Models of visual search usually content themselves with predicting average results (e.g., the mean and, perhaps, the distribution of response times). When they are precise in predictions about single trials, it is under highly reduced conditions (e.g., displays of a few items, only one or two of which are salient). Using a hybrid foraging search paradigm, we attempted to predict the specific targets that would be selected in a complex display over the course of many seconds. In hybrid foraging, observers search for multiple instances of several types of target. Twenty-two participants performed three search tasks, always foraging for two target types: Feature search (e.g., blue and green squares among yellow and red squares), Conjunction search (e.g., green circles and blue squares among green squares and blue circles), and Spatial Configuration search (e.g., "p" and "d" among "b" and "q"). Each display held 80-140 moving items, 20-30 of which were targets. Observers were instructed to maximize the rate at which they clicked on targets. Targets disappeared when clicked, and observers could switch to a new display at any time. A version of Guided Search was developed that could predict the identity and location of the next item to be selected. It had two basic rules: if another target of the type just collected lay within 400 pixels, the closest of those was selected; if not, the closest target of another type was selected. With these rules, we could predict the specific next item found for 60% of target collections in the Feature search, 46% in the Conjunction search, and 39% in the Spatial Configuration search. This outperforms a baseline model that simply selects the closest target of any type, suggesting that search follows highly stereotyped behavior, especially when feature guidance is high.
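The two selection rules lend themselves to a compact statement in code. The following Python sketch is our own illustration, not the authors' implementation; the target representation, function name, and parameter names are assumptions made for clarity.

    import math

    def next_target(targets, last_type, last_pos, radius=400):
        # targets: list of (x, y, type) tuples for the remaining targets
        # last_type / last_pos: type and (x, y) of the target just collected
        # radius: same-type preference threshold in pixels (400 per the abstract)
        def dist(t):
            return math.hypot(t[0] - last_pos[0], t[1] - last_pos[1])

        # Rule 1: if a target of the just-collected type lies within the
        # radius, select the closest such target.
        same = [t for t in targets if t[2] == last_type]
        if same:
            nearest_same = min(same, key=dist)
            if dist(nearest_same) <= radius:
                return nearest_same

        # Rule 2: otherwise, select the closest target of another type.
        # (Returning None when no other-type targets remain is our
        # assumption; the abstract does not specify this case.)
        other = [t for t in targets if t[2] != last_type]
        return min(other, key=dist) if other else None

The baseline model mentioned in the abstract corresponds to skipping Rule 1 entirely and always selecting the closest remaining target of any type.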

Meeting abstract presented at VSS 2016
