September 2024, Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Uh oh: Does 40 years of visual search research actually tell us about visual search in the world?
Author Affiliations & Notes
  • Jeremy Wolfe
    Brigham and Women's Hospital
    Harvard Medical School
  • Eduard Objio, Jr
    Boston Latin School, Boston, MA
  • Hula Khalifa
    UMass, Amherst, MA
  • Ava Mitra
    Brigham and Women's Hospital
  • Footnotes
    Acknowledgements  NEI EY017001, NSF 2146617, NCI CA207490
Journal of Vision September 2024, Vol.24, 457. doi:https://doi.org/10.1167/jov.24.10.457
Abstract

We have decades of visual search data from experiments in which observers look for targets among distractors. Typically, observers are tested in blocks of several hundred trials, and conclusions about underlying mechanisms are inferred from Reaction Time × Set Size functions and errors. The introductions to the resulting papers then declare that we are studying how you find your keys or the toaster in the real world. However, in the real world, you never search for your keys 100 times in a row. You search for keys, then a coat, then the doorknob, etc. Perhaps the rules gleaned from blocks of trials apply only in the lab, while different rules govern realistic mixtures of tasks? We used four feature search tasks (easy color, moderate lighting direction, moderate cube orientation, hard vernier offset). Observers completed 400 trials either in blocks of 100 trials or with all four tasks randomly intermixed. Mixing tasks did NOT destroy the standard patterns of RT or accuracy data. We obtained a similar pattern of results when all four tasks had the same green O target but different distractors, ranging from easy (blue O) to harder (color × shape conjunction) to very hard (circle among vertical and horizontal ovals). Performance was similar under mixed and blocked conditions. Again, this is good news. The results suggest that rules established in the lab should apply in more realistic, mixed conditions. However, at least one important theoretical puzzle emerges. Guided Search and other models have long proposed that target-absent "quitting times" are established by an adaptive mechanism operating over multiple trials. Our experiments showed no evidence for adaptive learning in the mixed condition. Nevertheless, target-absent responses were not impaired. Observers did not need to learn when to quit. The implication is that standard accounts of search termination may be incorrect.
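To make the theoretical puzzle concrete: the class of adaptive quitting-time mechanism the abstract refers to is often described as a trial-by-trial staircase on a target-absent threshold. The sketch below is a minimal illustration of that general idea, not the authors' model or any published parameterization; the function name, step sizes, and starting value are all hypothetical.

```python
# Illustrative sketch of an adaptive target-absent quitting rule of the
# kind Guided Search-style models propose. All names and parameter
# values here are assumptions for illustration, not fitted values.

def update_quit_threshold(threshold, trial_outcome,
                          step_up=40.0, step_down=10.0, floor=100.0):
    """Adjust the target-absent quitting time (ms) after each trial.

    A miss (target present, reported absent) suggests the observer quit
    too early, so the threshold rises; any other outcome lets the
    threshold drift down, speeding future target-absent responses.
    """
    if trial_outcome == "miss":
        threshold += step_up
    else:
        threshold = max(floor, threshold - step_down)
    return threshold

# Over a block of one task, such a rule converges toward a threshold
# that trades speed against miss errors.
t = 500.0
for outcome in ["correct", "correct", "miss", "correct"]:
    t = update_quit_threshold(t, outcome)
```

The puzzle the abstract raises is that this kind of multi-trial learning should be disrupted when four tasks are randomly intermixed, yet target-absent performance was not impaired, which is what casts doubt on the standard account.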
