December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Search templates for real-world objects in natural scenes
Author Affiliations
  • John Emmanuel Kiat
    University of California-Davis
  • Brett Bahle
    University of California-Davis
  • Steven John Luck
    University of California-Davis
Journal of Vision December 2022, Vol.22, 4477. https://doi.org/10.1167/jov.22.14.4477
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Decades of research highlight the importance of bottom-up, stimulus-driven guidance (e.g., saliency) and top-down, user-driven factors (e.g., target surface features, prior knowledge of likely target locations) in visual search. Although these factors can be manipulated experimentally in simple abstract search arrays, it has been difficult to empirically derive unique predictions for distinct top-down factors in real-world scenes. As a first step toward addressing this issue, we developed two new approaches based on convolutional neural network models. The first extends the class activation mapping (CAM) approach (Zhou, Khosla, Lapedriza, Oliva & Torralba, 2015) to compute “Average-CAM” maps, which capture variability in the diagnostic value of scene regions for predicting scene category membership. The second approach, which we term “Patch-Match”, maps the relatedness of scene regions to category-level target activations (e.g., any clock) and visual search target template activations (e.g., a specific clock). We demonstrate the value of these approaches by showing that these maps explain unique variance in human gaze patterns (beyond that explained by saliency models) during visual search for real-world targets embedded in natural scenes.
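The class activation mapping step underlying the “Average-CAM” maps can be sketched as follows. This is a minimal numpy illustration of the standard CAM computation from Zhou et al. (a classifier-weighted sum of the final convolutional feature maps), not the authors' implementation; the `average_cam` helper and its per-scene min-max normalization are assumptions about how per-scene maps might be aggregated into a category-level average.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Standard CAM (Zhou et al.): CAM_c(x, y) = sum_k w_k^c * f_k(x, y).

    feature_maps:  (K, H, W) final-layer convolutional feature maps f_k
    class_weights: (K,) classifier weights w_k^c for one class c
    Returns an (H, W) activation map for class c.
    """
    return np.tensordot(class_weights, feature_maps, axes=1)

def average_cam(feature_maps_per_scene, weights_per_scene):
    """Hypothetical 'Average-CAM' sketch: normalize each scene's CAM
    to [0, 1], then average across scenes of one category, so that
    regions with consistently high diagnostic value stand out."""
    cams = []
    for fm, w in zip(feature_maps_per_scene, weights_per_scene):
        cam = class_activation_map(fm, w)
        cam = (cam - cam.min()) / (np.ptp(cam) + 1e-8)  # min-max normalize
        cams.append(cam)
    return np.mean(cams, axis=0)
```

In this form, regions of the averaged map with higher values are those whose features are more diagnostic of the scene category across exemplars, which is the quantity the abstract relates to human gaze patterns.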
