August 2023, Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Feature integration in visual search for real-world scenes
Author Affiliations & Notes
  • Gaeun Son
    University of Toronto
  • Michael L. Mack
  • Dirk B. Walther
  • Footnotes
Acknowledgements: Natural Sciences and Engineering Research Council (NSERC) Discovery Grants (RGPIN-2017-06753 to MLM and RGPIN-2020-04097 to DBW) and Canada Foundation for Innovation and Ontario Research Fund (36601 to MLM).
Journal of Vision, August 2023, Vol. 23(9), 4832. https://doi.org/10.1167/jov.23.9.4832
© ARVO (1962-2015); The Authors (2016-present)

Abstract

Feature integration theory (FIT) provides a framework for parsing the visual input into basic features and for binding those features into integral percepts. The idea of feature parsing and integration remains central to mechanistic explanations of visual search: search performance becomes less efficient when a search target is defined by multiple features rather than by a single feature. However, this framework has so far been tested empirically only with basic, localized features (e.g., color, orientation). In our study, we extend FIT to ecologically more realistic scene features. We conducted a series of visual search experiments in which participants searched for a target scene among distractor scenes. The target and distractor scenes were drawn from a two-dimensional parametric feature space of indoor scenes. Specifically, we manipulated complex high-level features, such as indoor lighting and scene layout, using generative adversarial networks (GANs). Along each axis of the space, we generated target and distractor scenes such that targets could be discriminated from distractors either by a single feature or by the conjunction of two features. When participants performed this task across different search array set sizes, search was less efficient, in both RT and accuracy, when the target was defined by a feature conjunction. The effect persisted after luminance and RMS contrast were ruled out as potential confounds. These results extend the FIT framework to ecologically realistic scene features.
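For illustration, here is a minimal sketch of how search efficiency is commonly quantified as the slope of RT over set size, together with the RMS-contrast measure relevant to the confound check. This is a sketch under assumptions, not the authors' pipeline: the per-trial data, the scaling of pixel values to [0, 1], and the helper names (`search_slope`, `rms_contrast`) are hypothetical.

```python
import numpy as np

def search_slope(set_sizes, rts):
    """Least-squares slope of RT (ms) over set size, in ms per item.
    Flatter slopes indicate more efficient (more parallel) search."""
    slope, _intercept = np.polyfit(set_sizes, rts, deg=1)
    return slope

def rms_contrast(img):
    """RMS contrast of a grayscale image with pixel values in [0, 1]:
    the standard deviation of the pixel luminances."""
    return np.asarray(img, dtype=float).std()

# Hypothetical correct-trial RTs (ms) at three set sizes.
set_sizes = np.array([4, 4, 8, 8, 12, 12])
rt_single = np.array([620, 640, 655, 670, 690, 700])  # single-feature target
rt_conj   = np.array([650, 680, 790, 820, 930, 960])  # conjunction target

print(f"single-feature slope: {search_slope(set_sizes, rt_single):.1f} ms/item")
print(f"conjunction slope:    {search_slope(set_sizes, rt_conj):.1f} ms/item")
```

A markedly steeper RT-by-set-size slope in the conjunction condition is the classic signature of inefficient search under FIT; matching mean luminance and RMS contrast across target and distractor scenes addresses the low-level confounds named in the abstract.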
