Vision Sciences Society Annual Meeting Abstract | August 2014
The dominance of color in guiding visual search: Evidence from mismatch effects
Author Affiliations
  • Robert Alexander
    Department of Psychology, Stony Brook University
  • Gregory Zelinsky
    Department of Psychology, Stony Brook University
Journal of Vision August 2014, Vol.14, 218. doi:https://doi.org/10.1167/14.10.218
Abstract

We quantified the features used in a visual search task in terms of mismatch costs—the decrement in search performance caused by a difference between target appearance at preview and target appearance in the search display. In nine experiments using real-world objects arranged into an eight-item search display, we tested the role of hue, shape, and orientation mismatch (25 levels per feature dimension) on search guidance and target verification. Results showed that shape and orientation guide search only when color is not available (i.e., with grayscale target cues and search displays). Even in those grayscale cases, shape and orientation mismatch effects emerged only later during search (after fixation on a distractor). However, color mismatch effects appeared early (on the first fixated object) and were large in magnitude relative to orientation and shape effects, demonstrating that hue dominates search from the very first eye movements. Effects of mismatch on target verification followed a similar pattern, suggesting that similar comparison processes underlie guidance and verification. Additionally, mismatch effects were larger when participants were uncertain about which feature dimension would change and when the mismatching dimension was valid on a larger proportion of trials, demonstrating that participants weight features based on their expectancies (although this does not appear to happen on a trial-by-trial basis, contrary to the conclusions of studies using simple stimuli). We also found that when more than one feature dimension was mismatched on a given trial, the mismatch cost was superadditive, contradicting most models of search, which assume a linear summation across feature dimensions. Lastly, the fact that guidance was largely unaffected by orientation and shape mismatch suggests that surprisingly little information from these features is extracted from the target preview and used in search, perhaps reflecting the use of a categorical template and features retrieved from visual long-term memory.
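As a sketch of how these quantities might be formalized (the notation below is ours; the abstract does not define symbols), let Perf denote a guidance measure such as the proportion of trials on which the target is the first object fixated. The mismatch cost for a feature dimension d, the linear-summation prediction of most search models, and the superadditive pattern reported here can then be written as

\[ C_{d} = \mathrm{Perf}_{\text{match}} - \mathrm{Perf}_{\text{mismatch}(d)} \]
\[ \text{linear summation: } C_{d_1,d_2} \approx C_{d_1} + C_{d_2} \]
\[ \text{superadditivity: } C_{d_1,d_2} > C_{d_1} + C_{d_2} \]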

Meeting abstract presented at VSS 2014
