September 2018
Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2018
How do differences across visual features combine to determine visual search efficiency in parallel search?
Author Affiliations
  • Alejandro Lleras
    Psychology Department, LAS, University of Illinois
  • Jing Xu
    Psychology Department, LAS, University of Illinois
  • Simona Buetti
    Psychology Department, LAS, University of Illinois
Journal of Vision September 2018, Vol.18, 282. doi:https://doi.org/10.1167/18.10.282
Abstract

In Wang, Buetti, and Lleras (2017), we proposed a technique to predict reaction times in efficient search tasks with heterogeneous displays (displays containing various combinations of different types of objects), based on the performance characteristics observed when participants complete simpler search tasks with homogeneous displays (displays where all non-target elements are identical). Here we explored a related question: in the context of an efficient search task, how do separate visual features (e.g., shape, color) combine to create the signal that differentiates a target from distractors? To address this question, Experiment 1 evaluated search efficiency for a target that differed from distractors only along color (cyan target; blue, orange, and yellow distractors); Experiment 2 evaluated efficiency when the target differed from distractors only along shape (half-disc target; circle, triangle, and diamond distractors). Finally, in three subsequent experiments, we created a target by combining the two previous target features (cyan half-disc) and distractors that were combinations of the different color and shape features. The question is: can we predict the logarithmic search efficiency in the mixed-feature conditions based on the log efficiency observed in Experiments 1 and 2 (single-feature conditions)? We compared predictions from a categorical feature guidance model and from our contrast-signal model of parallel search (where contrast signals from orthogonal feature dimensions combine in Euclidean fashion). The results showed an improvement in search efficiency that was much larger than predicted by either model, suggesting that the contrast between target and distractors increases in an over-additive fashion when multiple visual features are combined.
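The Euclidean combination referenced above can be made concrete with a minimal sketch. This is a hypothetical illustration, not the authors' published model code: it assumes the logarithmic search slope D (ms per log unit of set size) is inversely proportional to the target-distractor contrast signal, so that contrasts from orthogonal dimensions combining in Euclidean fashion yields a predicted mixed-feature slope of 1/sqrt((1/D_color)^2 + (1/D_shape)^2). The slope values are made up for illustration.

```python
import math

def euclidean_combined_slope(d_color, d_shape):
    """Predicted log slope for a target differing on both features,
    assuming (hypothetically) that contrast is proportional to 1/D and
    that contrasts on orthogonal dimensions combine in Euclidean fashion.
    """
    c_color = 1.0 / d_color          # assumed contrast from color alone
    c_shape = 1.0 / d_shape          # assumed contrast from shape alone
    c_combined = math.hypot(c_color, c_shape)  # Euclidean combination
    return 1.0 / c_combined

# Illustrative (made-up) single-feature slopes, in ms per log unit:
print(euclidean_combined_slope(30.0, 40.0))  # smaller than either slope
```

Because the combined contrast is larger than either single-feature contrast, the predicted mixed-feature slope is always shallower than both single-feature slopes; the abstract's finding is that the observed slopes were shallower still than this prediction.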

Meeting abstract presented at VSS 2018
