Vision Sciences Society Annual Meeting Abstract  |   September 2024
Volume 24, Issue 10  |  Open Access
Evaluating the contributions of top-down and bottom-up processing on eye movements during parallel visual search
Author Affiliations & Notes
  • Howard Jia He Tan
    University of Illinois Urbana-Champaign
  • Zoe (Jing) Xu
    University of Illinois Urbana-Champaign
  • Alejandro Lleras
    University of Illinois Urbana-Champaign
  • Simona Buetti
    University of Illinois Urbana-Champaign
  • Footnotes
    Acknowledgements  This material is based upon work supported by the National Science Foundation under Grant No. BCS-1921735 to SB.
Journal of Vision, September 2024, Vol. 24, 1330. https://doi.org/10.1167/jov.24.10.1330
Abstract

Most models of visual attention refer to the interplay between top-down and bottom-up processing, emphasizing factors like attentional templates and salience. Prior modeling work in the field has often used complex, real-world scenes that do not generalize well to the simpler, more controlled stimuli used in most lab studies of visual search. In the current study, we used an efficient visual search paradigm in a pseudo-realistic environment with well-controlled search stimuli, allowing a simultaneous evaluation of the impact of top-down and bottom-up factors on eye movement patterns. Our stimuli varied along the color dimension to manipulate target-distractor similarity. In addition, our displays contained a singleton stimulus of higher salience than the target and the other distractors. Eye gaze was tracked with an EyeLink 1000 Plus, and each fixation was categorized by the item it landed closest to. We manipulated task instructions between groups: one group received a free-viewing instruction, serving as a baseline for how bottom-up contrast guides eye movements, while a second group received a top-down search instruction and was asked to find the red target in the scene. Experiment 1 assessed the impact of the set size of the less-salient distractors under both instructions. Experiment 2 examined target-distractor similarity effects for the less-salient distractors. We computed the proportion of fixations directed at the target (top-down) versus the high-salience singleton (bottom-up) across all fixations made, and tracked how this selectivity evolved over the course of a trial in every condition tested. In the free-viewing conditions, first-fixation selectivity for the high-salience item was 14% higher than for the stimulus that served as the target in the search task, whereas it was 30% lower in the goal-driven group. Selectivity towards bottom-up factors decreased over the course of a trial in the free-viewing conditions in both experiments, and decreased as target-distractor similarity increased under both instructions.
