June 2007
Volume 7, Issue 9
Vision Sciences Society Annual Meeting Abstract
Visual Search with selective tuning
Author Affiliations
  • Evgueni Simine
    Dept. of Computer Science & Engineering, and Centre for Vision Research, York University, Toronto, Canada
  • Antonio J. Rodriguez-Sanchez
    Dept. of Computer Science & Engineering, and Centre for Vision Research, York University, Toronto, Canada
  • John K. Tsotsos
    Dept. of Computer Science & Engineering, and Centre for Vision Research, York University, Toronto, Canada
Journal of Vision June 2007, Vol.7, 951. doi:https://doi.org/10.1167/7.9.951
Evgueni Simine, Antonio J. Rodriguez-Sanchez, John K. Tsotsos; Visual Search with selective tuning. Journal of Vision 2007;7(9):951. doi: https://doi.org/10.1167/7.9.951.

© ARVO (1962-2015); The Authors (2016-present)

Visual attention involves much more than the selection of the next fixation for the eyes or for a camera system. Selective Tuning (ST) (Tsotsos et al., 1995, 2005) provides a framework for modeling this broader view of attention. Here we show how ST performs in covert visual search tasks by presenting the model with the same visual displays shown to human subjects and qualitatively comparing the model's performance to human performance. Two implementations of ST have been developed: the Motion Model (MM), which recognizes and attends to motion patterns, and the Object Recognition Model (ORM), which recognizes and attends to simple objects formed by conjunctions of features. Two experiments were carried out in the motion domain. A simple odd-man-out search for a CCW-rotating octagon among identical CW-rotating octagons produced a linear increase in search time with set size. The second experiment was modeled on one described by Thornton and Gilden (2001) and produced qualitatively similar results. The validity of the ORM was first tested by successfully duplicating the results of Nagy and Sanchez (1990). Our second experiment evaluated the model's search slopes along the feature-conjunction-inefficient continuum (Wolfe, 1998). For conjunction search we followed Bichot and Schall (1999) (find a red circle among green circles and red crosses); for feature search the ORM looked for a circle among crosses; and for inefficient search we simulated Egeth and Dagenbach (1991). Inefficient search produced a slope of 0.49, conjunction search a slope of 0.36, and feature search was practically flat (slope of 0.00). These results show the same kind of continuum of search slopes as described by Wolfe (1998). We conclude that ST provides a valid explanatory mechanism for human covert visual search performance, an explanation going far beyond conventional saliency-map-based explanations.
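Search efficiency in experiments like these is summarized by the slope of a least-squares fit of search time against display set size: near-zero slopes indicate efficient "pop-out" (feature) search, while steeper slopes indicate less efficient search. A minimal sketch of that computation, using entirely hypothetical reaction times and units (not data from the model or the cited studies):

```python
def search_slope(set_sizes, search_times):
    """Least-squares slope of search time vs. display set size (time per item)."""
    n = len(set_sizes)
    mean_x = sum(set_sizes) / n
    mean_y = sum(search_times) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(set_sizes, search_times))
    var = sum((x - mean_x) ** 2 for x in set_sizes)
    return cov / var

# Hypothetical search times (ms) at set sizes 4, 8, 12, 16 items:
sizes = [4, 8, 12, 16]
feature_rt = [500, 500, 501, 500]       # essentially flat: efficient search
conjunction_rt = [520, 620, 720, 820]   # intermediate slope
inefficient_rt = [550, 750, 950, 1150]  # steepest slope

for label, rts in [("feature", feature_rt),
                   ("conjunction", conjunction_rt),
                   ("inefficient", inefficient_rt)]:
    print(label, search_slope(sizes, rts))
```

The three slopes ordered inefficient > conjunction > feature mirror the continuum the abstract reports (0.49 > 0.36 > 0.00 in the model's own units).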

