Vision Sciences Society Annual Meeting Abstract  |   September 2005
Modeling feature sharing between object detection and top-down attention
Author Affiliations
  • Dirk Walther
    Computation and Neural Systems Program, California Institute of Technology, Pasadena, CA, USA
  • Thomas Serre
    Center for Biological and Computational Learning, Brain and Cognitive Sciences, and McGovern Institute, Massachusetts Institute of Technology, Cambridge, MA, USA
  • Tomaso Poggio
    Center for Biological and Computational Learning, Brain and Cognitive Sciences, and McGovern Institute, Massachusetts Institute of Technology, Cambridge, MA, USA
  • Christof Koch
    Computation and Neural Systems Program, California Institute of Technology, Pasadena, CA, USA
Journal of Vision September 2005, Vol.5, 1041. doi:https://doi.org/10.1167/5.8.1041
Abstract

Visual search and other attentionally demanding processes are often guided from the top down when a specific task is given (e.g., Wolfe et al., Vision Research 44, 2004). With the simplified stimuli commonly used in visual search experiments, e.g., red and horizontal bars, the potential features to be biased are obvious (by design). In a natural setting with real-world objects, the selection of these features is not obvious, and there is some debate about which features can be used for top-down guidance and how a specific task maps onto them (Wolfe and Horowitz, Nat. Rev. Neurosci. 2004).

Learning to detect objects provides the visual system with an effective set of features suitable for the detection task, and with a mapping from these features to an abstract representation of the object.

We suggest a model in which V4-type features are shared between object detection and top-down attention. As the model familiarizes itself with objects, i.e., learns to detect them, it acquires a feature representation that solves the detection task. We propose that, via cortical feedback connections, top-down processes can re-use these same features to bias attention toward locations with a higher probability of containing the target object. We present a model architecture that allows for such processing, together with a computational implementation that performs visual search in natural scenes for a given object category, e.g., faces. We compare the performance of our model to pure bottom-up selection as well as to top-down attention using simple features such as hue.
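The core idea above — learn features for detection, then re-use the same features as top-down weights on feature maps — can be sketched in a few lines. This is a minimal toy illustration, not the authors' implementation: the hypothetical `feature_maps` function below substitutes crude oriented-gradient energy for the model's V4-type features, and the weight-learning rule (mean response on object examples) is an assumed stand-in for the detection-stage learning.

```python
import numpy as np

def feature_maps(image, n_orientations=4):
    """Crude stand-in for V4-type features: gradient energy at a few
    orientations (the actual model uses richer hierarchical features)."""
    gy, gx = np.gradient(image.astype(float))
    maps = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        # Energy of the gradient projected onto this orientation.
        maps.append((gx * np.cos(theta) + gy * np.sin(theta)) ** 2)
    return np.stack(maps)          # shape: (n_features, H, W)

def learn_feature_weights(object_patches):
    """Toy proxy for detection-stage learning: weight each feature by
    its mean response on example patches of the target object."""
    responses = np.stack([feature_maps(p).mean(axis=(1, 2))
                          for p in object_patches])
    w = responses.mean(axis=0)
    return w / w.sum()             # normalize to a weighting over features

def top_down_map(scene, weights):
    """Re-use the detection features for attention: bias toward
    locations where object-diagnostic features respond strongly."""
    fmaps = feature_maps(scene)
    return np.tensordot(weights, fmaps, axes=1)  # weighted sum over features

# Toy usage with random data in place of real object patches and scenes.
rng = np.random.default_rng(0)
patches = [rng.random((16, 16)) for _ in range(5)]
weights = learn_feature_weights(patches)
attention = top_down_map(rng.random((64, 64)), weights)
print(attention.shape)   # one top-down bias map per scene location
```

Pure bottom-up selection corresponds to uniform `weights`; the comparison in the abstract replaces the learned weighting with either uniform weights or weights over a single simple feature such as hue.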

Walther, D., Serre, T., Poggio, T., & Koch, C. (2005). Modeling feature sharing between object detection and top-down attention [Abstract]. Journal of Vision, 5(8):1041, 1041a, http://journalofvision.org/5/8/1041/, doi:10.1167/5.8.1041.