August 2012, Volume 12, Issue 9
Vision Sciences Society Annual Meeting Abstract
Integrating Bottom-up and Top-down Visual Attention for Object Segmentation
Author Affiliations
  • Zhengping Ji
    T-5, Los Alamos National Laboratory; CNLS, Los Alamos National Laboratory
  • Steven P. Brumby
    ISR-2, Los Alamos National Laboratory
  • Garrett Kenyon
    P-21, Los Alamos National Laboratory
  • Luis M. A. Bettencourt
    T-5, Los Alamos National Laboratory; CNLS, Los Alamos National Laboratory
Journal of Vision August 2012, Vol. 12, 926. https://doi.org/10.1167/12.9.926
Citation: Zhengping Ji, Steven P. Brumby, Garrett Kenyon, Luis M. A. Bettencourt; Integrating Bottom-up and Top-down Visual Attention for Object Segmentation. Journal of Vision 2012;12(9):926. https://doi.org/10.1167/12.9.926.

Abstract

Visual attention is a ubiquitous mechanism in sensory perception, especially in humans and other primates. While several computational models of attention derive saliency maps from bottom-up sensory processing, shifts of attention are also thought to be generated top-down, through feature-dependent weighting of the various feature dictionaries. Here we propose an object segmentation framework that integrates bottom-up saliency maps with top-down attention derived from a sparse, hierarchical model of visual cortex called PANN (Petascale Artificial Neural Network). Like the earlier HMAX and Neocognitron models, PANN is composed of coupled simple-cell and complex-cell layers. While simple cells build representations of features over successively larger receptive fields, complex-cell layers associate the outputs of simple cells within the same layer to create representations that are increasingly viewpoint invariant. These representations are learned through a combination of unsupervised learning, which allows the system to acquire statistically common features from its visual experience, and supervised learning, which associates labels with specific clusters of features at the top layer. Through the hierarchical network, the spatial support of a classifier's decision can be traced back to the input, creating an informative map of which low-level image features were associated with the positive object foreground. This object relevance map thus serves as a measure of top-down attention. We apply this object-based attention mechanism to detect specific objects in aerial video from a low-flying aircraft, and then to segment the whole objects within the same video by fusing in bottom-up saliency maps. In this integrated approach, the bottom-up saliency maps help top-down attention segment objects intact, while top-down attention helps the saliency maps filter out irrelevant background.
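
To make the coupled simple/complex stages concrete, here is a minimal Python sketch of one such stage in the spirit of HMAX; the rectified-convolution simple cells, the max-pooling complex cells, and all function names are illustrative assumptions, not the authors' PANN implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def simple_layer(image, filters):
    """Simple cells: half-rectified responses of a grayscale image to a
    dictionary of learned feature filters (one response map per filter)."""
    return np.stack([np.maximum(convolve2d(image, f, mode='same'), 0.0)
                     for f in filters])

def complex_layer(responses, pool=4):
    """Complex cells: max-pool each simple-cell map over pool x pool blocks,
    trading spatial precision for tolerance to position and viewpoint."""
    k, h, w = responses.shape
    h2, w2 = h // pool, w // pool
    r = responses[:, :h2 * pool, :w2 * pool]   # crop so blocks tile evenly
    return r.reshape(k, h2, pool, w2, pool).max(axis=(2, 4))

# Stacking such stages yields features over successively larger receptive
# fields, as in the hierarchy described above:
#   c1 = complex_layer(simple_layer(image, filters_1))
#   (feed the c1 channels into the next simple layer, and so on)
```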
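
A minimal sketch of the bottom-up/top-down fusion might look as follows, assuming a toy contrast-based saliency map, a class-weighted trace as the object relevance map (in the spirit of class activation mapping), and a multiplicative fusion rule; none of these specifics are taken from the abstract.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def bottom_up_saliency(frame):
    """Toy bottom-up saliency: local intensity contrast (a full model would
    use center-surround differences across color/orientation channels)."""
    gray = frame.mean(axis=2)
    saliency = np.abs(gray - uniform_filter(gray, size=15))
    return saliency / (saliency.max() + 1e-8)

def top_down_relevance(feature_maps, class_weights):
    """Trace the classifier's decision back to image locations: weight each
    top-layer feature map (K, H, W) by its contribution to the object class
    and sum, keeping only positive evidence."""
    relevance = np.tensordot(class_weights, feature_maps, axes=(0, 0))
    relevance = np.maximum(relevance, 0.0)
    return relevance / (relevance.max() + 1e-8)

def fuse_and_segment(saliency, relevance, threshold=0.5):
    """Multiplicative fusion: saliency keeps the object intact while
    relevance suppresses salient but irrelevant background."""
    fused = saliency * relevance
    return fused > threshold * fused.max()

# Hypothetical usage on one aerial video frame:
#   mask = fuse_and_segment(bottom_up_saliency(frame),
#                           top_down_relevance(feats, w))
# where frame is (H, W, 3), feats is (K, H, W) at image resolution
# (upsampled if needed), and w is (K,) object-class weights.
```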

Meeting abstract presented at VSS 2012
