Vision Sciences Society Annual Meeting Abstract | September 2011
Task-specific saliency from sparse, hierarchical models of visual cortex compared to eye-tracking data for object detection in natural video sequences
Author Affiliations
  • Michael Ham
    Los Alamos National Laboratory, Los Alamos, New Mexico, USA
  • Steven Brumby
    Los Alamos National Laboratory, Los Alamos, New Mexico, USA
  • Zhengping Ji
    Los Alamos National Laboratory, Los Alamos, New Mexico, USA
  • Karissa Sanbonmatsu
    Los Alamos National Laboratory, Los Alamos, New Mexico, USA
  • Garrett Kenyon
    Los Alamos National Laboratory, Los Alamos, New Mexico, USA
  • John George
    Los Alamos National Laboratory, Los Alamos, New Mexico, USA
  • Luis Bettencourt
    Los Alamos National Laboratory, Los Alamos, New Mexico, USA
    Santa Fe Institute, Santa Fe, New Mexico, USA
Journal of Vision September 2011, Vol. 11, 1281. https://doi.org/10.1167/11.11.1281
Abstract

HMAX/Neocognitron models of visual cortex use learned, hierarchical sparse representations to describe visual scenes, and have achieved state-of-the-art accuracy on whole-image labeling tasks with natural still imagery (Serre et al., PNAS, 2007). Generalizations of these models (e.g., Brumby et al., AIPR, 2009) allow localized detection of objects within a scene. Itti and Koch (Nature Reviews Neuroscience, 2001) proposed non-task-specific models of visual attention (“saliency maps”), which have been compared to human and animal eye-tracking data. Chikkerur et al. (Vision Research, 2010) used eye-tracking to compare human fixations during object detection in still images (finding pedestrians and vehicles in urban scenes) with the predictions of an HMAX model extended with a model of attention in parietal cortex.
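
As context for the saliency-map comparisons below, the following sketch computes a simple, non-task-specific saliency map from intensity center-surround differences across a Gaussian pyramid, in the spirit of Itti et al. (PAMI, 1998). It is a minimal illustration only: the pyramid depth, scale pairs, and function names are assumptions, not the published model's parameters (which also include color and orientation channels).

```python
# Minimal, intensity-only sketch of a bottom-up saliency map built from
# center-surround differences across a Gaussian pyramid (Itti-Koch style).
# Pyramid depth and scale pairs are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(image, levels=5):
    """Return a list of progressively blurred, downsampled images."""
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        blurred = gaussian_filter(pyramid[-1], sigma=1.0)
        pyramid.append(blurred[::2, ::2])  # downsample by 2
    return pyramid

def intensity_saliency(image):
    """Sum of |center - surround| differences, upsampled to full size."""
    pyr = gaussian_pyramid(image, levels=5)
    h, w = image.shape
    saliency = np.zeros((h, w))
    for c in (1, 2):            # "center" (finer) scales
        for delta in (2, 3):    # offset to coarser "surround" scales
            s = c + delta
            if s >= len(pyr):
                continue
            center = zoom(pyr[c], (h / pyr[c].shape[0], w / pyr[c].shape[1]), order=1)
            surround = zoom(pyr[s], (h / pyr[s].shape[0], w / pyr[s].shape[1]), order=1)
            saliency += np.abs(center - surround)
    return saliency / (saliency.max() + 1e-12)  # normalize to [0, 1]
```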

Here, we describe new work comparing human eye-tracking data for object detection in natural video sequences to task-specific saliency maps generated by a sparse, hierarchical model of the ventral pathway of visual cortex: PANN (Petascale Artificial Neural Network), our high-performance implementation of an HMAX/Neocognitron-type model. We explore specific object detection tasks, including vehicle detection in aerial video from a low-flying aircraft, for which we collect eye-tracking data from several human subjects. We train our model on hand-marked examples from a few frames, and compare its output to eye-tracking data over an independent set of test video sequences. We also compare our task-specific saliency maps to non-task-specific saliency maps (Itti et al., PAMI, 1998; Harel et al., NIPS, 2006). We conclude that activity in model IT cortex, projected back to a spatial distribution, correlates well with the task-specific visual attention recorded by the eye tracker throughout the video sequences.
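
The abstract does not state which metric was used to quantify the correspondence between model saliency and eye-tracking data; one common choice is Normalized Scanpath Saliency (NSS), sketched below under that assumption. The function and variable names here are hypothetical, not the authors' evaluation code.

```python
# Sketch of Normalized Scanpath Saliency (NSS): the mean z-scored
# saliency value at fixated pixels. Scores above zero mean fixations
# land on above-average saliency. This metric is an assumed stand-in
# for whatever comparison the study actually used.
import numpy as np

def normalized_scanpath_saliency(saliency_map, fixations):
    """saliency_map: 2-D array; fixations: iterable of (row, col)."""
    z = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-12)
    return float(np.mean([z[r, c] for r, c in fixations]))

def sequence_nss(saliency_frames, fixations_per_frame):
    """Average NSS over a video: one saliency map and one fixation list
    per frame (hypothetical data layout)."""
    scores = [normalized_scanpath_saliency(s, f)
              for s, f in zip(saliency_frames, fixations_per_frame) if f]
    return float(np.mean(scores))
```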

Supported by Department of Energy LDRD funding.