August 2010
Volume 10, Issue 7
Vision Sciences Society Annual Meeting Abstract  |   August 2010
Application of a Bottom-Up Visual Surprise Model for Event Detection in Dynamic Natural Scenes
Author Affiliations
  • Randolph Voorhies
    Department of Computer Science, University of Southern California
  • Lior Elazary
    Department of Computer Science, University of Southern California
  • Laurent Itti
    Department of Computer Science, University of Southern California
    Neuroscience Graduate Program, University of Southern California
Journal of Vision August 2010, Vol.10, 215. doi:10.1167/10.7.215
Abstract

We present an application of a neuromorphic visual attention model to large-scale video surveillance and show that it outperforms a state-of-the-art method at the task of event detection. Our work extends Itti and Baldi's Surprise framework, described in "A Principled Approach to Detecting Surprising Events in Video" (CVPR 2005). The Surprise framework is a biologically plausible and validated model of primate visual attention that uses a Bayesian measure of information to detect unexpected changes in the responses of feature detectors modeled after those in the mammalian primary visual cortex. We extend this model to cover extremely large fields of view and present methods for processing and aggregating the resulting volume of visual data. Our system is tested on real-world data in which events involving both pedestrians and vehicles are staged in an outdoor environment and shot on a 16-megapixel camera at 3 frames per second. In these tests, we show that our system provides a greater-than-12.5% gain in an ROC AUC analysis over a reference OpenCV algorithm ("Foreground Object Detection from Videos Containing Complex Background," Li et al., 2003). Furthermore, our system is rigorously tested and compared against the same algorithm on artificially generated target events in which image noise and target size are independently controlled. In these tests, we show an approximately 27% improvement in noise invariance and an approximately 10% improvement in scale invariance over the comparison algorithm. These results underscore the importance of strong collaboration between the neuroscience and computer science communities in developing the next generation of vision algorithms.
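The surprise measure at the heart of Itti and Baldi's framework is the Kullback-Leibler divergence between a posterior and a prior belief over a feature detector's response: an observation is surprising to the extent that it forces the belief to change. A minimal sketch of that idea, assuming a Poisson-Gamma conjugate model as in the original formulation (the prior parameters and observations below are illustrative, not taken from the paper):

```python
import math

def digamma(x: float) -> float:
    """Digamma function via recurrence plus an asymptotic series (stdlib-only)."""
    r = 0.0
    while x < 6.0:          # shift argument up until the series is accurate
        r -= 1.0 / x
        x += 1.0
    inv = 1.0 / x
    inv2 = inv * inv
    return (r + math.log(x) - 0.5 * inv
            - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252)))

def kl_gamma(a1: float, b1: float, a2: float, b2: float) -> float:
    """KL( Gamma(a1, b1) || Gamma(a2, b2) ), shape/rate parameterization."""
    return ((a1 - a2) * digamma(a1)
            - math.lgamma(a1) + math.lgamma(a2)
            + a2 * (math.log(b1) - math.log(b2))
            + a1 * (b2 - b1) / b1)

def surprise(alpha: float, beta: float, x: int) -> float:
    """Bayesian surprise (in nats) for one Poisson-distributed observation x.

    Prior Gamma(alpha, beta) over the Poisson rate; the conjugate update gives
    posterior Gamma(alpha + x, beta + 1). Surprise = KL(posterior || prior).
    """
    return kl_gamma(alpha + x, beta + 1.0, alpha, beta)

# Prior belief: detector firing rate around 5 (mean = alpha / beta).
expected = surprise(5.0, 1.0, 5)     # observation close to the prior mean
unexpected = surprise(5.0, 1.0, 50)  # observation far from the prior mean
print(f"expected observation:   {expected:.3f} nats")
print(f"unexpected observation: {unexpected:.3f} nats")
```

An observation matching the prior yields a small surprise value, while an outlier yields a much larger one; in the full system this quantity is computed per feature channel and location, and decaying priors let the model adapt to sustained background motion.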

Voorhies, R., Elazary, L., & Itti, L. (2010). Application of a Bottom-Up Visual Surprise Model for Event Detection in Dynamic Natural Scenes [Abstract]. Journal of Vision, 10(7):215, 215a, http://www.journalofvision.org/content/10/7/215, doi:10.1167/10.7.215.
Footnotes
 DARPA CT2WS Project.