Vision Sciences Society Annual Meeting Abstract | August 2014
Ideal Observer Analysis of Fused Multispectral Imagery
Author Affiliations
  • Jennifer L. Bittner
    Air Force Research Laboratory
  • M. Trent Schill
    Air Force Research Laboratory
  • Leslie M. Blaha
    Air Force Research Laboratory
  • Joseph W. Houpt
    Ball Aerospace
Journal of Vision August 2014, Vol.14, 1359. doi:https://doi.org/10.1167/14.10.1359
Abstract

Components (i.e. features) of an image are highlighted differently when the image is enhanced with different spectral bands. For example, a thermal (long-wave infrared) image of a scene may show a 'glowing' human but provide little other detail, whereas a visible-light image may show the scene detail while the human is much less apparent. Image fusion aims to find the optimal balance of emphasis by producing a combined image that is more informative and better suited for perception, ultimately enhancing human visual performance (e.g. Toet et al., 2010). The current project uses ideal observer analysis (Geisler, 1989) to directly test the aims of image fusion. For a 1-of-8 identification task, we derived ideal performance and human processing efficiencies for a set of rotated Landolt C images captured with individual sensor cameras and combined using a series of 7 fusion algorithms (e.g. Laplacian, PCA, discrete wavelet transformation, averaging). Contrary to the assumption that image fusion always produces a more informative combined image, both the ideal observer and human efficiency results showed that the individual sensor imagery chosen can be just as influential as the fusion-enhanced images. Ideal observer results showed that the amount of information available (i.e. ideal performance) was influenced not by the fusion algorithm chosen but more often by the combination of individual sensors. Additionally, human efficiencies fell in similar ranges (~10-15%) for both individual spectral and fused imagery, with humans at times performing better with the images from individual sensors than with those that were fused. Because image fusion can be applied to a wide variety of image content, our current application of ideal observer analysis not only provides a thorough assessment of human performance with image fusion for simple letter-like features, but also establishes a framework for future evaluation with more complex stimuli and tasks.

Meeting abstract presented at VSS 2014
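
The logic of the analysis described above can be illustrated with a small, purely hypothetical sketch (this is not the authors' code or stimuli). The Python sketch below builds crude Landolt-C-like templates at eight gap orientations, shows pixel-wise averaging fusion (one of the simpler schemes named in the abstract), runs a maximum-likelihood ideal observer for the 1-of-8 identification task in white Gaussian noise, and computes efficiency as the squared ratio of human to ideal d'. The image size, noise level, sample d' values, and all function names are assumptions made only for illustration.

    import numpy as np

    # Minimal illustrative sketch (not the authors' code): stimuli, image size,
    # noise level, and d' values below are placeholder assumptions.

    rng = np.random.default_rng(0)

    def landolt_c(size=32, gap_angle_deg=0.0, gap_width_deg=45.0):
        """Crude Landolt-C-like ring with a gap at the given angle (illustrative only)."""
        y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
        r = np.hypot(x, y)
        theta = np.degrees(np.arctan2(y, x))
        ring = (r > size * 0.25) & (r < size * 0.40)
        gap = np.abs(((theta - gap_angle_deg + 180.0) % 360.0) - 180.0) < gap_width_deg / 2.0
        return (ring & ~gap).astype(float)

    def fuse_average(band_a, band_b):
        """Pixel-wise averaging fusion of two co-registered bands (one scheme named above)."""
        return 0.5 * (band_a + band_b)

    def ideal_observer_accuracy(templates, noise_sd, n_trials=5000):
        """Monte Carlo accuracy of the ideal 1-of-N identifier in white Gaussian noise.

        With known, equally likely templates, the maximum-likelihood decision
        picks the template closest (in Euclidean distance) to the noisy stimulus.
        """
        flat = np.stack([t.ravel() for t in templates])
        correct = 0
        for _ in range(n_trials):
            true = rng.integers(len(templates))
            stim = flat[true] + rng.normal(0.0, noise_sd, flat.shape[1])
            dists = np.sum((flat - stim) ** 2, axis=1)
            correct += int(np.argmin(dists) == true)
        return correct / n_trials

    def efficiency(human_dprime, ideal_dprime):
        """Processing efficiency as the squared ratio of human to ideal sensitivity."""
        return (human_dprime / ideal_dprime) ** 2

    # Eight gap orientations, 45 degrees apart, as stand-ins for the rotated Landolt Cs.
    templates = [landolt_c(gap_angle_deg=k * 45.0) for k in range(8)]
    print("ideal accuracy:", ideal_observer_accuracy(templates, noise_sd=4.0))
    print("efficiency for d'_human = 1.2, d'_ideal = 3.5:", efficiency(1.2, 3.5))

A sketch like this only shows the structure of the computation; in the study itself, efficiency is obtained by comparing measured human performance against the ideal bound derived for the actual single-band and fused images, which is where the reported ~10-15% efficiencies come from.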
