Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract | September 2016
DeepGaze II: A big step towards explaining all information in image-based saliency
Author Affiliations
  • Matthias Kümmerer
    Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen
  • Matthias Bethge
    Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen
Journal of Vision September 2016, Vol. 16, 330. https://doi.org/10.1167/16.12.330

Matthias Kümmerer, Matthias Bethge; DeepGaze II: A big step towards explaining all information in image-based saliency. Journal of Vision 2016;16(12):330. https://doi.org/10.1167/16.12.330.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

When free-viewing scenes, the first few fixations of human observers are driven in part by bottom-up attention. Over the last decade, various models have been proposed to explain these fixations. We recently standardized model comparison using an information-theoretic framework and showed that these models captured no more than one third of the explainable mutual information between image content and fixation locations, which might be partially due to the limited data available (Kuemmerer et al., PNAS, in press). We subsequently showed that this limitation can be tackled effectively with a transfer learning strategy. Our model "DeepGaze I" uses a neural network (AlexNet) that was originally trained for object recognition on the ImageNet dataset. It achieved a large improvement over the previous state of the art, explaining 56% of the explainable information (Kuemmerer et al., ICLR 2015). A new generation of object recognition models has since been developed, substantially outperforming AlexNet. The success of "DeepGaze I" and similar models suggests that features supporting good object recognition can be exploited for better saliency prediction, and hence that object recognition and fixation prediction performance are correlated. Here we test this hypothesis. Our new model "DeepGaze II" uses the VGG network to convert an image into a high-dimensional representation, which is then fed through a second, smaller network to yield a density prediction. The second network is pre-trained by maximum likelihood on the SALICON dataset and fine-tuned on the MIT1003 dataset. Remarkably, DeepGaze II explains 88% of the explainable information on held-out data, and has since achieved top performance on the MIT Saliency Benchmark. The problem of predicting where people look under free-viewing conditions could be solved very soon. That fixation prediction performance is so closely tied to object recognition informs theories of attentional selection in scene viewing.
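The abstract specifies the architecture only at a high level: fixed VGG features feeding a small trainable readout network that outputs a fixation density, trained by maximum likelihood. As a concrete illustration, the sketch below shows such a readout model in PyTorch. The use of torchvision's VGG-19, the choice to read out only the final feature map, the 1x1-convolution layer widths, and the one-fixation-per-image loss are all illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a DeepGaze II-style readout model.
# Assumptions (not from the abstract): torchvision's VGG-19, reading out
# only the final feature map, and the 1x1-conv layer widths are
# illustrative choices; the authors' exact configuration may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19

class DeepGazeIIStyle(nn.Module):
    def __init__(self):
        super().__init__()
        # Frozen VGG feature extractor: saliency is read out from fixed
        # object-recognition features (the transfer-learning step).
        self.features = vgg19(weights="IMAGENET1K_V1").features.eval()
        for p in self.features.parameters():
            p.requires_grad = False
        # Second, smaller network: 1x1 convolutions map the
        # high-dimensional VGG representation to a single channel.
        self.readout = nn.Sequential(
            nn.Conv2d(512, 16, 1), nn.ReLU(),
            nn.Conv2d(16, 32, 1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, image):
        feats = self.features(image)          # (B, 512, H/32, W/32)
        logits = self.readout(feats)          # (B, 1, H/32, W/32)
        logits = F.interpolate(logits, size=image.shape[-2:],
                               mode="bilinear", align_corners=False)
        # Softmax over all pixel locations turns the map into a proper
        # probability density over fixation positions for each image.
        b = logits.shape[0]
        log_density = F.log_softmax(logits.view(b, -1), dim=-1)
        return log_density.view_as(logits)

def fixation_nll(log_density, ys, xs):
    """Maximum-likelihood objective: negative mean log-density at the
    observed fixation location (y, x) of each image in the batch."""
    idx = torch.arange(log_density.shape[0])
    return -log_density[idx, 0, ys, xs].mean()

# Usage sketch: one training step on a batch of images with one
# recorded fixation per image (hypothetical data shapes).
model = DeepGazeIIStyle()
images = torch.randn(2, 3, 224, 224)
ys, xs = torch.tensor([100, 50]), torch.tensor([120, 60])
loss = fixation_nll(model(images), ys, xs)
loss.backward()
```

Under the information-theoretic framework the abstract refers to, such a model would be scored by its average log-likelihood gain over a baseline density, expressed as a fraction of the gain achieved by a gold-standard model of inter-observer consistency, which is how figures like "56%" and "88% of the explainable information" arise.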

Meeting abstract presented at VSS 2016
