September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | August 2017
Places: An Image Database for Deep Scene Understanding
Author Affiliations
  • Bolei Zhou
    MIT
  • Agata Lapedriza
    Universitat Oberta de Catalunya
  • Antonio Torralba
    MIT
  • Aude Oliva
    MIT
Journal of Vision, August 2017, Vol. 17, 296. https://doi.org/10.1167/17.10.296
Abstract

The rise of multi-million-item dataset initiatives has enabled machine learning algorithms to reach near-human performance at object and scene recognition. Here we describe the Places Database, a repository of 10 million pictures labeled with semantic categories and attributes, comprising a quasi-exhaustive list of the types of environments encountered in the world. Using state-of-the-art Convolutional Neural Networks (CNNs), we demonstrate classification performance on natural images collected in the wild with a smartphone, and show the image regions the model uses to identify the type of scene. Examining the representations learned by the units of these networks, we find that meaningful units encoding shapes, objects, and regions emerge as the diagnostic information for representing visual scenes. With its high coverage and high diversity of exemplars, Places offers an ecosystem of visual context to guide progress on currently intractable visual recognition problems. Such problems include determining the actions happening in a given environment, spotting objects or human behaviors inconsistent with a particular place, and predicting future events or the causes of events given a scene.

Meeting abstract presented at VSS 2017
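
The abstract describes scene classification with Places-trained CNNs only at a high level; the sketch below (not the authors' released code) illustrates what such classification looks like in practice with PyTorch. The checkpoint file name, its internal layout, the 365-category count, and the input photo path are assumptions based on the publicly released Places365 models, not details given in this abstract.

# Minimal sketch: classify a smartphone photo with a CNN trained on Places.
# Assumes a locally downloaded Places365 ResNet-18 checkpoint (hypothetical path).
import torch
import torchvision.models as models
from torchvision import transforms
from PIL import Image

# Standard preprocessing commonly used with Places-trained CNNs.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

NUM_SCENE_CLASSES = 365  # assumed: Places365 defines 365 scene categories

# Build a ResNet-18 with a 365-way output head and load Places-trained weights.
model = models.resnet18(num_classes=NUM_SCENE_CLASSES)
checkpoint = torch.load("resnet18_places365.pth.tar", map_location="cpu")
# Assumed checkpoint layout: a "state_dict" entry with DataParallel "module." prefixes.
state_dict = checkpoint.get("state_dict", checkpoint)
state_dict = {k.replace("module.", ""): v for k, v in state_dict.items()}
model.load_state_dict(state_dict)
model.eval()

# Classify a photo taken "in the wild" with a smartphone (hypothetical file).
img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1).squeeze(0)
top5 = torch.topk(probs, 5)
for p, idx in zip(top5.values.tolist(), top5.indices.tolist()):
    print(f"scene category {idx}: {p:.3f}")

A ResNet-18 is used here only because small Places365 checkpoints are commonly distributed for it; any classifier with a 365-way output head would be used the same way.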
