Vision Sciences Society Annual Meeting Abstract | August 2009
Where in the world? Human and computer geolocation of images
Author Affiliations
  • James Hays
    Computer Science Department, Carnegie Mellon University
  • Alexei Efros
    Computer Science Department, Carnegie Mellon University
Journal of Vision August 2009, Vol. 9, 969. https://doi.org/10.1167/9.8.969
Abstract

In this work we measure how accurately humans can localize arbitrary photographs on the Earth and contrast their accuracy against a baseline computational method.

Previous work has studied the placement of scenes into semantic categories (e.g. kitchen, bedroom, forest, etc.) by both humans and computers. With moderate numbers of categories, simple texture-based methods can group scenes almost as well as humans (Renninger 2004, Oliva 2005). The success of computational methods is not a result of any high-level understanding of scenes, but rather the ease with which these hand-defined categories can be separated by low-level features.
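Texture-based methods of this kind often describe an image with a texton histogram. As a rough illustration only (not the cited authors' exact pipeline; the filter bank, cluster count, and library choices below are our own placeholders), one can cluster multi-scale filter responses into "textons" with k-means and describe each image by a histogram of its nearest-texton assignments:

```python
# Illustrative texton-histogram feature: cluster filter responses into
# "textons", then histogram each image's per-pixel texton assignments.
import numpy as np
from scipy import ndimage
from sklearn.cluster import MiniBatchKMeans

def filter_responses(img):
    """Stack a small, illustrative filter bank: Gaussian blurs and
    first derivatives at a few scales (real texton banks are larger)."""
    feats = []
    for sigma in (1, 2, 4):
        feats.append(ndimage.gaussian_filter(img, sigma))
        feats.append(ndimage.gaussian_filter(img, sigma, order=(0, 1)))  # horizontal derivative
        feats.append(ndimage.gaussian_filter(img, sigma, order=(1, 0)))  # vertical derivative
    return np.stack(feats, axis=-1).reshape(-1, len(feats))  # (pixels, filters)

def learn_textons(train_images, k=64):
    """Cluster pooled filter responses; the cluster centers are the textons."""
    responses = np.vstack([filter_responses(im) for im in train_images])
    return MiniBatchKMeans(n_clusters=k, random_state=0).fit(responses)

def texton_histogram(img, kmeans):
    """Describe an image by its normalized histogram of texton assignments."""
    labels = kmeans.predict(filter_responses(img))
    hist = np.bincount(labels, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()  # L1-normalize so images of any size compare
```

Scenes can then be grouped or classified by comparing these histograms with a simple distance (e.g. chi-squared or L2), which is the sense in which low-level features separate hand-defined categories.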

In this study we examine human performance at organizing scenes according to geographic location on the Earth rather than by hand-defined semantic category. Participants are shown novel images and asked to pick the location on a globe where each photograph was taken. This task is difficult: many scenes are geographically ambiguous, while others require high-level scene understanding and knowledge of cultural or architectural trends across the Earth. On the other hand, photographs of landmarks are easy to geolocate for both humans and computers.

We compare and contrast human performance with a data-driven computational method that uses a database of 6.5 million geolocated photographs. For a novel photograph, the algorithm retrieves the most similar scenes in the database according to the scene gist descriptor, texton histogram, and other features. A voting scheme then produces a geolocation estimate from the known locations of the matching scenes.
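As a hedged sketch of this retrieve-and-vote idea (a simplification: the actual method combines several scene descriptors and a more careful aggregation, and the function name, neighbor count, and grid size here are illustrative placeholders), assume each database photograph already has a precomputed feature vector and a latitude/longitude:

```python
# Illustrative nearest-neighbor scene matching with geographic voting.
# db_feats: (N, D) feature vectors; db_latlons: (N, 2) degrees lat/lon.
import numpy as np

def geolocate(query_feat, db_feats, db_latlons, k=100, cell_deg=5.0):
    """Estimate (lat, lon) for a query by letting its k most similar
    scenes vote in coarse latitude/longitude cells."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)  # L2 in feature space
    nn = np.argsort(dists)[:k]                             # k nearest scene matches
    votes = {}
    for lat, lon in db_latlons[nn]:
        cell = (round(lat / cell_deg), round(lon / cell_deg))
        votes[cell] = votes.get(cell, 0) + 1
    best = max(votes, key=votes.get)                       # heaviest cell wins
    return best[0] * cell_deg, best[1] * cell_deg          # cell-center estimate
```

The key design point is that no geographic knowledge is modeled explicitly; the estimate emerges entirely from where visually similar photographs were taken.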

Image geolocation is one of the few high-level visual tasks in which computational methods are competitive with humans. While humans are superior at using high-level scene information (e.g. traffic direction, text language, tropical flora, etc.), our computational method has a geolocated visual memory larger than that of almost any human. We break down the performance of humans and computers by scene type and analyze the situations in which their performance diverges most.
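Comparing humans and computers this way requires a geographic error measure; a standard choice is the great-circle (haversine) distance between the estimated and true coordinates. A minimal sketch (the 200 km threshold in the usage note is illustrative, not a value taken from the study):

```python
import math

def km_error(est, truth, radius_km=6371.0):
    """Great-circle (haversine) distance in km between two (lat, lon)
    points given in degrees -- a standard geolocation error measure."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*est, *truth))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# Usage: fraction of test images localized within an illustrative 200 km.
# accuracy = sum(km_error(e, t) < 200 for e, t in pairs) / len(pairs)
```
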
Hays, J., & Efros, A. (2009). Where in the world? Human and computer geolocation of images [Abstract]. Journal of Vision, 9(8):969, 969a, http://journalofvision.org/9/8/969/, doi:10.1167/9.8.969.