September 2018
Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Large-scale identification of the visual features used for object recognition with ClickMe.ai
Author Affiliations
  • Drew Linsley
    Cognitive Linguistic & Psychological Sciences Department, Brown University
  • Dan Shiebler
    Cognitive Linguistic & Psychological Sciences Department, Brown University; Twitter Cortex
  • Sven Eberhardt
    Cognitive Linguistic & Psychological Sciences Department, Brown University; Amazon
  • Andreas Karagounis
    Cognitive Linguistic & Psychological Sciences Department, Brown University
  • Thomas Serre
    Cognitive Linguistic & Psychological Sciences Department, Brown University
Journal of Vision September 2018, Vol.18, 414. doi:https://doi.org/10.1167/18.10.414

      Drew Linsley, Dan Shiebler, Sven Eberhardt, Andreas Karagounis, Thomas Serre; Large-scale identification of the visual features used for object recognition with ClickMe.ai. Journal of Vision 2018;18(10):414. https://doi.org/10.1167/18.10.414.

Abstract

Identifying the visual features driving object recognition remains an experimental challenge. Classification images and related methods exploit the correlation between noise perturbations across thousands of stimulus repetitions and behavioral responses to identify image locations that strongly influence observers' decisions. These methods are powerful but inefficient, making them ill-suited for a large-scale exploration of the visual features used for object recognition. Here, we describe ClickMe.ai, a web-based experiment for the large-scale collection of feature-importance maps for object images. ClickMe.ai pairs human participants with computer partners to recognize images from a large dataset. The experiment consisted of rounds of gameplay in which participants used the mouse to reveal image locations to their computer partners. Participants were awarded points based on how quickly their computer partner recognized the target object, an incentive to select the features most diagnostic for visual recognition. We aggregated data over several months of gameplay, yielding nearly half a million feature-importance maps that are consistent across players. We validated the diagnosticity of the visual features revealed by ClickMe.ai with a rapid categorization experiment in which the proportion of visible features was systematically masked during object recognition. This demonstrated that the features identified by ClickMe.ai were sufficient for object recognition and more informative than regions identified as visually salient. Finally, we found that the image regions identified by ClickMe.ai are distinct from those used by a deep convolutional network (DCN), a leading machine vision architecture that is approaching human recognition accuracy. We further describe a method for cueing a DCN to the image regions identified by ClickMe.ai while it is trained to discriminate between natural object categories. DCNs trained in this way learned object representations that were significantly more similar to humans' and yielded better predictions of human decisions in visual psychophysics tasks.
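
To make the last point more concrete, below is a minimal, illustrative sketch of one way a DCN could be cued to human-derived importance maps during training: an auxiliary loss that aligns the network's gradient-based importance map with the ClickMe map for each image. The loss form, the gradient-based saliency, and all names below are assumptions chosen for illustration, not the authors' published implementation.

    # Sketch (not the authors' method): co-train a DCN so that its
    # gradient-based importance map aligns with a human ClickMe map.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    model = models.resnet18(num_classes=1000)  # any DCN could stand in here
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

    def importance_map(model, images, labels):
        """Saliency of the correct-class logit w.r.t. the input pixels."""
        images = images.clone().requires_grad_(True)
        logits = model(images)
        score = logits.gather(1, labels.unsqueeze(1)).sum()
        # create_graph=True lets the alignment loss backpropagate through
        # this saliency computation during training
        grads, = torch.autograd.grad(score, images, create_graph=True)
        return grads.abs().amax(dim=1)  # collapse channels -> (N, H, W)

    def clickme_alignment_loss(model_map, clickme_map):
        """Penalize mismatch between normalized model and human maps."""
        m = F.normalize(model_map.flatten(1), dim=1)
        h = F.normalize(clickme_map.flatten(1), dim=1)
        return (1.0 - (m * h).sum(dim=1)).mean()  # 1 - cosine similarity

    def training_step(images, labels, clickme_maps, alpha=1.0):
        optimizer.zero_grad()
        logits = model(images)
        ce = F.cross_entropy(logits, labels)
        align = clickme_alignment_loss(importance_map(model, images, labels),
                                       clickme_maps)
        loss = ce + alpha * align  # alpha trades accuracy vs. human alignment
        loss.backward()
        optimizer.step()
        return loss.item()

In this sketch, alpha controls how strongly the network is pushed toward the human feature-importance maps relative to the standard classification objective; the abstract's claim that such cueing yields more human-like representations corresponds to training with a nonzero alpha.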

Meeting abstract presented at VSS 2018
