Vision Sciences Society Annual Meeting Abstract | September 2016 | Open Access
How well do Deep Neural Networks model Human Vision?
Author Affiliations
  • John Clevenger
    Department of Psychology, University of Illinois
  • Diane Beck
    Department of Psychology, University of Illinois
Journal of Vision, September 2016, Vol. 16, 176. https://doi.org/10.1167/16.12.176
Abstract

Recently there has been dramatic improvement in computer-vision object recognition. In the 2015 ImageNet challenge, the best-performing model (GoogLeNet) had a top-5 classification accuracy of 93%, a 20% improvement over 2010. This increase is due to the continued development of convolutional neural networks (CNNs). Despite these advances, it is unclear whether these biologically inspired models recognize objects similarly to humans. To begin investigating this question, we compared GoogLeNet and human performance on the same images. If humans and CNNs share recognition processes, we should find similarities in which images are difficult or easy for each. We used images taken from the 2015 ImageNet challenge, spanning a variety of categories. Importantly, half were images that GoogLeNet correctly classified in the 2015 ImageNet challenge and half were images that it incorrectly classified. We then tested human performance on these images using a cued detection task. To avoid ceiling effects, the images were briefly presented (< 100 ms, determined per subject) and masked. A category name was shown either before or after the image, and people were asked whether or not the image matched the category (which it did half the time). We found that people required 2.5 times more exposure time to recognize images when the category was cued before the image rather than after, consistent with a role for top-down knowledge/expectation in human recognition. However, at the image level, accuracy was highly correlated across pre- and post-cues (r = .82), indicating that some images are harder than others regardless of how they are cued. Importantly, people were substantially better at recognizing the images that GoogLeNet correctly (85%) rather than incorrectly (58%) categorized. This might be suggestive of shared processes. However, within the set of images that GoogLeNet got incorrect, human performance ranged from 9% to 100%, indicating a substantial departure between human and machine.
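
As an illustration of the image-level analysis described above, the sketch below (Python, using NumPy and SciPy) computes the two key quantities on simulated data: the Pearson correlation between per-image accuracy under pre- and post-cueing, and human accuracy split by whether GoogLeNet classified each image correctly. All variable names and the generated data are hypothetical; the simulation is seeded only to loosely echo the reported pattern (85% vs. 58%), not to reproduce the study's results.

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical per-image results: one value per image for mean human
    # accuracy under each cue condition, plus GoogLeNet's correctness.
    # (Illustrative names, not the study's materials.)
    rng = np.random.default_rng(0)
    n_images = 200
    cnn_correct = rng.random(n_images) < 0.5  # half correct, half incorrect

    # Simulated accuracies loosely echoing the reported 85% vs. 58% split.
    base = np.where(cnn_correct, 0.85, 0.58)
    pre_cue_acc = np.clip(base + rng.normal(0, 0.1, n_images), 0, 1)
    post_cue_acc = np.clip(base + rng.normal(0, 0.1, n_images), 0, 1)

    # Image-level correlation across cue conditions (abstract reports r = .82).
    r, p = pearsonr(pre_cue_acc, post_cue_acc)
    print(f"pre/post-cue correlation: r = {r:.2f}, p = {p:.3g}")

    # Mean human accuracy split by whether GoogLeNet got the image right.
    overall = (pre_cue_acc + post_cue_acc) / 2
    print(f"human accuracy, CNN-correct images:   {overall[cnn_correct].mean():.2f}")
    print(f"human accuracy, CNN-incorrect images: {overall[~cnn_correct].mean():.2f}")
    print(f"range on CNN-incorrect images: {overall[~cnn_correct].min():.2f}"
          f" to {overall[~cnn_correct].max():.2f}")

With real data, pre_cue_acc and post_cue_acc would be per-image proportions aggregated over subjects, and the range on CNN-incorrect images is what reveals the 9%-100% spread noted in the abstract.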

Meeting abstract presented at VSS 2016
