September 2018, Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Deep Convolutional Networks do not Make Classifications Based on Global Object Shape
Author Affiliations
  • Nicholas Baker
    University of California, Los Angeles
  • Hongjing Lu
    University of California, Los Angeles
  • Gennady Erlikhman
    University of Nevada, Reno
  • Philip Kellman
    University of California, Los Angeles
Journal of Vision September 2018, Vol.18, 904. doi:10.1167/18.10.904

Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip Kellman; Deep Convolutional Networks do not Make Classifications Based on Global Object Shape. Journal of Vision 2018;18(10):904. doi: 10.1167/18.10.904.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Deep convolutional networks (DCNNs) have achieved unprecedented performance in object classification, raising questions about whether DCNNs operate similarly to human vision. In biological vision, shape is arguably the most important cue for recognition. We tested whether DCNNs utilize object shape information. In Experiments 1 and 2, we tested DCNNs on shapes lacking typical context and surface texture, using glass figurines and silhouettes. The network showed no ability to classify glass figurines but correctly classified some silhouettes. Specific aspects of the results led us to hypothesize that DCNNs do not distinguish objects' bounding contours from other edge information, and that DCNNs access some local shape features, but not global shape. In Experiment 3, we scrambled correctly classified silhouette images to test classification accuracy when local features were preserved but global shape was disrupted. DCNNs gave the same classification labels despite disruptions of global form that reduced human accuracy to 28%. In Experiment 4, we retrained the decision layer of a DCNN to discriminate between circles and squares. Then, we tested the network on circles composed of local half-square elements and squares composed of half-circle elements. The network classified the former as squares and the latter as circles. In Experiment 5, we attempted to retrain the decision layer of a DCNN to discriminate between circles and ellipses. The network was unable to learn this discrimination, maintaining chance performance even after extended training. These results provide evidence that DCNNs may have access to some local shape information in the form of local edge relations, but they have no access to global object shapes.

Meeting abstract presented at VSS 2018
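The core manipulation in Experiment 3, preserving local image features while disrupting global shape, can be illustrated with a simple block-scrambling sketch. The abstract does not specify the study's actual scrambling procedure, so the function below is a hypothetical stand-in: it cuts a silhouette into tiles and randomly permutes them, leaving local edge structure inside each tile intact while destroying the overall form.

```python
import numpy as np

def block_scramble(img, block=16, seed=0):
    """Split a 2-D image into block x block tiles and permute them.

    Local structure within each tile is preserved; the global
    arrangement of the tiles (the overall shape) is disrupted.
    """
    h, w = img.shape
    assert h % block == 0 and w % block == 0, "image must tile evenly"
    # Collect tiles in row-major order.
    tiles = [img[r:r + block, c:c + block]
             for r in range(0, h, block)
             for c in range(0, w, block)]
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(tiles))
    cols = w // block
    out = np.empty_like(img)
    # Write each source tile into its shuffled destination slot.
    for dest, src in enumerate(order):
        r, c = divmod(dest, cols)
        out[r * block:(r + 1) * block, c * block:(c + 1) * block] = tiles[src]
    return out

# Example: a filled disk as a stand-in "silhouette" on a 64x64 canvas.
yy, xx = np.mgrid[:64, :64]
sil = ((yy - 32) ** 2 + (xx - 32) ** 2 < 24 ** 2).astype(np.uint8)
scr = block_scramble(sil, block=16)
```

Because the scramble only rearranges tiles, the scrambled image contains exactly the same pixel values (and hence the same local features) as the original; only their global configuration changes.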
