Journal of Vision, September 2021, Volume 21, Issue 9 (Open Access)
Vision Sciences Society Annual Meeting Abstract
Deep Neural Network Selectivity for Global Shape
Author Affiliations
  • Nicholas Baker
    York University
  • James Elder
    York University
Journal of Vision September 2021, Vol.21, 2285. doi:https://doi.org/10.1167/jov.21.9.2285
Nicholas Baker, James Elder; Deep Neural Network Selectivity for Global Shape. Journal of Vision 2021;21(9):2285. https://doi.org/10.1167/jov.21.9.2285.



© ARVO (1962-2015); The Authors (2016-present)

Abstract

Background: Deep convolutional neural networks (DCNNs) trained to classify objects have reached remarkable levels of performance and are predictive of brain responses in both human and non-human primates. However, relative to humans, DCNNs rely more on texture than shape (Baker, Lu, Erlikhman & Kellman, 2018; Geirhos et al., 2018), and they also appear to be biased toward local shape features (Baker et al., 2020; but see Keshvari et al., 2019). Here we employ a novel method to test for DCNN selectivity for the global shape of an object.

Method: We used a dataset of animal silhouettes from 10 animal categories. To assess selectivity for global shape, we created two variants of these stimuli that disrupt the global configuration of the object while largely preserving local features. In the first variant, we flipped the top portion of the object left-to-right but maintained its smooth connection with the bottom of the object, thus disrupting global shape while preserving object coherence. In the second variant, we also shifted the top portion laterally, so that both global shape and global coherence were disrupted. We then analyzed the classification performance of the ResNet50 DCNN (He et al., 2016) on these stimuli under two training curricula: ImageNet alone, and ImageNet + Stylized ImageNet, which has been reported to improve performance on silhouettes (Geirhos et al., 2018).

Results: Disrupting global shape while maintaining local shape and object coherence induced a ~60% drop in classification performance; disrupting coherence as well induced an additional ~80% drop. Interestingly, co-training on Stylized ImageNet did not mitigate these effects and reduced performance on silhouettes overall.

Implications: While prior work suggests that DCNNs are biased toward texture and local shape features, our findings indicate that at least some ImageNet-trained DCNNs are profoundly selective for global shape and object coherence.
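The abstract does not give implementation details for the stimulus manipulation, but the two variants described in the Method can be sketched roughly as an image operation on a binary silhouette array. The function name, the halfway split point, and the use of a wrap-around roll for the lateral shift are all assumptions for illustration; the actual stimuli additionally maintained a smooth connection between the top and bottom portions, which this naive sketch does not enforce.

```python
import numpy as np

def disrupt_global_shape(silhouette, shift=0):
    """Sketch of the two stimulus variants (assumed implementation).

    Mirrors the top portion of a binary silhouette left-to-right,
    disrupting global shape while keeping local contour features.
    shift=0 approximates the first variant (object coherence preserved);
    shift>0 additionally displaces the top portion laterally,
    approximating the second variant (coherence also disrupted).
    """
    out = silhouette.copy()
    mid = silhouette.shape[0] // 2        # assumed split at the vertical midpoint
    top = silhouette[:mid, ::-1]          # flip top portion left-to-right
    top = np.roll(top, shift, axis=1)     # lateral shift; wrap-around kept for simplicity
    out[:mid] = top
    return out

# Tiny asymmetric example: one pixel in the top half, one in the bottom.
sil = np.zeros((4, 4), dtype=int)
sil[0, 0] = 1
sil[3, 3] = 1
flipped = disrupt_global_shape(sil)          # first variant: top mirrored
shifted = disrupt_global_shape(sil, shift=1) # second variant: mirrored and shifted
```

In this sketch the bottom half is always left intact, so any classification cost must come from the altered global configuration rather than from changed local features, which mirrors the logic of the experiment.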
