Vision Sciences Society Annual Meeting Abstract  |  December 2022
Journal of Vision, Volume 22, Issue 14  |  Open Access
Human-like signatures of contour integration in deep neural networks
Author Affiliations & Notes
  • Fenil Doshi
    Harvard University
  • Talia Konkle
    Harvard University
  • George Alvarez
    Harvard University
  • Footnotes
    Acknowledgements  NSF PAC COMP-COG 1946308, NSF CAREER BCS-1942438
Journal of Vision December 2022, Vol.22, 4222. doi:https://doi.org/10.1167/jov.22.14.4222
Citation: Fenil Doshi, Talia Konkle, George Alvarez; Human-like signatures of contour integration in deep neural networks. Journal of Vision 2022;22(14):4222. https://doi.org/10.1167/jov.22.14.4222.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Deep neural networks have become the de facto models of human visual processing, but they currently lack human-like representations of global shape information. For humans, it has been proposed that global shape representation starts with early mechanisms of contour integration. For example, people are able to integrate over local features and detect extended contours embedded in noisy displays, with high sensitivity for straight lines and systematically decreasing sensitivity as contours become increasingly curvilinear (Field et al., 1993). Here, we tested whether deep neural networks have contour detection mechanisms with these human-like perceptual signatures. Considering a deep convolutional neural network trained to do object recognition (AlexNet), we find that the pre-trained layer-wise feature spaces have little to no capacity to detect extended contours. However, when the network was fine-tuned to detect the presence or absence of a hidden contour, the fine-tuned feature spaces were able to perform contour detection nearly perfectly. Further, using a gradient-based visualization method (guided backpropagation), we find that these fine-tuned classifiers are indeed identifying the full contour, rather than leveraging some unexpected strategy to succeed at the task. Critically, we also found that the scope of fine-tuning was key to achieving human-like contour detection: networks trained only to detect relatively straight contours naturally showed human-like graded accuracy in detecting increasingly curvilinear contours, while networks fine-tuned across the full range of curvature values, or at intermediate curvature levels only, showed distinctly non-human-like signatures, with accuracy peaks at the trained curvatures. These results provide a computational argument that human contour detection may actually rely on mechanisms designed solely to amplify relatively linear contours. Further, these results demonstrate that convolutional neural network architectures are capable of proper contour detection, but do not have the relevant inductive biases to develop these contour-integration mechanisms in service of object classification tasks.
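
To make the fine-tuning setup concrete, the sketch below shows roughly how a pre-trained AlexNet could be adapted to the binary contour-present / contour-absent decision described in the abstract. This is a minimal illustration under stated assumptions, not the authors' code: the PyTorch/torchvision workflow and the `contour_loader` of labeled noisy contour displays are hypothetical.

```python
# Minimal sketch (assumptions noted, not the authors' code): fine-tune a
# pre-trained AlexNet for a binary contour-present / contour-absent decision.
# `contour_loader` is a hypothetical DataLoader yielding (image, label) pairs,
# where label 1 marks a display with an embedded contour and 0 marks noise only.
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Start from ImageNet-pretrained AlexNet and replace the 1000-way
# object-recognition read-out with a 2-way contour detector.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in contour_loader:  # hypothetical contour-display loader
        images, labels = images.to(device), labels.to(device)
        loss = criterion(model(images), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Restricting the training set to relatively straight contours, versus the full or an intermediate range of curvatures, would then correspond to the different fine-tuning scopes compared in the abstract.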
