September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2021
Deep Neural Networks as a Computational Model for the Human Perception of Visual Symmetry
Author Affiliations
  • Yoram Bonneh
    Bar-Ilan University
  • Christopher Tyler
    Smith-Kettlewell Eye Research Institute
Journal of Vision September 2021, Vol.21, 1882. doi:https://doi.org/10.1167/jov.21.9.1882

      Yoram Bonneh, Christopher Tyler; Deep Neural Networks as a Computational Model for the Human Perception of Visual Symmetry. Journal of Vision 2021;21(9):1882. doi: https://doi.org/10.1167/jov.21.9.1882.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Background: Deep neural network (DNN) models developed for image classification have been proposed as biologically inspired models of visual processing (Yamins & DiCarlo, 2016). Here we apply a pre-trained visual DNN to model the mechanisms of symmetry perception in the human brain, where symmetry signals are found only in the upper levels of the visual hierarchy (Tyler et al., 2005).

Methods: To assess pure symmetry perception, independent of recognizable objects, we used the standard ImageNet-trained VGG network to compute the average L2 distance from zero symmetry for 500 random-dot symmetry images with one, two, or four axes of symmetry, relative to the L2 distances for zero-symmetry random-dot images.

Results: The DNN L2 distances were 1.5 and 2 dB for one- and two-axis symmetry, respectively, and up to 15 dB for four-axis symmetry. These effects were highly significant in the upper DNN layers (from layer fc6 onward) but absent in the lower layers, except for four-axis symmetry. We further found (1) a significant effect for partial symmetry down to 20%; (2) a symmetry effect that increased with image size, saturating for larger images; (3) a predominance of the vertical axis for one-axis symmetry; and (4) a surprising robustness of the DNN symmetry response to large gaps around the symmetry axes. In comparison, humans show no reduction in symmetry detection with gaps over very long ranges when scaled for cortical magnification (Tyler, 1996).

Conclusions: These findings demonstrate that a DNN effortlessly replicates the human ability to perceive abstract symmetry, even in random-dot patterns devoid of recognizable objects, suggesting that symmetry is an emergent property of networks trained to capture regularities in the natural environment. Although many visual objects incorporate structural symmetries, these symmetries are often distorted in natural images by arbitrary viewpoints, making it remarkable that the network detects them so effectively with no explicit symmetry training.
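The random-dot stimuli described in the Methods can be sketched as below — a minimal NumPy construction of binary dot patterns with zero, one, two, or four mirror axes. The image size, dot density, and the exact symmetrization scheme are illustrative assumptions, not the study's actual stimulus parameters.

```python
import numpy as np

def random_dot_image(size=64, density=0.5, axes=0, rng=None):
    """Binary random-dot pattern with 0, 1, 2, or 4 mirror axes.

    axes=0: fully random; axes=1: vertical mirror axis;
    axes=2: vertical + horizontal axes; axes=4: adds both diagonals.
    Assumes an even, square image size.
    """
    assert size % 2 == 0, "even size assumed for exact mirroring"
    rng = np.random.default_rng() if rng is None else rng
    h = size // 2
    img = (rng.random((size, size)) < density).astype(np.uint8)
    if axes == 4:
        # Symmetrize the top-left quadrant about its main diagonal;
        # the two mirror steps below then propagate diagonal symmetry
        # to the whole image.
        q = img[:h, :h]
        img[:h, :h] = np.triu(q) | np.triu(q, 1).T
    if axes >= 1:
        # Mirror the left half onto the right half (vertical axis).
        img[:, h:] = np.fliplr(img[:, :h])
    if axes >= 2:
        # Mirror the top half onto the bottom half (horizontal axis).
        img[h:, :] = np.flipud(img[:h, :])
    return img
```

Feeding such images, together with their zero-symmetry controls, through a pretrained VGG and comparing activation distances layer by layer would follow the procedure outlined in the Methods.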
