Abstract
Background: Deep neural network (DNN) models developed for image classification have been suggested as biologically inspired models of visual processing (Yamins & DiCarlo, 2016). Here we apply a pre-trained visual DNN to model the mechanisms of symmetry perception in the human brain, where symmetry signals are found only in the upper levels of the visual hierarchy (Tyler et al., 2005).

Methods: To assess pure symmetry processing independent of recognizable objects, we used the standard ImageNet-trained VGG network to compute the average L2 response distance for 500 random-dot images with one, two, or four axes of symmetry, relative to the L2 distances for zero-symmetry random-dot images.

Results: The DNN L2 distances were 1.5 and 2 dB for one- and two-axis symmetry, respectively, and up to 15 dB for four-axis symmetry. These effects were highly significant in the upper DNN layers (from layer fc6 onward) but absent in the lower layers, except for four-axis symmetry. We further found (1) a significant effect for partial symmetry down to 20%; (2) a symmetry effect that increased with image size, saturating for larger images; (3) a predominance of the vertical axis for one-axis symmetry; and (4) a surprising robustness of the DNN symmetry response to large gaps around the symmetry axes. For comparison, humans show no reduction in symmetry detection with gaps over very long ranges when scaled for cortical magnification (Tyler, 1996).

Conclusions: These findings demonstrate that a DNN effortlessly replicates the human ability to perceive abstract symmetry, even in random-dot patterns devoid of recognizable objects, suggesting that symmetry coding is an emergent property of DNNs trained to capture the regularities of the natural environment. Although many visual objects incorporate structural symmetries, these symmetries are often distorted in natural images by arbitrary viewpoints, making it remarkable that the network detects them so effectively with no explicit symmetry training.
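
The following is a minimal sketch of the kind of analysis the Methods describe, assuming torchvision's VGG16 as a stand-in for "the standard ImageNet-trained VGG network". The stimulus generator, the fc6 read-out, the distance-from-baseline measure, and the 20·log10 dB conversion are illustrative assumptions, not the authors' exact procedure.

```python
# A minimal sketch, assuming torchvision's VGG16 stands in for the
# "standard ImageNet-trained VGG network" of the Methods.  The stimulus
# generator, the fc6 read-out, the distance-from-baseline measure, and
# the 20*log10 dB conversion are illustrative assumptions.
import numpy as np
import torch
from torchvision.models import vgg16, VGG16_Weights

def make_dot_image(n_axes=0, size=224, density=0.1, rng=None):
    """Random-dot pattern with 0, 1, 2, or 4 mirror axes
    (vertical, then horizontal, then the two diagonals)."""
    rng = rng or np.random.default_rng()
    img = (rng.random((size, size)) < density).astype(np.float32)
    if n_axes >= 1:
        img = np.maximum(img, img[:, ::-1])        # vertical axis
    if n_axes >= 2:
        img = np.maximum(img, img[::-1, :])        # horizontal axis
    if n_axes >= 4:
        img = np.maximum(img, img.T)               # main diagonal
        img = np.maximum(img, img[::-1, ::-1].T)   # anti-diagonal
    return np.repeat(img[None], 3, axis=0)         # 3 channels for VGG

model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).eval()
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

@torch.no_grad()
def fc6_response(img):
    """Activation of VGG16's first fully connected layer ('fc6')."""
    x = (torch.from_numpy(img) - MEAN) / STD
    feats = model.avgpool(model.features(x[None])).flatten(1)
    return model.classifier[0](feats).squeeze(0)

def symmetry_db(n_axes, n_images=50, rng=None):
    """Mean L2 distance of fc6 responses from the zero-symmetry mean,
    in dB relative to the same distance for zero-symmetry images.
    (The abstract used 500 images; 50 keeps the sketch fast.)"""
    rng = rng or np.random.default_rng(0)
    rand = torch.stack([fc6_response(make_dot_image(0, rng=rng))
                        for _ in range(n_images)])
    sym = torch.stack([fc6_response(make_dot_image(n_axes, rng=rng))
                       for _ in range(n_images)])
    mu = rand.mean(0)
    d_rand = (rand - mu).norm(dim=1).mean()
    d_sym = (sym - mu).norm(dim=1).mean()
    return (20 * torch.log10(d_sym / d_rand)).item()

for k in (1, 2, 4):
    print(f"{k}-axis symmetry: {symmetry_db(k):+.2f} dB")
```

Running the same measure on earlier activations (e.g., intermediate outputs of model.features rather than fc6) would probe the lower- versus upper-layer contrast reported in the Results.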