**Abstract**

Symmetry is a biologically relevant, mathematically involving, and aesthetically compelling visual phenomenon. Mirror-symmetry detection is considered particularly rapid and efficient, based on experiments with random-noise patterns. Symmetry detection in natural settings, however, is often accomplished against structured backgrounds. To measure the salience of symmetry in diverse contexts, we assembled mirror-symmetric patterns from 101 natural textures. Temporal thresholds for detecting the symmetry axis ranged from 28 to 568 ms, indicating a wide range of salience (1/Threshold). We built a model for estimating symmetry-energy by connecting pairs of mirror-symmetric filters that simulated cortical receptive fields. The model easily identified the axis of symmetry for all patterns. However, symmetry-energy quantified at this axis correlated only weakly with salience. To examine context effects on symmetry detection, we used the same model to estimate the approximate symmetry arising from the underlying texture throughout the image. Magnitudes of approximate symmetry at flanking and orthogonal axes showed strong negative correlations with salience, revealing context interference with symmetry detection. A regression model that included the context-based measures explained the salience results and revealed why perceptual symmetry can differ from mathematical characterizations. Using natural patterns thus produces new insights into symmetry perception and its possible neural circuits.

*contour symmetries,* in which the outlines of shapes are symmetric (Wilson & Wilkinson, 2002), and

*pattern symmetries,* which have dense symmetric point correspondences (Barlow & Reeves, 1979). We measured temporal thresholds for identifying the symmetry orientation, and quantified salience as the inverse of the threshold.

*class* of eight symmetric and 24 nonsymmetric images through the process illustrated in Figure 1. Each image was divided into quadrants. First, each quadrant was reflected about a vertical axis, and then each of the original quadrants was added to each of the reflected quadrants (pixel-by-pixel). Four symmetric images resulted from each quadrant added to its own reflection. Twelve nonsymmetric images resulted from each of the four quadrants added to the reflection of each of the other three quadrants. The same process was then repeated with reflections about the horizontal axis, doubling the number of test images in an image class. Using two directions of reflection ensured that a dominant orientation within an image class could not provide a reliable cue to the direction of symmetry. Symmetric test images were presented with the axis oriented vertically (randomly 0° or 180°) or horizontally (90° or 270°). Nonsymmetric images were used as masks and were presented at 0°, 90°, 180°, or 270°, determined randomly. Since the nonsymmetric masks were generated through the same procedure as the symmetric stimuli, they have similar spatial-frequency and orientation statistics. Test and mask images were square grayscale images subtending 6.2 degrees of visual angle. Each image was histogram-equalized and normalized for maximum contrast.
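The quadrant procedure above can be sketched in code. This is a hypothetical illustration, assuming a square grayscale image stored as a NumPy array; the function and variable names are ours, not the authors'.

```python
import numpy as np

def make_image_class(img):
    """Build one image class: 8 symmetric and 24 nonsymmetric test images.

    Each quadrant is reflected (first about a vertical axis, then about a
    horizontal axis) and added pixel-by-pixel to every original quadrant.
    Same-quadrant sums are mirror symmetric; cross-quadrant sums are not.
    """
    h, w = img.shape
    quads = [img[:h // 2, :w // 2], img[:h // 2, w // 2:],
             img[h // 2:, :w // 2], img[h // 2:, w // 2:]]
    symmetric, nonsymmetric = [], []
    for flip in (np.fliplr, np.flipud):   # vertical axis, then horizontal axis
        for i, q in enumerate(quads):
            for j, r in enumerate(quads):
                combined = q + flip(r)    # pixel-by-pixel addition
                (symmetric if i == j else nonsymmetric).append(combined)
    return symmetric, nonsymmetric
```

With two reflection directions, each class contains 2 × 4 = 8 symmetric and 2 × 12 = 24 nonsymmetric images, matching the counts in the text.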

*r* = 0.68, *p* < 0.0001 (*SE* = 0.07). Using the standard definition of Salience as perceptual prominence, or the likelihood of being noticed, we assume that the more prominent a feature is, the faster it can be seen, so 1/Temporal-Threshold quantifies Salience, just as 1/Contrast-Threshold quantifies Sensitivity. To determine the sources of variation in symmetry salience, we calculated measures of mirror-symmetric energy and of distracting factors across the complete set of images.

To simulate oriented, odd-symmetric cortical filters in a computationally efficient manner, we used steerable pyramids (Simoncelli & Freeman, 1995) at six orientations spaced uniformly around the circle, and four spatial scales, beginning at the finest scale possible at the pixel size and becoming progressively coarser by an octave in spatial frequency. All 24 pyramid filters were correlated with the image at every pixel. For every pixel, only the orientation and output of the filter with the absolute maximum response at each scale were retained, with the sign intact. This nonlinear operation drastically reduces the amount of computation required for subsequent analyses. For every candidate axis, pixels equidistant from the axis were compared for the orientations of the filters with the maximum response. If the two orientations were related by a mirror reflection, then an AND junction was activated (Figure 4). At each AND junction, if the outputs of the two filters were equal within a small tolerance, their outputs were summed into a symmetry-energy index. (To prevent inaccuracies in image rendering, pixel registration, rounding, and similar factors from inactivating the AND operation, the tolerance was set at difference/sum < 0.05. Given the variance of neural responses, this tolerance seems realistic; it made little difference if the tolerance was set at 0.025.) We calculated symmetry-energy for vertical and horizontal axes at every pixel within the central half of the image. In the absence of canonical data about how symmetry-energy weights should fall off with distance from the axis, we used uniform weighting from the axis to one-fourth of the size of the image on both sides.
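A simplified, single-scale sketch of this computation is given below, restricted to a vertical axis at the image midline. As an assumption made for brevity, odd-symmetric Gabor filters stand in for the steerable pyramid; the winner-take-all step, the mirror-orientation AND test, and the difference/sum < 0.05 tolerance follow the description above.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_odd(theta, size=9, freq=0.25, sigma=2.0):
    """Odd-symmetric (sine-phase) oriented filter at orientation theta."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.sin(2 * np.pi * freq * xr)

def symmetry_energy(img, n_orient=6, tol=0.05):
    """Symmetry-energy for a vertical axis at the midline of img."""
    thetas = np.pi * np.arange(n_orient) / n_orient
    responses = np.stack([convolve(img, gabor_odd(t)) for t in thetas])
    best = np.abs(responses).argmax(axis=0)             # winner-take-all orientation
    out = np.take_along_axis(responses, best[None], axis=0)[0]  # signed winning output
    w = img.shape[1]
    energy = 0.0
    for d in range(1, w // 2 + 1):                      # pixels equidistant from axis
        lc, rc = w // 2 - d, w // 2 + d - 1
        mirror = (n_orient - best[:, lc]) % n_orient    # reflected orientation index
        la, ra = np.abs(out[:, lc]), np.abs(out[:, rc])
        # AND junction: mirror-related orientations with near-equal outputs
        match = (best[:, rc] == mirror) & (np.abs(la - ra) / (la + ra + 1e-12) < tol)
        energy += float((la + ra)[match].sum())
    return energy
```

On an exactly mirror-symmetric image every pixel pair activates its AND junction, so the summed energy is far larger than for an unstructured image.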

For an *n* × *n* image, there are 2*n* possibilities for the axis. The maximum response of our model provided the location of the correct axis for every one of the 101 × 8 = 808 patterns. The model's search for the symmetry axis also yields measures of symmetry-energy at flanking and orthogonal axes. Therefore, to examine the robustness of the model's axis selectivity, we calculated two ratios for the 101 generating patterns: symmetry-energy around the primary axis where it was maximum (“Primary Symmetry-Energy”) divided by energy around the orthogonal axis where it was maximum (“Orthogonal Symmetry-Energy”); and energy around the maximum axis divided by energy around the parallel flanking axis with the next-largest magnitude of symmetry-energy (“Flanking Symmetry-Energy”). Figure 5 shows that both ratios are substantially above unity for all patterns (mean ratios 14.2 and 8.4; ranges 4.1–24.7 and 2.7–15.2), demonstrating that the model easily identified the correct axis for patterns differing widely in spatial frequency, randomness, and structure. Note that the relationships between salience and the distracting ratios are essentially linear.
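Given symmetry-energy evaluated at every candidate axis, the two ratios can be computed as below. This is an illustrative sketch assuming the energies for the 2*n* candidate axes are supplied as two 1-D profiles (one per axis orientation); taking the "flanking" value as the next-largest parallel energy follows the definition above.

```python
import numpy as np

def axis_ratios(vert_energy, horz_energy):
    """Return ((orientation, position), primary/orthogonal, primary/flanking)."""
    profiles = {"vertical": np.asarray(vert_energy, dtype=float),
                "horizontal": np.asarray(horz_energy, dtype=float)}
    primary = max(profiles, key=lambda k: profiles[k].max())  # winning orientation
    other = "horizontal" if primary == "vertical" else "vertical"
    p = profiles[primary]
    i = int(p.argmax())                         # position of the primary axis
    primary_e = p[i]                            # Primary Symmetry-Energy
    orthogonal_e = profiles[other].max()        # Orthogonal Symmetry-Energy
    flanking_e = np.delete(p, i).max()          # Flanking Symmetry-Energy
    return (primary, i), primary_e / orthogonal_e, primary_e / flanking_e
```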

*r* = −0.38, *df* = 100, *p* < 0.01). Thus the simplest explanation of symmetry salience, as proportional to symmetry-energy around the principal axis, can be rejected. Clearly, other measures are necessary to explain observer performance.

*df* = 100, *p* < 0.0001) with salience, and the ratio of axis symmetry-energy to flanking symmetry-energy had a positive correlation of 0.71 (*df* = 100, *p* < 0.0001) (Figure 5). These correlation magnitudes clearly show that the distracting effects of flanking and orthogonal approximate symmetries overcome the effects of symmetry-energy at the main axis. The correlations of Salience are higher with the Flanking-Ratio and Orthogonal-Ratio than with the un-normalized maximum flanking and orthogonal symmetry-energies (0.56 and 0.61), possibly because the ratios reflect the weighting of the relative energies in the decision. If the effects of the distracting symmetry-energies are partialled out, the correlation between axis symmetry and salience is essentially zero: the partial correlation of Salience with Primary Symmetry-Energy, controlling for Flanking and Orthogonal Symmetry-Energy, is −0.03, implying that salience is essentially a function of the two distractors.
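The partial correlation reported above can be computed from regression residuals. A minimal sketch, with illustrative variable names rather than the authors' data:

```python
import numpy as np

def partial_corr(y, x, controls):
    """Correlation of y with x after removing the controls' linear influence."""
    y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
    Z = np.column_stack([np.ones(len(y))] + [np.asarray(c, dtype=float) for c in controls])
    res_y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residual of y on controls
    res_x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residual of x on controls
    return np.corrcoef(res_y, res_x)[0, 1]
```

Applied to Salience, Primary Symmetry-Energy, and the two distractor energies, this residual-correlation computation is what yields the −0.03 value reported above.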

*R*^{2} = 0.63 (*df* = 97, *p* < 0.0001), which should be compared to the mean between-observer correlation (*R*^{2} = 0.46). The coefficient of Primary Symmetry-Energy in the regression equation was vanishingly small. The combined effect of the two distractors is greater than either individual effect, even though the correlation between the distractors is 0.94. This analysis shows that when the image is also approximately symmetric around other axes, the salience of the primary axis is reduced, and observers take longer to discern the correct axis and report its orientation. Salience differences for symmetry across a wide variety of natural patterns can thus be explained by these context effects.
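The regression fit can be reproduced in outline with ordinary least squares. A sketch with synthetic stand-ins for the measured predictors:

```python
import numpy as np

def r_squared(y, predictors):
    """R^2 of an ordinary-least-squares fit of y on the predictors (with intercept)."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y))] + [np.asarray(p, dtype=float) for p in predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()  # valid because the fit includes an intercept
```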

*r* = 0.38, *p* < 0.0001) between duration thresholds and the frequency of maximum energy, but this effect is much weaker than the flanking and orthogonal distracting effects, and adding spatial frequency to the regression did not substantially increase *R*^{2}.

**References**

*Vision Research*, 19, 783–793. [CrossRef] [PubMed]

*Visual Cognition*, 1, 377–400. [CrossRef]

*Journal of Theoretical Biology*, 38, 205–287. [CrossRef] [PubMed]

*Textures: A photographic album for artists and designers*. Mineola, NY: Dover Publications.

*The symmetries of things*. Wellesley, MA: A. K. Peters.

*Vision Research*, 37, 2915–2930. [CrossRef] [PubMed]

*Spatial Vision*, 8, 393–413. [CrossRef] [PubMed]

*Psychological Research*, 44, 199–212. [CrossRef] [PubMed]

*Neuron*, 73, 415–434. [CrossRef] [PubMed]

*Nature*, 382, 458–461. [CrossRef] [PubMed]

*Vision Research*, 38, 3795–3803. [CrossRef] [PubMed]

*Proceedings of the Royal Society of London, Series B*, 231(1263), 251–288. [CrossRef]

*The Journal of Physiology*, 195, 215–243. [CrossRef] [PubMed]

*Foundations of cyclopean perception*. Chicago: University of Chicago Press.

*Building high-level features using large scale unsupervised learning*. In J. Langford & J. Pineau (Eds.), Proceedings of the 29th International Conference on Machine Learning (pp. 81–88). Edinburgh, Scotland. New York: Omnipress.

*Symmetry, causality, mind*. Cambridge, MA: MIT Press.

*Journal of Chemical Information and Computer Sciences*, 36, 367–376.

*The analysis of sensations*. Mineola, NY: Dover Publications. (Original work published 1886).

*Proceedings of the Royal Society of London, Series B*, 200, 269–294. [CrossRef]

*The American Naturalist*, 151, 174–192. [CrossRef] [PubMed]

*Proceedings of the Royal Society of London, Series B: Biological Sciences*, 263(1366), 105–110. [CrossRef]

*Evolution and Human Behavior*, 20(5), 295–307. [CrossRef]

*3D shape: Its unique place in visual perception*. Cambridge, MA: MIT Press.

*Journal of Vision*, 10(1):9, 1–16, http://www.journalofvision.org/content/10/1/9, doi:10.1167/10.1.9. [PubMed] [Article] [CrossRef]

*Vision Research*, 40, 2621–2644. [CrossRef] [PubMed]

*Psychonomic Bulletin and Review*, 5(4), 659–669. [CrossRef]

*NeuroReport*, 11(10), 2133–2138. [CrossRef] [PubMed]

*IEEE Transactions on Pattern Analysis and Machine Intelligence*, 29(3), 411–426. [CrossRef] [PubMed]

*Neuroscience*, 21, 151–165. [CrossRef] [PubMed]

*IEEE Second International Conference on Image Processing*. Washington, DC.

*Proceedings of the Royal Society of London, Series B*, 258, 267–271. [CrossRef]

*The American Naturalist*, 158(3), 300–307. [CrossRef] [PubMed]

*On growth and form*. Mineola, NY: Dover Publications. (Dover reprint of 1942 2nd ed.; 1st ed., 1917).

*Symmetry*, 2, 1510–1543. [CrossRef]

*Science*, 311, 670–674. [CrossRef] [PubMed]

*Human symmetry perception and its computational analysis*. Utrecht, The Netherlands: VSP.

*Symmetries of culture*. Seattle, WA: University of Washington Press.

*Readings in perception* (pp. 115–135). Princeton, NJ: Van Nostrand.

*British Journal of Mathematical and Statistical Psychology*, 18(1), 1–10. [CrossRef] [PubMed]

*Vision Research*, 42, 589–597. [CrossRef] [PubMed]