Research Article  |   June 2010
Visual detection of symmetry of 3D shapes
Journal of Vision June 2010, Vol.10, 4. doi:10.1167/10.6.4
Tadamasa Sawada; Visual detection of symmetry of 3D shapes. Journal of Vision 2010;10(6):4. doi: 10.1167/10.6.4.

Abstract

This study tested perception of symmetry of 3D shapes from single 2D images. In Experiment 1, performance in discrimination between symmetric and asymmetric 3D shapes from single 2D line drawings was tested. In Experiment 2, performance in discrimination between different degrees of asymmetry of 3D shapes from single 2D line drawings was tested. The results showed that human performance in the discrimination was reliable. Based on these results, a computational model that performs the discrimination from single 2D images is presented. The model first recovers the 3D shape using a priori constraints: 3D symmetry, maximal 3D compactness, minimum surface area, and maximal planarity of contours. Then the model evaluates the degree of symmetry of the 3D shape. The model provided a good fit to the subjects' data.

Introduction
The following three types of symmetry are directly related to shape perception: mirror, rotational, and translational (Figure 1) (Mach, 1906/1959). Each type of symmetry is formally defined as invariance under some transformation: reflection, rotation, or translation. For example, in the case of mirror symmetry, one symmetric half of the object can be thought of as a mirror reflection of the other half. In two dimensions (2D), an axis of symmetry plays the role of a mirror. In three dimensions (3D), a plane of symmetry plays the role of a mirror. 
Figure 1
 
Three types of symmetry related to shape perception (after Mach, 1906/1959). (a) Mirror symmetry is invariant with respect to reflection. A reflection transformation of a shape about its axis of symmetry results in a shape congruent with the original shape. (b) Rotational symmetry is invariant with respect to rotation. (c) Translational symmetry is invariant with respect to translation.
Systematic study of perception of symmetry started with Mach (1906/1959) and was followed by the Gestalt psychologists (Koffka, 1935; Wertheimer, 1923). It has been shown that mirror symmetry is detected by the human visual system more reliably than other types of symmetry (for reviews, see Wagemans, 1997; Zabrodsky, 1990). This preference for mirror symmetry seems to be reasonable because many natural and man-made objects in the real world are mirror symmetric or at least approximately mirror symmetric. It follows that detecting and recognizing 3D mirror-symmetric objects is important. In this paper, we will use “symmetry” to mean “mirror symmetry.” 
In the past, visual perception of symmetry has been studied primarily using symmetric retinal images. The human visual system can reliably detect symmetry on the retina, especially when the axis of symmetry is vertical and at the center of the retina (e.g., Barlow & Reeves, 1979; Jenkins, 1983; Julesz, 1971). However, the retinal image of a symmetric shape “out there” is symmetric only for a small set of viewing directions. 
Consider first the case of a retinal image of a 2D (planar) symmetric shape. The line segments connecting pairs of symmetric points of a 2D symmetric shape are called symmetry line segments. Symmetry line segments are parallel to one another, and their midpoints are collinear. Furthermore, the line connecting the midpoints is orthogonal to the symmetry line segments and coincides with the symmetry axis of the 2D shape. If a planar symmetric shape is slanted relative to the observer, its retinal image is asymmetric and is called 2D skewed symmetry (Kanade, 1981). In an orthographic image, the projections of symmetry line segments are parallel to one another and their midpoints are collinear. However, the line connecting the midpoints is not orthogonal to the projections of the symmetry line segments. In a perspective image, the projections of symmetry line segments are not parallel and their midpoints are not collinear (Sawada & Pizlo, 2008a; Wagemans, van Gool, & d'Ydewalle, 1992). Specifically, the lines representing the projections of symmetry line segments intersect at a single point called the vanishing point. It has been shown that performance in detecting 2D symmetric shapes from a single retinal image that is itself skewed symmetric is reliable, but worse than in detecting symmetry on the retina (Sawada & Pizlo, 2008a; Wagemans, van Gool, & d'Ydewalle, 1991; Wagemans et al., 1992). Skewed symmetry in prior experiments was produced by using perspective, orthographic, or projective images. It is perspective projection that correctly simulates the rules of geometrical optics (Pizlo, Rosenfeld, & Weiss, 1997a, 1997b). A number of studies have demonstrated that the visual system uses the rules of perspective projection in shape perception (Kaiser, 1967; Pizlo & Salach-Golyska, 1995; Wagemans, Lamote, & van Gool, 1997; Yang & Kubovy, 1999). An orthographic projection is an approximation to a perspective projection. 
This approximation is good when the range in depth of the simulated object is small compared to the viewing distance. A projective transformation on the retina is produced when a perspective image on a computer screen is viewed from a wrong vantage point (Pirenne, 1970; Pizlo, 2008). This fact is related to the well-known theorem of projective geometry that a composition of two perspective projections is itself a projective, rather than perspective, projection. Interestingly, the detection of symmetry based on a skewed symmetric retinal image is slightly more reliable with orthographic than with perspective images (Sawada & Pizlo, 2008a). The detection of symmetry seems least reliable with projective images (see examples in Figure 2). This result suggests that the visual system uses the rules of orthographic rather than perspective projection in detecting symmetry from skewed symmetric images. This makes sense considering that orthographic projection is computationally simpler than perspective projection and, under normal viewing conditions, is likely to provide a good enough approximation. Perception of symmetric 3D shapes and the detection of 3D symmetry from a single 2D retinal image have received very little, if any, attention in the past. Next, we explain the properties of 3D symmetry. 
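These orthographic-projection properties of 2D skewed symmetry can be verified numerically. The sketch below (Python; the shape coordinates are illustrative choices of mine, not the stimuli) slants a mirror-symmetric planar shape by 70 degrees at a tilt of 85 degrees and checks that the projected symmetry line segments stay parallel and their midpoints stay collinear, while the midpoint line is no longer orthogonal to the segments:

```python
import numpy as np

# Mirror-symmetric pairs of a planar shape; the symmetry axis is the y-axis,
# so each symmetry line segment is horizontal and its midpoint lies on x = 0.
pairs = [(np.array([-1.0, 0.5]), np.array([1.0, 0.5])),
         (np.array([-2.0, 1.5]), np.array([2.0, 1.5])),
         (np.array([-0.7, 2.5]), np.array([0.7, 2.5]))]

def skew(p, slant_deg=70.0, tilt_deg=85.0):
    """Orthographic image of a planar point after slanting the plane:
    rotate so the rotation axis is the y-axis, foreshorten x by cos(slant),
    rotate back. This is a linear map of the picture plane."""
    s, t = np.radians(slant_deg), np.radians(tilt_deg)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    q = R.T @ p
    q[0] *= np.cos(s)
    return R @ q

cross2 = lambda u, v: float(u[0] * v[1] - u[1] * v[0])  # 2D cross product

imgs = [(skew(a), skew(b)) for a, b in pairs]
segs = [b - a for a, b in imgs]
mids = [(a + b) / 2 for a, b in imgs]

# Projections of the symmetry line segments remain mutually parallel ...
assert all(abs(cross2(segs[0], s)) < 1e-9 for s in segs[1:])
# ... and their midpoints remain collinear (2D skewed symmetry) ...
assert abs(cross2(mids[1] - mids[0], mids[2] - mids[0])) < 1e-9
# ... but the midpoint line is no longer orthogonal to the segments.
u = segs[0] / np.linalg.norm(segs[0])
v = (mids[1] - mids[0]) / np.linalg.norm(mids[1] - mids[0])
assert abs(u @ v) > 1e-3
```

Because the slant-plus-orthographic-projection map is linear, parallelism and collinearity survive automatically; only angles are distorted, which is exactly the signature of skewed symmetry.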
Figure 2
 
A symmetric 2D shape (a) and its transformations when it is slanted relative to the observer: (b, d) perspective transformations and (c) orthographic transformation. Slant is 70 degrees and tilt is 85 degrees in these transformations. When the reader's line of sight is orthogonal to this page at the cross F and the viewing distance is equal to the distance between F and D, the retinal images of both (b) and (d) are perspective images of the shape in (a) with slant 70 and tilt 85 degrees. When the reader's line of sight is orthogonal to this page at any other point than F, the retinal image of (d) is a projective image of the shape in (a).
Consider a symmetric 3D shape, like that in Figure 3. The 2D retinal image of a symmetric 3D shape is itself symmetric only if the line of sight lies on the symmetry plane (Figure 3a). For other viewing orientations, the 2D retinal image is asymmetric and will be called 3D skewed symmetry (Figure 3b). The line segments that connect pairs of 3D symmetric points are called 3D symmetry line segments. They are all parallel to the normal of the common symmetry plane, and they are bisected by this plane. It follows that the 3D symmetry line segments are parallel to one another and their midpoints are coplanar. Unlike the 2D case, the 3D symmetry line segments are not coplanar and their midpoints are not collinear. Next, consider a 2D orthographic image of a symmetric 3D shape. The projections of the 3D symmetry line segments (called henceforth 2D symmetry line segments) are parallel to one another. Unlike the 2D case, the midpoints of the 2D symmetry line segments are not collinear. In a perspective image, the 2D symmetry line segments are not parallel: their extrapolations intersect at the vanishing point. 
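A minimal numeric check of the orthographic claim, assuming a symmetry plane x = 0 and a randomly chosen viewing orientation (all names and values below are illustrative, not the experimental stimuli):

```python
import numpy as np

rng = np.random.default_rng(0)

# Vertices mirror-symmetric about the plane x = 0: each 3D symmetry line
# segment joins p and its reflection, so it is parallel to the plane's
# normal (the x-axis) and is bisected by the plane.
pts = rng.uniform(0.2, 1.0, size=(5, 3))          # keep x away from 0
pairs = [(p * np.array([-1.0, 1.0, 1.0]), p) for p in pts]

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))      # random orthonormal basis
project = lambda p: (Q @ p)[:2]                   # orthographic image: drop depth

cross2 = lambda u, v: float(u[0] * v[1] - u[1] * v[0])
segs = [project(b) - project(a) for a, b in pairs]

# In every orthographic view, the 2D symmetry line segments stay parallel;
# unlike the 2D case, their midpoints are generally not collinear.
assert all(abs(cross2(segs[0], s)) < 1e-9 for s in segs[1:])
```

Each 3D symmetry line segment points along the same normal direction, and an orthographic projection maps one common 3D direction to one common 2D direction, which is why parallelism is the invariant that survives.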
Figure 3
 
A symmetric polyhedron, its symmetry plane and symmetry line segments. The 3D symmetry line segments are parallel to one another, and their projections (2D symmetry line segments) are also parallel to one another under an orthographic condition. The 3D symmetry line segments are parallel to the normal of the symmetry plane and they are bisected by the symmetry plane.
Despite the fact that the 2D retinal image of a symmetric 3D shape is asymmetric, symmetry of the 3D shape is a useful constraint in 3D shape recovery (Pizlo, 2008; Pizlo, Sawada, Li, Kropatsch, & Steinman, 2010). It has been shown that symmetry of a 3D shape facilitates human performance in 3D shape recognition (Chan, Stevenson, Li, & Pizlo, 2006; Liu & Kersten, 2003; Pizlo & Stevenson, 1999; van Lier & Wagemans, 1999; Vetter, Poggio, & Bülthoff, 1994). These prior studies showed that the human observer can reliably recognize the same symmetric 3D shape from different viewpoints. Note that the 2D retinal image of a 3D shape changes when the viewpoint changes. Reliable 3D shape recognition therefore requires veridical perception of the 3D shape from each 2D image. Li, Pizlo, and Steinman (2009) proposed a model that can recover a veridical 3D shape from a single 2D image of a symmetric 3D shape. Their model uses symmetry of the 3D shape as an implicit constraint; i.e., the recovered 3D shape is always symmetric. They showed that the 3D shape recovered by the model is very similar to the 3D shape perceived by the observer from the same 2D image. However, a question arises as to how the human visual system knows whether the 2D image was produced by a 3D symmetric shape. Either the symmetry of the 3D shape is determined before the 3D shape is recovered, or the 3D shape recovery maximizes the 3D symmetry (rather than assuming that the shape is symmetric), and the symmetry of the 3D shape is a byproduct of the shape recovery process. Clearly, detecting symmetry of a 3D shape is important for a human observer, and our everyday life experience suggests that we can accomplish this task very well. However, there has been no systematic study testing human performance in detecting symmetry of a 3D shape from a single 2D image. 
It is trivially true that any 2D image of a symmetric 3D shape is consistent with infinitely many asymmetric 3D shapes. In order to see this, imagine displacing one point of a symmetric 3D shape along the line connecting this point with its image. The new 3D shape will be asymmetric, but its 2D image will not change. It is less trivial, but also true, that there are asymmetric 3D shapes such that each of their 2D images is consistent with infinitely many symmetric 3D shapes. In this study, we show that the human observer can reliably discriminate between symmetric and asymmetric 3D shapes from single 2D images. The human performance cannot be predicted from the features of the 2D retinal image, even though the 2D retinal image is the only visual data provided to the observer. Clearly, the human visual system “goes beyond the information given” (Bartlett, 1932): It uses a priori constraints that determine the likelihood of the 3D interpretations of a given 2D image. We present a new computational model that explains the nature of these constraints. Performance of the model in a 3D symmetry discrimination task is very similar to the performance of human subjects. 
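The first of these ambiguities can be demonstrated in a few lines: displacing a vertex purely along the projection direction breaks the 3D mirror symmetry without changing the orthographic image at all. A toy sketch (the coordinates are illustrative, not the experimental stimuli):

```python
import numpy as np

# A toy "shape": vertices mirror-symmetric about the plane x = 0.
verts = np.array([[-1.0, 0.0, 0.3], [1.0, 0.0, 0.3],
                  [-0.5, 1.0, -0.2], [0.5, 1.0, -0.2]])

project = lambda V: V[:, :2]      # orthographic projection along the z-axis

asym = verts.copy()
asym[0, 2] += 0.7                 # displace one vertex along the line of sight

# The displaced shape is no longer mirror-symmetric ...
assert not np.allclose(asym[0] * [-1, 1, 1], asym[1])
# ... yet both shapes produce exactly the same 2D orthographic image.
assert np.allclose(project(verts), project(asym))
```

Since the depth coordinate never reaches the image, infinitely many such displacements exist, which is why the discrimination cannot be decided by the image alone and must rely on a priori constraints.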
Experiment 1: Discrimination between symmetric and asymmetric 3D shapes from single 2D images and from kinetic depth effect
Methods
Subjects
Three subjects (including the author) were tested. All subjects had prior experience as subjects in psychophysical experiments. TS received extensive practice before being tested. TS and ZP knew the purpose of the experiment. RV was naïve about the purpose. All subjects had normal or corrected-to-normal vision. 
Apparatus
The stimuli were shown on an LCD monitor with 1280 × 1024 resolution and 60 Hz refresh rate. The subject viewed the monitor with the right eye from a distance of 40 cm in a dark room. The subject wore an eye patch on his left eye. The subject's head was supported by a chin-forehead rest. The subject's line of sight was orthogonal to the monitor. 
Stimuli
The 2D orthographic images (line drawings) of abstract symmetric and asymmetric polyhedra were used (Figure 4). Note that the 2D image was always asymmetric, regardless of whether the image was produced by a symmetric or an asymmetric 3D shape. The subject was asked to judge the symmetry of the 3D shape, not the symmetry of the 2D image. The abstract shapes were used to avoid confounding effects of familiarity (Chan et al., 2006; Pizlo & Stevenson, 1999). Each symmetric polyhedron had 16 vertices. The positions of the vertices were generated randomly in 3D under the following restrictions: the vertices formed eight symmetric pairs, the faces of the polyhedron were planar, and the polyhedron consisted of two small boxes and one big box. The symmetric polyhedron had 28 straight edges and 14 planar faces, all quadrangles. Eight of the 28 edges connected the symmetric pairs of vertices. These eight edges were 3D symmetry line segments.1 
Figure 4
 
Orthographic images of a symmetric polyhedron (a) and of three types of asymmetric polyhedra generated by distorting a symmetric polyhedron. (a) A symmetric polyhedron. The faces of the symmetric polyhedron were planar. The 3D symmetry line segments were always parallel. (b) An asymmetric polyhedron in Condition-R. The faces were not planar and the "3D symmetry line segments" were not parallel. (c) An asymmetric polyhedron in Condition-N. The faces were planar but the "3D symmetry line segments" were not parallel. (d) An asymmetric polyhedron in Condition-P. The faces were planar and the "3D symmetry line segments" were parallel. The amount of distortion used to generate these asymmetric polyhedra (b–d) was the largest (L4).
An asymmetric polyhedron was generated by distorting a symmetric one. There were three types of asymmetric polyhedra, corresponding to three experimental conditions. In Condition-R (random distortion), the faces of an asymmetric polyhedron were not planar and the line segments that were 3D symmetry line segments in the original symmetric polyhedron were not parallel (Figure 4b). In Condition-N (non-parallel 3D symmetry line segments), the faces of an asymmetric polyhedron were planar but its "3D symmetry line segments" were not parallel (Figure 4c). In Condition-P (parallel 3D symmetry line segments), the faces of an asymmetric polyhedron were planar and its "3D symmetry line segments" were parallel (Figure 4d). This condition is especially interesting because every orthographic image of an asymmetric object of this type is consistent with a symmetric 3D interpretation. This means that every image in this condition, regardless of whether it was produced by a symmetric or an asymmetric 3D shape, was consistent with both symmetric and asymmetric 3D interpretations. Will the subject be able to perform above chance level in this condition? 
The distortion used to produce an asymmetric polyhedron in each condition was made by displacing eight of its 16 vertices. In Condition-N, the vertices were displaced along the directions of the edges connecting them. In Condition-P, the vertices were displaced along the "3D symmetry line segments." In Conditions-N and -P, two of the eight vertices were displaced first, and then the other six vertices were displaced so that the planarity of the faces was preserved. The amount of distortion was controlled by restricting the range of the random displacement of the first two vertices in Conditions-N and -P, and of all eight vertices in Condition-R. 
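A simplified sketch of the Condition-P style distortion follows. It displaces one vertex of every pair and omits the planarity bookkeeping described above, so it is not the exact stimulus-generation procedure, only an illustration of why this condition preserves the parallelism invariant while breaking symmetry:

```python
import numpy as np

rng = np.random.default_rng(1)

# Symmetric vertex pairs about the plane x = 0; the 3D symmetry line
# segments therefore run along the x-axis.
pts = rng.uniform(0.2, 1.0, size=(8, 3))
pairs = [(p * np.array([-1.0, 1.0, 1.0]), p) for p in pts]

# Condition-P style distortion: displace one vertex of each pair along its
# own symmetry line segment (the x direction). The real stimuli displaced
# only two vertices directly and moved six more to keep the faces planar.
shift = lambda: np.array([rng.uniform(0.07, 0.14), 0.0, 0.0])
distorted = [(a, b + shift()) for a, b in pairs]

segs = np.array([b - a for a, b in distorted])
mids = np.array([(a + b) / 2 for a, b in distorted])

# The "3D symmetry line segments" remain mutually parallel ...
assert np.allclose(np.cross(segs[0], segs), 0.0)
# ... but their midpoints no longer lie on the plane x = 0, so the
# distorted polyhedron is not mirror-symmetric about that plane.
assert (np.abs(mids[:, 0]) > 0.0).all()
```

Because displacement along a segment cannot change that segment's direction, any orthographic image of such a shape still carries the parallelism signature of 3D symmetry, which is what makes this condition "completely ambiguous" at the image level.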
We also tested the effect of the kinetic depth effect on the discrimination. Note that a single 2D image is ambiguous, but three or more 2D images are sufficient for reconstructing a unique 3D shape (Ullman, 1979). In each trial of this condition (called kinetic), the 3D polyhedron rotated around the vertical axis. The magnitude of rotation was 10 degrees. During the trial (0.5 s), the subject was presented with 30 images. 
An orthographic projection was used to produce the 2D images of the polyhedra. Hidden contours were removed. Each polyhedron was randomly oriented in 3D space with the following two restrictions: at least one vertex of each pair of symmetric vertices had to be visible, and at least ten vertices, forming five symmetric pairs, had to be visible. These constraints allow the computational model to recover the entire polyhedron, both the visible and the hidden parts (see 1). The polyhedra were drawn in white on a dark background with high contrast. The width of the contour was 2 pixels (0.6 mm). The image of the polyhedron subtended 16.5 degrees (11.5 cm) on average. 
Procedure
The method of signal detection was used. Each session consisted of 200 trials: 100 trials with an image of a symmetric polyhedron and 100 trials with an image of an asymmetric polyhedron, presented in a random order. There were 24 experimental conditions: three types of an asymmetric polyhedron (Condition-R vs. -N vs. -P) × four levels of distortion for generating the asymmetric polyhedra (L1–L4) × two types of display (static vs. kinetic). The levels of distortion corresponded to the extent by which the vertices have been moved: 0.07–0.14 cm (L1), 0.14–0.28 cm (L2), 0.28–0.56 cm (L3), and 0.56–1.12 cm (L4). Each session tested a single condition. Before each session, the subject was informed about the condition. Each session started with a block of 20 practice trials. The subject ran two sessions for each condition. The order of sessions was randomized. 
Each trial began with a fixation cross. After the subject pressed the mouse button, the fixation cross disappeared and the stimulus was shown for 500 ms. The subject's task was to respond whether or not the presented polyhedron was symmetric. After each trial, feedback about the accuracy of the response was given. Performance was evaluated by the discriminability measure d′ used in signal detection theory, together with its standard error. Higher performance corresponds to higher values of d′; d′ = 0 represents chance performance and d′ = ∞ represents perfect performance. d′ was computed for each session. The standard error was computed from the two values of d′. 
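The d′ statistic referred to here is the standard signal-detection quantity z(hit rate) − z(false-alarm rate). A minimal implementation (the session counts in the example are hypothetical, not data from this study):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf            # inverse standard-normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

# Chance performance (hit rate == false-alarm rate) gives d' = 0.
assert abs(d_prime(50, 50, 50, 50)) < 1e-12
# A hypothetical session: 85/100 hits, 20/100 false alarms.
print(round(d_prime(85, 15, 20, 80), 2))   # → 1.88
```

Note that this simple form is undefined for hit or false-alarm rates of exactly 0 or 1; practical analyses apply a correction (e.g., a log-linear adjustment of the counts) in those cases.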
Results and discussion
Results of individual subjects and the averaged results are shown in Figure 5. The ordinate shows d′ and the abscissa shows the level of distortion of the asymmetric polyhedra.2 The results of the static and kinetic conditions are plotted separately. The three curves indicate the three asymmetry types: Condition-R (circles), -N (triangles), and -P (squares). The results were analyzed using a three-way within-subjects ANOVA: asymmetry type (R vs. N vs. P) × level of distortion for generating asymmetric polyhedra (L1–L4) × type of display (static vs. kinetic). 
Figure 5
 
Results from Experiment 1. The ordinate shows d′, and the abscissa shows levels of distortion of asymmetric polyhedra. The three curves indicate the three types of asymmetric polyhedra. Results from the static condition are plotted on the left and those from the kinetic condition are plotted on the right. (a) Results of individual subjects. Error bars represent the standard errors calculated from two sessions for each condition. (b) Averaged results from all three subjects. Error bars represent the standard errors calculated from three subjects.
As expected, discrimination was easier when the distortion was greater (F(3, 46) = 249.94, p < 0.001) and when the presentation was kinetic (F(1, 46) = 24.42, p < 0.001). Performance was significantly higher than chance level for all asymmetry types even in the static L1 condition, in which the distortion of the asymmetric shapes was smallest (R: t(5) = 5.10, p < 0.005; N: t(5) = 3.44, p < 0.05; P: t(5) = 3.29, p < 0.05). This means that the subjects could reliably discriminate between symmetric and asymmetric 3D polyhedra even when only one 2D image was provided. The main effect of asymmetry type was also significant (F(2, 46) = 105.18, p < 0.001). The interaction between asymmetry type and level of distortion was also significant (F(6, 46) = 5.89, p < 0.001), but this interaction was most likely due to a floor effect. An a posteriori test (Tukey HSD) showed that the difference between Conditions-N and -P was significant in the L4 condition (p < 0.005) but not in the L1, L2, and L3 conditions (p > 0.05). The other interactions were not significant (p > 0.05). Performance was best in Condition-R and worst in Condition-P. This was expected. The most interesting result, however, is that the subjects were able to perform above chance level in the static Condition-P. Recall that in this condition, each image of an asymmetric polyhedron is consistent with a symmetric 3D interpretation. It follows that in all trials in this condition, the 2D image was consistent with both symmetric and asymmetric 3D interpretations, regardless of whether it was produced by a symmetric or an asymmetric 3D shape. So, mathematically, the images in this condition were completely ambiguous. How could the subject make the discrimination? It seems that the only way to make the discrimination reliably is to evaluate the likelihood of the symmetric and asymmetric 3D interpretations. 
The computational model, described later in this paper, does this by recovering a 3D shape through maximizing a cost function that includes four a priori constraints. Once the 3D shape is recovered, its 3D symmetry is evaluated. 
Finally, our results show that although performance in the kinetic condition was higher than in the static condition, the improvement was rather small: on average, d′ improved by only 0.26. These results suggest that the kinetic depth effect is of secondary importance in 3D symmetry perception; pictorial information provided by a single 2D image plays the major role. 
Experiment 2: Discrimination between degrees of asymmetry of 3D shapes from single 2D images
Results of Experiment 1 show that the subject can reliably discriminate between symmetric and asymmetric 3D shapes even from single 2D images. The discrimination was easier when the asymmetric shape was distorted more. This suggests that 3D asymmetry is a continuous perceptual feature and that the degree of asymmetry can be perceptually judged. This has already been demonstrated in the case of 2D shapes (Barlow & Reeves, 1979; Tjan & Liu, 2005; see also Zabrodsky & Algom, 1994; Zimmer, 1984). Barlow and Reeves (1979) showed that the subjects could discriminate more asymmetric from less asymmetric 2D dot patterns on the retina. The experiment presented here is a 3D version of Barlow and Reeves' experiment. The main difference is that here 3D rather than 2D stimuli were used, and the stimuli were shapes rather than dots. 
Methods
The experimental method was the same as in Experiment 1 except as indicated below. All polyhedra used in Experiment 2 were generated in the same way as the asymmetric polyhedra in Experiment 1. Recall that the asymmetric polyhedra in Experiment 1 were generated from symmetric ones by applying four levels of distortion (L1–L4). The level of distortion represents the degree of asymmetry: the more a symmetric polyhedron is distorted, the more asymmetric it becomes. Only static 2D images of the polyhedra were used in Experiment 2. 
Each session consisted of 200 trials with distorted symmetric polyhedra: half of the polyhedra were more asymmetric than the other half. The subject's task was to respond whether the presented polyhedron was more or less asymmetric. Before each session, the subject ran a block of 20 practice trials. The more asymmetric polyhedra always came from distortion level L4. There were nine experimental conditions: three asymmetry types (Condition-R vs. -N vs. -P) × three levels of distortion for generating the less asymmetric polyhedra (L1–L3). Each condition was replicated twice. 
Results and discussion
Results of individual subjects and the averaged results are shown in Figure 6. The ordinate shows d′, and the abscissa shows the level of distortion of the less asymmetric polyhedra. The three curves indicate the asymmetry types. The results were analyzed using a two-way within-subjects ANOVA. 
Figure 6
 
Results from Experiment 2. The ordinate shows d′, and the abscissa shows levels of distortion of asymmetric polyhedra. The three curves indicate the three types of asymmetric polyhedra. (a) Results of individual subjects. Error bars represent the standard errors calculated from two sessions for each condition. (b) Averaged results from all three subjects. Error bars represent the standard errors calculated from three subjects.
As expected, discrimination was easier when the difference between the two degrees of distortion was greater (F(2, 16) = 99.99, p < 0.001). Performance was significantly higher than chance level for all asymmetry types even in the L3 condition, in which the difference was smallest (R: t(5) = 14.25, p < 0.001; N: t(5) = 10.07, p < 0.05; P: t(5) = 18.21, p < 0.05). This means that the subjects could reliably discriminate between two degrees of asymmetry of 3D shapes from single 2D images. The main effect of asymmetry type was also significant (F(2, 16) = 26.29, p < 0.001). The interaction between these two factors was not significant (F(4, 16) = 1.76, p = 0.187). Performance was best in Condition-R and worst in Condition-P. As in Experiment 1, the subjects were able to perform above chance level in Condition-P. These results show that the human visual system can measure the degree of asymmetry of a 3D shape and use this measure in discriminating between more and less asymmetric 3D shapes. The nature of the perceptual metric for asymmetry will be discussed in the next section. 
One subject (TS) ran a control experiment in the presence of the kinetic depth effect. The results are shown in Figure 7. On average, d′ improved by only 0.19. As in Experiment 1, the kinetic depth effect did not substantially improve performance. 
Figure 7
 
Results of TS in the kinetic condition of Experiment 2 (right panel). Results of TS in the static condition of Experiment 2 are also plotted (left panel). Error bars represent the standard errors calculated from two sessions for each condition.
Computational model of discrimination between symmetric and asymmetric 3D shapes from single 2D images
The 2D images of both the symmetric and asymmetric 3D shapes are asymmetric (except for degenerate views, which were excluded from our psychophysical and simulation experiments). Hence, discrimination between the symmetric and asymmetric 3D shapes could not be based on 2D symmetry of their 2D images (see 2 for the simulation results supporting this claim). Recall that the 3D symmetry line segments of a symmetric 3D shape are all parallel to one another, and the same is true in any 2D orthographic projection of the 3D shape. This is the only invariant of 3D mirror symmetry. Consider an asymmetric 3D polyhedron with parallel line segments (Figure 4d). The parallelism of these line segments is also preserved in a 2D orthographic image. It is easy to show (see below) that any image of this asymmetric polyhedron is consistent, in principle, with a symmetric 3D interpretation. Interestingly, human subjects can discriminate between images of a symmetric 3D shape and an asymmetric 3D shape with parallel line segments (Condition-P in Experiment 1). They could also judge the degree of asymmetry of a 3D shape with parallel line segments from a single 2D image (Experiment 2). These results suggest that the parallelism of 2D symmetry line segments, which is the only invariant of 3D mirror symmetry under orthographic projection, is not sufficient to explain human performance. Human subjects must use other properties of the 3D shape in their judgments. The model described below simulates the discrimination between symmetric and asymmetric 3D shapes by first recovering the 3D shape from a single 2D image. The recovery is based on four a priori constraints: symmetry of the 3D shape, planarity of contours, compactness of the 3D shape, and the 3D shape's surface area. All these constraints are used in a cost function whose maximum determines the perceived 3D shape. 
Measure of asymmetry of a 3D shape in the 2D image
The model consists of two main stages. In the first stage, a 3D polyhedron is recovered from a single 2D image. This stage is an elaboration of models proposed in our prior studies (Li et al., 2009; Sawada & Pizlo, 2008b). In the second stage, the asymmetry of the recovered polyhedron is measured and compared to a criterion in order to decide whether or not the recovered 3D shape is symmetric. A 2D orthographic image of a 3D polyhedron is used as input. Note that the model does not perform figure-ground organization of the image; the given 2D image is already organized. Specifically, the model is provided with the visible contours of the polyhedron and with the information about which contours form the faces of the polyhedron. For the case of a symmetric polyhedron, the model is also given the information about which vertices form symmetric pairs and are connected by 2D symmetry line segments. Recall that asymmetric polyhedra used in the psychophysical experiments were generated by distorting symmetric ones. Hence, the asymmetric polyhedra were approximately symmetric and the model was given the information about which pairs of vertices are approximately symmetric and which line segments are approximately 2D symmetry line segments. Performance of the model was evaluated using the same 3D simulated shapes and their 2D images that were used in Experiments 1 and 2. 
The 3D shape recovery from its single 2D image is an ill-posed problem; the problem is underconstrained. This means that there are always infinitely many 3D shapes that could have produced the 2D image. To recover a unique 3D shape, a priori constraints must be used in order to restrict the family of possible 3D shapes (Pizlo, 2008; Poggio, Torre, & Koch, 1985). In the new model, the 3D shape recovery itself is performed in two steps. First, a symmetric or an approximately symmetric 3D shape is recovered by using 3D symmetry and planarity of contours as implicit constraints (assumptions). Maximal 3D compactness and minimum surface area are treated as explicit constraints and combined in a cost function, which is maximized by performing a one-parameter search. In the second step, all four constraints are used as explicit constraints in a cost function, and the 3D shape is deformed, by performing a search in a multi-parameter space, so as to maximize this function. The first step, which could, in principle, be skipped, provides a good starting point for the multi-parameter optimization, which is likely to have multiple local maxima. It has already been shown that each of these four a priori constraints plays an important role in recovering a veridical 3D shape from a single 2D image (Li et al., 2009; Pizlo et al., 2010). Furthermore, it has been shown that a model of 3D shape recovery based on these constraints recovers shapes that are very similar to the shapes recovered by the subjects (Li et al., 2009). Therefore, the present study does not try to verify the psychological plausibility of these constraints; its main purpose is to formulate a model of 3D symmetry discrimination. In these prior studies, 3D symmetry and planarity of contours were used as implicit constraints, so a recovered 3D shape was symmetric and had planar faces. This is also the case in the first step of the model proposed in this paper, which is described next. 
Recovering a symmetric or an approximately symmetric 3D shape from a single 2D image
The 3D shape is recovered using an algorithm described in our prior studies (Li et al., 2009; Sawada & Pizlo, 2008b; see also Vetter & Poggio, 1994). The technical explanation of this algorithm is given in 1. Here, I will present its main features. Recall that the information about which vertices of the polyhedron form possible symmetric pairs is given. Assume that the line segments connecting these pairs of vertices are parallel to one another. Parallelism of the symmetry line segments is an invariant of a symmetric 3D shape under orthographic projection, and an image in which this invariant holds is consistent with a symmetric 3D interpretation. If the line segments are only approximately parallel, they are corrected by moving their endpoints so that they become parallel and consistent with a symmetric 3D shape (Figures 8a and 8b; Sawada & Pizlo, 2008b; Zabrodsky & Weinshall, 1997). The amount of the correction is minimal in the least-squares sense. From the corrected image, which is now an image of a symmetric 3D shape, a “virtual image” of the same 3D shape is computed (Vetter & Poggio, 1994). The virtual image of a symmetric 3D shape is computed by reflecting the original 2D image with respect to an arbitrary line (Figure 8c). It is important to point out that the virtual image is computed from the given real 2D image without knowing the 3D shape. Under an orthographic projection, these two images determine the 3D shape “out there” up to one unknown parameter. This parameter represents the 3D orientation of the symmetry plane and the aspect ratio of the 3D shape. In other words, a 2D orthographic image of a symmetric 3D shape determines a one-parameter family of symmetric 3D shapes. 
Figure 8
 
Process of recovery of a symmetric polyhedron. (a) An original image given to the model. (b) A corrected image; the vertices are moved so that the 2D symmetry line segments (dashed contours) become parallel. (c) The virtual image generated by reflecting the real (corrected) image. (d) Only symmetric pairs of vertices that are both visible, along with the edges connecting them are shown. These vertices can be recovered from the real (corrected) image (b) and the virtual image (c). (e) A visible vertex (black circle) whose symmetric counterpart is occluded. This visible vertex can be recovered by applying the constraint of planarity to the contour enclosing the face (the shaded face). (f) An occluded vertex (open circle) whose counterpart (black circle) is visible and recovered in (e). This occluded vertex can be recovered by reflecting the counterpart with respect to the symmetry plane of the polyhedron. (g) The recovered polyhedron is uncorrected by moving the vertices so that the polyhedron is consistent with the original 2D image.
The process described above can be applied to symmetric pairs both of whose vertices are visible (Figure 8d). If one of the vertices of a symmetric pair is occluded, the pair can still be recovered using two constraints: planarity of the contours that contain the visible vertex, and symmetry of the 3D shape (Li et al., 2009; Mitsumoto, Tamura, Okazaki, Kajimi, & Fukui, 1992). The planarity constraint is applied to the contours enclosing the faces of the polyhedron (Figure 8e). If at least three vertices of a planar contour have already been recovered based on the symmetry constraint, the orientation of the plane containing this contour is known. Then, the coordinates of any other visible vertex lying on this plane can be computed as well. The occluded counterpart of this vertex is recovered by reflecting the visible vertex with respect to the symmetry plane of the 3D shape (Figure 8f). 
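The two geometric operations used in this step, solving for the depth of a visible vertex from the plane of its face, and reflecting it about the symmetry plane, are simple to state in code. The sketch below is only an illustration under the paper's orthographic assumption; the function names, data layout, and example coordinates are invented here and are not the model's implementation.

```python
# Illustrative sketch (hypothetical names): recover the depth of a visible
# vertex from three recovered vertices of its planar face, then reflect it
# about the symmetry plane to obtain the occluded counterpart.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def depth_on_plane(p0, p1, p2, x, y):
    """Given three already-recovered vertices p0, p1, p2 of a planar face,
    recover the z-coordinate of a visible vertex whose orthographic image
    position is (x, y) (projection along the z-axis)."""
    n = cross(sub(p1, p0), sub(p2, p0))  # normal of the face's plane
    # Solve n . (p - p0) = 0 for the z-coordinate of p = (x, y, z):
    return p0[2] - (n[0] * (x - p0[0]) + n[1] * (y - p0[1])) / n[2]

def reflect(p, m, u):
    """Reflect point p about the plane through point m with normal u
    (used to recover the occluded symmetric counterpart)."""
    d = dot(sub(p, m), u) / dot(u, u)
    return tuple(p[k] - 2 * d * u[k] for k in range(3))
```

For example, if three recovered vertices span the plane z = x + y, a visible vertex imaged at (2, 3) is recovered at depth 5, and reflecting it about a (hypothetical) symmetry plane x = 0 yields its occluded counterpart.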
The symmetry and planarity constraints determine a one-parameter family of 3D shapes. A unique 3D shape is selected from this family as the shape that maximizes a weighted combination of 3D compactness and surface area (Li et al., 2009). Compactness of the 3D shape is defined as: 
$$ \mathrm{compactness}(H) = \frac{36\pi\, V_{3D}(H)^2}{S_{3D}(H)^3}, $$
(1)
where H represents the 3D shape, and V3D(H) and S3D(H) are the volume and the surface area of H (Hildebrandt & Tromba, 1996). Maximal 3D compactness, in conjunction with the 3D symmetry constraint, gives the object its volume. The minimum surface area constraint is defined as the minimum of the total surface area of the object, which is equivalent to the maximum of the reciprocal of the total surface area. This constraint tends to decrease the thickness of the 3D shape along the depth direction. It makes the 2D image more stable in the presence of small 3D rotations, and thus the recovered 3D shape is more likely; a small change of the viewing direction does not change the 2D image substantially (Li et al., 2009). In order to make the surface area of the 3D shape a unit-free parameter, the reciprocal of the total surface area of the 3D shape was multiplied by the area of the 2D projection of the 3D shape: 
$$ \mathrm{surface}(H) = \frac{S_{2D}(H)}{S_{3D}(H)}, $$
(2)
where S3D(H) and S2D(H) are the surface area of the 3D shape H and the surface area of the projection of H to the 2D image. The symmetric 3D shape which maximizes compactness(H) + surface(H) is chosen from the family of the symmetric 3D shapes. 
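Equation 1 can be evaluated directly on a polyhedral (triangle) mesh: the volume follows from the divergence theorem as a sum of signed tetrahedron volumes, and the surface area as a sum of triangle areas. A minimal sketch, assuming a closed mesh with consistently outward-oriented faces; the mesh representation and names are illustrative, not the paper's code.

```python
import math

# Illustrative sketch of Equation 1 on a triangle mesh (hypothetical names).

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def compactness(verts, tris):
    """Equation 1: 36*pi*V^2/S^3, equal to 1 for a sphere and smaller for
    any other shape. Assumes a closed triangle mesh whose faces are
    consistently oriented outward."""
    vol = 0.0
    area = 0.0
    for i, j, k in tris:
        a, b, c = verts[i], verts[j], verts[k]
        n = _cross(_sub(b, a), _sub(c, a))
        area += 0.5 * math.sqrt(_dot(n, n))
        vol += _dot(a, n) / 6.0  # signed volume of tetrahedron (O, a, b, c)
    return 36.0 * math.pi * vol ** 2 / area ** 3

# Unit cube (V = 1, S = 6) as a worked example:
CUBE_VERTS = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
              (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
CUBE_TRIS = [(0, 2, 1), (0, 3, 2), (4, 5, 6), (4, 6, 7),
             (0, 1, 5), (0, 5, 4), (2, 7, 6), (2, 3, 7),
             (0, 4, 7), (0, 7, 3), (1, 2, 6), (1, 6, 5)]
```

For the unit cube this gives 36π/216 = π/6 ≈ 0.52; Equation 2 is then simply the ratio of the projected 2D area to the total surface area.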
The recovered symmetric shape is consistent with the original image only if the 2D symmetry line segments are exactly parallel to one another in the original 2D image. If they are only approximately parallel, the 2D image has been corrected (see above) and, as a result, the recovered symmetric 3D shape is not consistent with the original 2D image. In order to make it consistent, the 3D shape has to be uncorrected (distorted). The uncorrection is done by moving the endpoints of the 3D symmetry line segments of the recovered symmetric shape so that the resulting asymmetric shape becomes consistent with the original image (Figure 8g; Sawada & Pizlo, 2008b). The amount of uncorrection is minimal in the least-squares sense. 
Second stage of the 3D shape recovery
The 3D shape recovered in the first step is deformed in the second stage until it maximizes a cost function with four constraints: symmetry of the 3D shape, planarity of contours, maximum 3D compactness, and minimum surface area. Symmetry is defined here as the negative of the asymmetry of the 3D shape:  
$$ \mathrm{symmetry}(H) = -\mathrm{asymmetry}(H) = -\frac{\sum_a \left| \alpha_a - \alpha_{counterpart(a)} \right|}{\pi \cdot n_a}, $$
(3)
where α_a and α_counterpart(a) are corresponding 2D angles of the contours, and n_a is the number of 2D angles of the polyhedron H. If the recovered shape is perfectly symmetric, its two halves are identical and symmetry(H) is zero; otherwise, it is smaller than zero. The planarity constraint forces the contours enclosing the faces of the 3D shape to be planar (Leclerc & Fischler, 1992; see also Hong, Ma, & Yu, 2004; Sawada, Li, & Pizlo, 2010). In a planar n_p-gon, the sum of all interior angles is equal to (n_p − 2)π. When a convex polygon is not planar, the sum of its angles is smaller. Hence, the departure from planarity can be measured by computing the difference between the sum of the angles and (n_p − 2)π. Planarity of each face is defined as the negative of its departure from planarity, and planarity of the faces of the 3D shape is defined here as the average planarity over all faces: 
$$ \mathrm{planarity}(H) = -\frac{\sum_f \left| (n_p - 2)\pi - \sum_{a=1}^{n_p} \alpha_{f,a} \right|}{\pi \cdot n_f}, $$
(4)
where f represents a polygonal face of the 3D shape, α_{f,a} is the a-th internal angle of face f, and n_f is the number of faces of the polyhedron H. For the other two constraints, maximum 3D compactness and minimum surface area, Equations 1 and 2 are used. The following equation is used as the overall objective function: 
E(H)=compactness(H)+surface(H)+symmetry(H)+planarity(H).
(5)
 
The absolute value of each component in the cost function ranges between 0 and 1. The compactness and surface constraints are positive (or zero), while symmetry and planarity are negative (or zero). All components of the cost function are weighted equally. Other coefficients have been tried, but the simplest form with equal coefficients seemed to work best. The model searches for a 3D shape that maximizes the cost function E(H). The dimensionality of the search is established as follows: 1 (the orientation of the symmetry plane) + 8 (the depth positions of the “3D symmetry line segments”) + n_o (the number of the occluded vertices). Note that the orientation of the symmetry plane is characterized by two parameters: slant and tilt. Tilt is estimated as the average orientation of the 2D symmetry line segments (Zabrodsky & Weinshall, 1997). Hence, only slant is used in the search. All 3D symmetry line segments are restricted to be parallel to the normal of the symmetry plane if the 2D symmetry line segments are parallel to one another in the image. Otherwise, the 3D symmetry line segments are restricted to be as parallel as possible. The positions of the occluded vertices are restricted to lie along the 3D symmetry lines emanating from their symmetric counterparts. The model uses a steepest gradient descent method for finding the minimum of −E(H) (which is equivalent to the maximum of E(H)). 
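The symmetry and planarity terms (Equations 3 and 4) reduce to sums over corresponding 2D angles and over face angles; the overall objective then simply adds them to Equations 1 and 2. A sketch, assuming the angle correspondences are given as input; the data layout and names here are assumptions of the illustration, not the paper's code.

```python
import math

# Illustrative sketch of Equations 3 and 4 (hypothetical data layout).

def asymmetry(angles, counterpart):
    """Equation 3 (up to sign): mean absolute difference between
    corresponding 2D angles, normalized by pi. counterpart[a] maps each
    angle index to the index of its symmetric counterpart (assumed given)."""
    n_a = len(angles)
    return sum(abs(angles[a] - angles[counterpart[a]])
               for a in range(n_a)) / (math.pi * n_a)

def planarity_term(faces):
    """Equation 4: for each face (a list of interior angles in radians),
    the departure from planarity is |(n_p - 2)*pi - sum of angles|; the
    term is the negative of the average over faces, normalized by pi."""
    n_f = len(faces)
    departure = sum(abs((len(f) - 2) * math.pi - sum(f)) for f in faces)
    return -departure / (math.pi * n_f)

# Equation 5 combines all four terms:
# E(H) = compactness(H) + surface(H) - asymmetry(H) + planarity_term(H)
```

Both terms are zero for a perfectly symmetric shape with planar faces, and strictly negative otherwise, so maximizing E(H) pulls the recovered shape toward symmetry and planarity.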
Once the 3D shape is recovered, the model evaluates its asymmetry using Equation 3. The range of this measure is from 0 to 1; the bigger this value, the more asymmetric the polyhedron. The decision as to whether or not the polyhedron is classified as symmetric is based on a criterion whose value is chosen so as to maximize the fit of the model to the subjects' results. 
Model fitting
The model of detecting 3D symmetry was applied to the 2D images that were used in Experiment 1 (static condition) and Experiment 2. For each image, the model recovered a 3D shape and computed its asymmetry. The discriminability measure d′ was computed for each session. The criterion for the model's classification of the recovered 3D shape as “asymmetric” was chosen so as to minimize the sum of squared differences between the d′ of the model and that of the subject in each experiment. Hence, there was one free parameter for 12 data points in Experiment 1 and one free parameter for 9 data points in Experiment 2. 
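The discriminability measure d′ is the standard signal-detection statistic computed from hit and false-alarm rates. A minimal sketch (Python 3.8+ for statistics.NormalDist; the function name is invented here):

```python
from statistics import NormalDist  # Python 3.8+

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection discriminability: d' = z(H) - z(F), where z is
    the inverse of the standard normal CDF, H the hit rate, and F the
    false-alarm rate of a session."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)
```

For example, a hit rate of about 0.84 paired with a false-alarm rate of about 0.16 gives d′ ≈ 2.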
The visual noise was emulated by randomly perturbing the orientations and lengths of the 2D symmetry line segments. The noise of the orientation of the line segments was approximated by a Gaussian probability density function with a zero mean. The standard deviation of this distribution was computed based on orientation discrimination thresholds (Mäkelä, Whitaker, & Rovamo, 1993). The noise of the length of the line segments was approximated by a Gaussian probability density function with a zero mean (Chan et al., 2006; Levi & Klein, 1990). The standard deviation was 3% of the eccentricity of the endpoints (Chan et al., 2006; Watt, 1987). The eccentricity was computed assuming that the eye was fixated at the center of the image. 
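The noise emulation described above can be sketched as follows. The zero-mean Gaussian orientation noise and the length noise with a standard deviation of 3% of the endpoint eccentricity follow the text; the segment representation, the function name, and the choice to rotate the segment about its midpoint are assumptions of this sketch.

```python
import math
import random

def perturb_segment(p, q, sigma_theta, len_noise_frac=0.03,
                    fixation=(0.0, 0.0)):
    """Perturb a 2D symmetry line segment with endpoints p and q:
    its orientation by a zero-mean Gaussian (sd sigma_theta, radians) and
    its length by a zero-mean Gaussian whose sd is len_noise_frac times
    the eccentricity of the endpoints relative to the fixation point.
    Rotating about the midpoint is an assumption of this sketch."""
    mx, my = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    ecc = max(math.dist(p, fixation), math.dist(q, fixation))
    length = math.dist(p, q) + random.gauss(0.0, len_noise_frac * ecc)
    phi = math.atan2(q[1] - p[1], q[0] - p[0]) + random.gauss(0.0, sigma_theta)
    dx, dy = 0.5 * length * math.cos(phi), 0.5 * length * math.sin(phi)
    return (mx - dx, my - dy), (mx + dx, my + dy)
```

With both noise magnitudes set to zero the segment is returned unchanged, which provides a simple sanity check.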
Results of the model superimposed on the average results of the subjects in Experiment 1 are shown in Figure 9 (top panel). The ordinate shows d′ and the abscissa shows levels of distortion of the asymmetric polyhedra. The results show that the model can account for human performance quite well. Specifically, the model's performance improves with the level of the distortion. The performance is the best with the asymmetric polyhedra in Condition-R and is the worst with the asymmetric polyhedra in Condition-P. The same trends are also observed in the results of the human subjects. Similarly good fit was found for results in Experiment 2 (the bottom panel). It is worth pointing out that the estimated response bias of the model was very similar to that of human subjects in each session. On average, the rate of “symmetric shape” responses of the model, when the 3D shape was symmetric, was 80%, whereas that of the human subjects was 79%. The judgments of both the model and the human subjects in the psychophysical experiments were slightly biased toward “symmetric shape” responses. 
Figure 9
 
Results of the model in the simulation experiment. The model was applied to the images used in the psychophysical experiments. The ordinate shows d′, and the abscissa shows levels of distortion of asymmetric polyhedra. Results from different types of asymmetric polyhedra are plotted in separate graphs. The results of the model (solid symbols) are superimposed on the averaged results of the subjects in the psychophysical experiments. Error bars represent the standard errors calculated from two sessions for each condition.
Again, Condition-P is of special interest here. It can be seen that the performance of the model is above chance level and quite similar to that of the subjects. As pointed out above, the discrimination in Condition-P cannot be easily performed based on 2D properties of the image because each image in this condition is geometrically consistent with both symmetric and asymmetric 3D interpretations. The model proposed here first recovers a 3D shape which satisfies a priori constraints and then measures asymmetry of the recovered shape. As a result, the maximal 3D compactness, maximal planarity of contours, and minimum surface area constraints contribute to the perception of symmetry of a 3D shape. Even though these constraints are not directly related to 3D symmetry, they increase the likelihood of the recovered 3D shape. 
Two alternative models, suggested by the reviewers, were also tested and compared to the psychophysical results. First, the discrimination between symmetric and asymmetric 3D shapes can potentially be performed based on the asymmetry of their 2D images (2). Specifically, it is reasonable to expect that the degree of asymmetry of a 2D image of a 3D asymmetric shape is higher than that of a 3D symmetric shape. Second, a 2D image of an asymmetric 3D shape in Condition-P can be a degenerate (accidental) view of its symmetric 3D interpretation (3). If the 2D image is a degenerate view of a 3D shape, a small change of the viewing orientation of the 3D shape will cause a large change of its 2D image. It is known that the human visual system tends to avoid unstable interpretations, although this observation has never been directly tested (Freeman, 1994; Mach, 1906/1959). These two models were examined in simulation experiments (see 2 and 3 for more details). These experiments show that the asymmetry of the 2D image and the generic view assumption cannot explain our psychophysical results. Reliable discrimination between symmetric and asymmetric 3D shapes seems to require recovering a 3D shape using a priori constraints, as in the model presented above. 
Summary and discussion
This paper presents the first psychophysical study of discrimination between symmetric and asymmetric 3D shapes from a single 2D image. As pointed out in the Introduction, perception of symmetry has been discussed in the literature for about 100 years. These previous studies concentrated on the case of symmetry on the retinal image (see Figure 1). These studies were then extended to the case of 2D skewed symmetry, following Kanade's (1981) observation that a single 2D orthographic image of a 2D mirror-symmetric figure is easily recognized by a human observer as such (see Figure 2). Once a 2D skewed symmetry is identified, the 3D orientation of the 2D symmetric figure can be computed (Kanade, 1981; Saunders & Knill, 2001; Wagemans, 1992, 1993). This way, 2D skewed symmetry provided a tool for reconstructing 3D visible surfaces of an object from a single 2D image. 
The ability to tell the difference between symmetric and asymmetric 3D shapes is obviously important, so why has no one studied it? The answer is that the problem of discrimination between symmetric and asymmetric 3D shapes does not naturally fit in existing paradigms of 3D shape perception. A 2D image of a symmetric 3D object is itself symmetric only when the line of sight lies on the symmetry plane of the object. For all other viewing directions, the image of a symmetric object is not symmetric. Hence, 3D symmetry of a 3D object cannot be judged based on 2D symmetry of the 2D image of the 3D object (2). There is one invariant of 3D symmetry under orthographic projection: the symmetry line segments are parallel to one another in every orthographic image of a mirror-symmetric shape. This invariant could be used to discriminate between symmetric and asymmetric 3D shapes, and we showed that it is actually used by the human visual system (Sawada & Pizlo, 2008a). However, this invariant is not sufficient to explain human performance. In one of the conditions (Condition-P), this invariant was made ineffective. This was accomplished by generating asymmetric 3D shapes from symmetric ones in such a way that the line segments connecting the corresponding vertices of the symmetric shape remained parallel. We verified that any 2D orthographic image of such an asymmetric shape is consistent with infinitely many symmetric 3D interpretations. It follows that the performance of the subjects in discrimination between symmetric and asymmetric 3D shapes in this condition would have been quite poor if the performance were based on the 2D image properties (2). Performance could obviously improve if the 3D shapes were familiar. However, the shapes in our study were novel. Despite geometrical ambiguity of a single 2D image, the subjects were able to reliably discriminate between symmetric and asymmetric unfamiliar 3D shapes. 
It follows that conventional approaches to 3D shape perception, in which the subject's percept is derived directly from the 2D image, such as ideal observer or pattern matching, cannot be applied to the questions studied here. Our results clearly suggest that the visual system recovers a 3D shape from a 2D image and then evaluates whether the 3D shape is symmetric or asymmetric. Most prior 3D shape reconstruction approaches started with reconstructing visible surfaces of 3D objects (so-called 2.5D sketch, Marr, 1982). However, the visible surfaces of a 3D symmetric object are themselves almost never symmetric, for the same reason why the 2D images are not symmetric. It follows that the task of discriminating between symmetric and asymmetric novel 3D shapes cannot be solved by such models. As a result, the question as to how such discrimination might be performed has never come up in the context of these models. Clearly, this question is new; it is irrelevant in existing paradigms, and it became important only after we demonstrated that the percept of a 3D shape is based on recovering this shape by applying a 3D symmetry constraint (Pizlo, 2008; Pizlo et al., 2010). 
Consider an ecological justification for the 3D symmetry and maximal 3D compactness constraints. Note that almost every animal in nature is symmetric. It has been argued that symmetry of the 3D shape of the animal improves its motor functions (Steiner, 1979). At the same time, multiple objects rarely form a 3D symmetric configuration (see Figure 10a). It follows that 3D symmetry is a property of the 3D shape of a single object. A 3D shape of a living organism has a continuous volume. Note that maximizing compactness of the 3D shape is equivalent to minimizing the surface area of the shape for a given volume. Minimizing the surface area for a given volume means that the interface (the surface of the skin) between the inside and outside of the object is minimized. In such a case, the external interference with the object is minimized. It makes the object physically stable in the thermodynamic and mechanical sense. These facts suggest that maximizing 3D symmetry and 3D compactness of a recovered 3D shape makes it more likely as a 3D shape of a single natural object. This property of natural 3D shapes is reflected in human shape perception. Prior studies showed that symmetry of a 3D shape facilitates recognition of the 3D shape (Chan et al., 2006; Liu & Kersten, 2003; Pizlo & Stevenson, 1999; van Lier & Wagemans, 1999; Vetter et al., 1994). McBeath, Schiano, and Tversky (1997) showed that randomly generated 2D images tend to be interpreted by human subjects as silhouettes of symmetric 3D objects. A number of prior studies showed that a perceived 2D shape is biased toward symmetry (Csathó, van der Vloed, & van der Helm, 2004; Freyd & Tversky, 1984; King, Meyer, Tangney, & Biederman, 1976). The same bias seems to work in 3D shape perception, as well (Kontsevich, 1996). In contrast to symmetry, there is not much evidence for the role of 3D compactness in 3D shape perception. 3D compactness was used as an a priori constraint in a model for 3D shape recovery proposed by Li et al. (2009; see also Li, 2009; Pizlo, 2008; Pizlo et al., 2010). Similarly, it was shown that perception of slant of a planar shape in a 3D scene can be explained by maximizing the 2D compactness of the planar interpretation (Brady & Yuille, 1988; Sawada & Pizlo, 2008a). 
Figure 10
 
Symmetric shapes composed of multiple objects. (a) Multiple natural organisms rarely compose a single 3D symmetric configuration in nature. Image from istockphoto.com. (b) In the man-made world, there are also 3D symmetric configurations composed of multiple objects. Image from ashinari.com.
The model proposed in this paper used an “organized” 2D image of a 3D shape for its input. Specifically, information about symmetric correspondences of contours and vertices of the 3D shape was given to the model. How can the human visual system derive this information from the 2D projection of the 3D shape? If the 3D shape is exactly symmetric, the 2D symmetry line segments are all parallel to one another. However, in real images of real objects, the symmetry line segments will never be exactly parallel. Therefore, the parallelism of these line segments cannot be the only, or even the main, feature analyzed. It was suggested that the topological structure of the image should be analyzed, as well, before the 3D symmetry constraint is applied (Pizlo et al., 2010; Sawada et al., in preparation). Studying human performance in organizing the 2D image information and detecting symmetric correspondences will be addressed in our future work. 
Appendix A
An algorithm recovering an approximately symmetric polyhedron from a single 2D orthographic image
Let z = 0 be the image plane and the x- and y-axes of the 3D Cartesian coordinate system be the 2D coordinate system on the image plane. Consider a 3D polyhedron that is mirror symmetric with respect to a plane S in 3D space. We begin with the analysis of those symmetric pairs of vertices that are both visible. 3D symmetry line segments connecting symmetric pairs in 3D space are parallel to one another. The 2D symmetry line segments, which are the projection of the 3D symmetry line segments in the 2D image, are also parallel to one another under an orthographic projection. Let's set the direction of the x-axis so that the x-axis is parallel to the 2D symmetry line segments (note that this does not restrict the generality). If the polyhedron is approximately symmetric and the 3D and 2D symmetry line segments are only approximately parallel, the x-axis is set to a direction that is as parallel to the 2D symmetry line segments as possible in the least squares sense:  
$$ \min \sum_i \left( y_i - y_{counterpart(i)} \right)^2, \qquad \mathbf{p}_i = \begin{bmatrix} x_i \\ y_i \end{bmatrix}, \quad \mathbf{p}_{counterpart(i)} = \begin{bmatrix} x_{counterpart(i)} \\ y_{counterpart(i)} \end{bmatrix}, $$
(A1)
where p_i = [x_i, y_i]^t and p_counterpart(i) = [x_counterpart(i), y_counterpart(i)]^t are the 2D orthographic projections of vertex i and its symmetric counterpart(i). 
Correction of the 2D image
The 2D symmetry line segments are changed (corrected) to be parallel by displacing their endpoints (Zabrodsky & Weinshall, 1997): 
$$ \mathbf{p}'_i = \begin{bmatrix} x_i \\ y'_i \end{bmatrix} = \begin{bmatrix} x_i \\ \dfrac{y_i + y_{counterpart(i)}}{2} \end{bmatrix}, \qquad \mathbf{p}'_{counterpart(i)} = \begin{bmatrix} x_{counterpart(i)} \\ \dfrac{y_i + y_{counterpart(i)}}{2} \end{bmatrix} = \begin{bmatrix} x_{counterpart(i)} \\ y'_i \end{bmatrix}, $$
(A2)
where p′_i = [x_i, y′_i]^t and p′_counterpart(i) = [x_counterpart(i), y′_i]^t are the projections of the vertices i and counterpart(i) after the correction. The corrected image is consistent with an orthographic image of a perfectly symmetric 3D shape. If the 2D symmetry line segments are all exactly parallel in the 2D image, then p′_i = p_i and p′_counterpart(i) = p_counterpart(i). 
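Equation A2 amounts to replacing the two y-coordinates of each symmetric pair by their mean, which is the least-squares minimal correction. A sketch; the pairwise data layout and the function name are assumptions of the illustration.

```python
def correct_pairs(pairs):
    """Equation A2: make the 2D symmetry line segments parallel to the
    x-axis by replacing the y-coordinates of each symmetric pair with
    their mean (the minimal correction in the least-squares sense).
    pairs: list of ((x_i, y_i), (x_c, y_c)) tuples (assumed layout)."""
    corrected = []
    for (xi, yi), (xc, yc) in pairs:
        ym = (yi + yc) / 2.0
        corrected.append(((xi, ym), (xc, ym)))
    return corrected
```

After the correction, each segment is horizontal, so the corrected image is consistent with a perfectly symmetric 3D interpretation.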
Producing a virtual image
The method proposed by Vetter and Poggio (1994) is applied to the corrected image (Figure A1). The virtual image of the symmetric 3D shape is generated by reflecting the corrected image with respect to the y-axis: 
$$ \mathbf{q}_i = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} \mathbf{p}'_i = \begin{bmatrix} -x_i \\ y'_i \end{bmatrix}, \qquad \mathbf{q}_{counterpart(i)} = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} \mathbf{p}'_{counterpart(i)} = \begin{bmatrix} -x_{counterpart(i)} \\ y'_i \end{bmatrix}, $$
(A3)
where q_i and q_counterpart(i) are the reflections of p′_i and p′_counterpart(i) in the virtual image. The virtual image is a valid 2D image of the same 3D shape after the 3D shape has been rotated around the y-axis. Let the 3D coordinates of the symmetric pair of vertices i and counterpart(i) at the orientation for which the real image was obtained be v_i = [x_i, y_i, z_i]^t and v_counterpart(i) = [x_counterpart(i), y_i, z_counterpart(i)]^t. Note that the x- and y-values of v_i and v_counterpart(i) are identical to those of p′_i and p′_counterpart(i) under the orthographic projection (y_i here denotes the corrected value y′_i). In the same way, let the 3D coordinates of the symmetric pair of vertices i and counterpart(i) at the orientation for which the virtual image was obtained be u_i = [−x_i, y_i, z_i]^t and u_counterpart(i) = [−x_counterpart(i), y_i, z_counterpart(i)]^t. Then, the vertex u_counterpart(i) that corresponds to v_i after the 3D rigid rotation can be written as follows: 
$$ \begin{bmatrix} -x_{counterpart(i)} \\ y_i \\ z_{counterpart(i)} \end{bmatrix} = \mathbf{R}_{3D} \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix}, $$
(A4)
where R3D is a 3 × 3 rotation matrix. The R3D in Equation A4 represents the 3D rigid rotation around the y-axis: 
$$ \mathbf{R}_{3D} = \mathbf{R}_y(\theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}, $$
(A5)
where θ is the angle of the rotation around the y-axis. Note that the correspondence of points in the real and virtual images produced by the 2D reflection (i.e., pi maps to qi and pcounterpart(i) maps to qcounterpart(i)) is “opposite” to the correspondence produced by the 3D rotation (i.e., pi maps to qcounterpart(i) and pcounterpart(i) maps to qi) (see Figure A1). 
Figure A1
 
A real (corrected) image and a virtual image of a symmetric 3D shape. The virtual image is generated by reflecting the corrected image. p_i and p_counterpart(i) are the 2D orthographic projections of vertex i and its symmetric counterpart(i) in the corrected image. q_i and q_counterpart(i) in the virtual image are produced by computing a reflection of p_i and p_counterpart(i), respectively. Note that the virtual image is a valid 2D image of the same 3D shape after a 3D rigid rotation. p_i maps to q_counterpart(i) and p_counterpart(i) maps to q_i as a result of the 3D rotation.
Recovering one-parameter family of symmetric polyhedra
From the first row of Equation A4, we obtain:  
\[
-x_{counterpart(i)} = \begin{bmatrix} \cos\theta & \sin\theta \end{bmatrix} \begin{bmatrix} x_i \\ z_i \end{bmatrix}.
\]
(A6)
From Equation A6, z i can be computed:  
\[
z_i = -\frac{x_i \cos\theta + x_{counterpart(i)}}{\sin\theta}.
\]
(A7)
Therefore, the vertex i of the symmetric 3D shape can be written as follows:  
\[
v_i = \begin{bmatrix} x_i \\ y_i \\ -\dfrac{x_i \cos\theta + x_{counterpart(i)}}{\sin\theta} \end{bmatrix}.
\]
(A8)
From Equation A8, it can be seen that v i depends on θ. This means that the symmetric 3D shapes that are consistent with the corrected image form a one-parameter family, controlled by θ, the angle of rotation around the y-axis. 
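The recovery in Equations A7 and A8 is easy to sketch numerically. The fragment below is an illustrative sketch under the sign conventions of this appendix, not the author's implementation; the function names and test values are ours. Each value of θ yields one member of the one-parameter family, and the recovered pair of vertices satisfies the rotation relation of Equations A4 and A5.

```python
import numpy as np

def recover_vertex(x_i, y_i, x_counterpart, theta):
    """Equations A7-A8: recover vertex i of the symmetric 3D shape from
    the corrected-image x-coordinates of the pair (i, counterpart(i))
    and the rotation angle theta (requires sin(theta) != 0)."""
    z_i = -(x_i * np.cos(theta) + x_counterpart) / np.sin(theta)
    return np.array([x_i, y_i, z_i])

def rotation_y(theta):
    """Equation A5: rigid 3D rotation around the y-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

# One member of the family (theta, x_i, x_counterpart, y are arbitrary):
theta, x_i, x_c, y = 0.7, 1.3, -0.4, 2.0
v_i = recover_vertex(x_i, y, x_c, theta)
v_c = recover_vertex(x_c, y, x_i, theta)
# The counterpart's position in the virtual image (Equation A4):
u_c = np.array([-x_c, y, v_c[2]])
assert np.allclose(rotation_y(theta) @ v_i, u_c)
```

Varying θ sweeps out the whole family; only the z-coordinates change.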
The 3D shapes and orientations of the symmetric polyhedra
First, consider the orientation of the 3D symmetry line segments and the symmetry plane S of the symmetric 3D shape. The 3D symmetry line segments are parallel to the normal of the symmetry plane S and the midpoints of the 3D symmetry line segments are on S. The midpoint of a 3D symmetry line segment connecting the vertices i and counterpart(i) is  
\[
m_i = \frac{v_i + v_{counterpart(i)}}{2}
= \begin{bmatrix} \dfrac{x_i + x_{counterpart(i)}}{2} \\[1ex] y_i \\[1ex] -\dfrac{x_i + x_{counterpart(i)}}{2}\,\dfrac{1+\cos\theta}{\sin\theta} \end{bmatrix}.
\]
(A9)
From Equation A9, we can write an equation for S:  
\[
z = -x\,\frac{1+\cos\theta}{\sin\theta} = -x \tan\left( \pi/2 - \theta/2 \right).
\]
(A10)
Equation A10 shows that S is a plane whose normal is perpendicular to the y-axis and the y-axis is on S. The normal of S can be written as follows:  
\[
n_s = \begin{bmatrix} \tan\left( \pi/2 - \theta/2 \right) \\ 0 \\ 1 \end{bmatrix}.
\]
(A11)
The slant σ s of S is defined as the angle between n s and the z-axis, and it can be computed as follows:  
\[
\cos(\sigma_s) = \frac{\begin{bmatrix} 0 & 0 & 1 \end{bmatrix} n_s}{\left\| n_s \right\|}
= \frac{1}{\sqrt{\tan^2\left( \pi/2 - \theta/2 \right) + 1}}
= \cos\left( \pi/2 - \theta/2 \right)
\;\;\Rightarrow\;\; \sigma_s = \pi/2 - \theta/2 .
\]
(A12)
Recall that the 3D symmetry line segments are parallel to n s. Equation A12 shows that θ, the parameter specifying the family of 3D shapes, is directly related to the orientation of the 3D symmetry line segments and to the orientation of the symmetry plane of the 3D shape. 
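Equations A11 and A12 can be verified numerically; the following is an illustrative sketch (the function names are ours, not from the paper): the slant of the symmetry plane comes out as π/2 − θ/2 for any admissible θ.

```python
import numpy as np

def symmetry_plane_normal(theta):
    """Equation A11: normal of the symmetry plane S."""
    return np.array([np.tan(np.pi / 2.0 - theta / 2.0), 0.0, 1.0])

def symmetry_plane_slant(theta):
    """Equation A12: angle between the normal of S and the z-axis."""
    n = symmetry_plane_normal(theta)
    return np.arccos(n[2] / np.linalg.norm(n))

# The slant of S equals pi/2 - theta/2, so theta fixes the orientation
# of S and of the 3D symmetry line segments:
for theta in (0.3, 0.9, 1.5):
    assert np.isclose(symmetry_plane_slant(theta), np.pi / 2.0 - theta / 2.0)
```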
Next, consider the 3D shape itself. Let the intersection line between S and the zx-plane be s zx. Let the height of the object be its length along the y-axis, the width its length along n s, and the depth its length along s zx. The y-axis, n s, and s zx are perpendicular to one another. Let the width W of the 3D shape be measured as the length of the longest 3D symmetry line segment. Consider the 3D symmetry line segment connecting a symmetric pair of vertices i and counterpart(i). The length l i(θ) of this 3D symmetry line segment can be computed as follows:  
\[
l_i(\theta) = \left\| v_i - v_{counterpart(i)} \right\|
= \left\| \begin{bmatrix} x_i - x_{counterpart(i)} \\ 0 \\ \dfrac{\left( x_i - x_{counterpart(i)} \right)\left( 1 - \cos\theta \right)}{\sin\theta} \end{bmatrix} \right\|
= \sqrt{\frac{2}{1+\cos\theta}}\, \left| x_i - x_{counterpart(i)} \right| .
\]
(A13)
Equation A13 shows that the length of each 3D symmetry line segment is scaled by the same function of θ. This means that the 3D shape is proportionally stretched along the direction of the 3D symmetry line segments by a common factor that depends only on θ. From Equation A13 we have:  
\[
W(\theta) = \frac{W(\pi/2)}{\sqrt{1+\cos\theta}},
\]
(A14)
where W(θ) is the width of the 3D shape and W(π/2) is its width when θ = π/2. 
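A quick numerical check of Equation A13 (a sketch under the same conventions as Equation A8; the helper name is ours): the Euclidean length of a recovered 3D symmetry line segment equals the closed-form expression for every θ.

```python
import numpy as np

def segment_length(x_i, x_c, theta):
    """Equation A13: length of the 3D symmetry line segment, computed
    directly from the recovered z-values (Equation A8) and from the
    closed form sqrt(2 / (1 + cos(theta))) * |x_i - x_c|."""
    z_i = -(x_i * np.cos(theta) + x_c) / np.sin(theta)
    z_c = -(x_c * np.cos(theta) + x_i) / np.sin(theta)
    direct = np.hypot(x_i - x_c, z_i - z_c)
    closed = np.sqrt(2.0 / (1.0 + np.cos(theta))) * abs(x_i - x_c)
    return direct, closed

# The two expressions agree for every admissible theta:
for theta in (0.4, 1.0, 2.0):
    direct, closed = segment_length(1.2, -0.3, theta)
    assert np.isclose(direct, closed)
```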
The depth of the 3D shape can be measured by measuring the longest distance between two vertices along s zx, which is on the zx-plane. Note that the 3D symmetry line segments are parallel to n s and n s is perpendicular to s zx. The depth of the 3D shape can be measured by computing the longest distance between two midpoints of the 3D symmetry line segments. The midpoints are coplanar on S, and n s is perpendicular to the y-axis. Hence, the distance d ij between two midpoints along s zx can be computed as follows:  
\[
m_i = \begin{bmatrix} \dfrac{x_i + x_{counterpart(i)}}{2} \\[1ex] y_i \\[1ex] -\dfrac{x_i + x_{counterpart(i)}}{2}\,\dfrac{1+\cos\theta}{\sin\theta} \end{bmatrix},
\qquad
m_j = \begin{bmatrix} \dfrac{x_j + x_{counterpart(j)}}{2} \\[1ex] y_j \\[1ex] -\dfrac{x_j + x_{counterpart(j)}}{2}\,\dfrac{1+\cos\theta}{\sin\theta} \end{bmatrix},
\]
\[
d_{ij}(\theta) = \sqrt{ \left( \frac{x_j + x_{counterpart(j)}}{2} - \frac{x_i + x_{counterpart(i)}}{2} \right)^{2} \left( \left( \frac{1+\cos\theta}{\sin\theta} \right)^{2} + 1 \right) }
= \frac{\left| x_j + x_{counterpart(j)} - x_i - x_{counterpart(i)} \right|}{2} \sqrt{\frac{2}{1-\cos\theta}} ,
\]
(A15)
where m i is the midpoint of the 3D symmetry line segment connecting i and counterpart(i), m j is the midpoint of the 3D symmetry line segment connecting j and counterpart(j), and d ij(θ) is the distance between m i and m j. Equation A15 shows that the distance between m i and m j is scaled as a function of θ. This means that the 3D shape is proportionally scaled along the depth axis as a function of θ. From Equation A15 we have:  
\[
D(\theta) = \frac{D(\pi/2)}{\sqrt{1-\cos\theta}},
\]
(A16)
where D(π/2) is the depth of the 3D shape when θ = π/2. From Equation A8, we see that the y-values of the vertices are independent of θ. This means that the height of the 3D shape is constant (and thus independent of θ). 
From these observations, the 3D shape that belongs to the one-parameter family can be uniquely characterized by its aspect ratio, which is the width divided by the depth of the 3D shape. This aspect ratio can be computed as follows:  
\[
AR(\theta) = \frac{W(\theta)}{D(\theta)} = AR(\pi/2)\sqrt{\frac{1-\cos\theta}{1+\cos\theta}},
\]
(A17)
where AR(π/2) is the aspect ratio of the 3D shape for θ = π/2. Equations A11, A12, and A17 show that θ, the parameter characterizing the family, specifies the orientation of the symmetry plane, the orientation of the 3D symmetry line segments, and the aspect ratio of the 3D shape. 
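Equations A14, A16, and A17 can be summarized in a few lines (an illustrative sketch; the names are ours). The width and depth rescale in opposite directions as θ varies, and their ratio reduces to AR(π/2)·tan(θ/2):

```python
import numpy as np

def width(theta, w_ref):
    """Equation A14: width, given the width w_ref at theta = pi/2."""
    return w_ref / np.sqrt(1.0 + np.cos(theta))

def depth(theta, d_ref):
    """Equation A16: depth, given the depth d_ref at theta = pi/2."""
    return d_ref / np.sqrt(1.0 - np.cos(theta))

def aspect_ratio(theta, ar_ref):
    """Equation A17: aspect ratio, given the ratio ar_ref at theta = pi/2."""
    return ar_ref * np.sqrt((1.0 - np.cos(theta)) / (1.0 + np.cos(theta)))

# The ratio of the rescaled width and depth matches Equation A17,
# and the scaling factor is tan(theta / 2):
theta, w_ref, d_ref = 0.8, 3.0, 2.0
assert np.isclose(aspect_ratio(theta, w_ref / d_ref),
                  width(theta, w_ref) / depth(theta, d_ref))
assert np.isclose(aspect_ratio(theta, 1.0), np.tan(theta / 2.0))
```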
Applying planarity constraint to recover hidden vertices
If all vertices are visible, this step is skipped. A symmetric pair in which one vertex is visible and the other is hidden is recovered by applying two constraints: a planarity constraint to the visible vertex and a symmetry constraint to its hidden counterpart (Li et al., 2009; Mitsumoto et al., 1992). In order to apply the planarity constraint, at least three vertices of the face on which the visible vertex is located have to be recovered first. Assume that the face is planar; its orientation is then known. The z-value of the visible vertex is obtained by computing the intersection of this face with the projection line emanating from the image of this vertex. The hidden counterpart is recovered by reflecting the visible vertex with respect to the known symmetry plane of the 3D shape. 
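The two geometric steps of this recovery — back-projecting the visible vertex onto its planar face and mirroring it through the symmetry plane — can be sketched as follows. This is an illustrative implementation, not the paper's code; the function names and the plane parameterization n·v = d are our own.

```python
import numpy as np

def backproject_to_face(x, y, n, d):
    """Intersect the orthographic projection line through image point
    (x, y) (a line parallel to the z-axis) with the face plane n . v = d.
    Assumes the face is not viewed edge-on (n[2] != 0)."""
    z = (d - n[0] * x - n[1] * y) / n[2]
    return np.array([x, y, z])

def reflect_through_plane(v, n, d):
    """Mirror the 3D point v through the plane n . v = d
    (n need not be a unit vector)."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return v - 2.0 * (np.dot(n, v) - d) * n

# Visible vertex imaged at (1, 2), lying on the face z = 5:
v = backproject_to_face(1.0, 2.0, np.array([0.0, 0.0, 1.0]), 5.0)
# Its hidden counterpart, assuming the symmetry plane is x = 0:
h = reflect_through_plane(v, np.array([1.0, 0.0, 0.0]), 0.0)
assert np.allclose(v, [1.0, 2.0, 5.0])
assert np.allclose(h, [-1.0, 2.0, 5.0])
```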
Undoing the 2D correction in 3D space
When the projected symmetry lines are all parallel in the real 2D image, this step is skipped. When they are not parallel, the recovered symmetric 3D shape must be distorted so that its image agrees with the given real 2D image:  
\[
v_i' = v_i + \Delta_{3D},
\]
(A18)
where Δ 3D is a 3D distortion and v′ i is the position of the vertex i after the distortion. Let the 3D coordinates of Δ 3D be [Δ x, Δ y, Δ z]t. From Equation A2, Δ 3D = [0, y′ i − y i, Δ z]t, where Δ z can be arbitrary. The magnitude ‖Δ 3D‖ of the 3D distortion is minimized when Δ z = 0. Hence, the minimally distorted symmetric shape that is consistent with the real 2D image can be written as follows:  
\[
v_i' = v_i + \min\left( \Delta_{3D} \right) = \begin{bmatrix} x_i \\ y_i' \\ -\dfrac{x_i \cos\theta + x_{counterpart(i)}}{\sin\theta} \end{bmatrix}.
\]
(A19)
Note that the transformation of the x- and y-coordinates in Equation A19 is the inverse of the transformation in Equation A2. 
Appendix B
Measuring 2D symmetry and 2D skewed symmetry of an image of a 3D shape
A 2D image of a symmetric 3D shape is symmetric only if the line of sight lies on the symmetry plane of the 3D shape (Figure 3a). At the same time, a 2D image of an asymmetric 3D shape is almost never symmetric. It is reasonable to expect that the degree of asymmetry of a 2D image is higher for asymmetric than for symmetric 3D shapes. As a result, it might have been possible for the subjects to discriminate between symmetric and asymmetric 3D shapes based on 2D asymmetry of their images. This hypothesis was tested in a simulation experiment. 
The 2D symmetry of an image of a 3D shape is defined here as a negative of its 2D asymmetry:  
\[
symmetry_{2D}(H) = -asymmetry_{2D}(H)
= -\frac{\sum_a \left| \alpha_a - \alpha_{counterpart(a)} \right| \cdot visible(a) \cdot visible(counterpart(a))}{\pi \cdot \sum_a visible(a) \cdot visible(counterpart(a))},
\]
\[
visible(a) =
\begin{cases}
1 & \text{if the 2D angle } a \text{ is visible in the given 2D image} \\
0 & \text{if the 2D angle } a \text{ is invisible in the given 2D image,}
\end{cases}
\]
(B1)
where α a and αcounterpart(a) are the projections of the corresponding 2D angles of contours of the polyhedron H. In order to measure 2D symmetry of the image, only the visible line segments in the image were considered (Figure B1). If the 2D image of the 3D shape is perfectly symmetric, its two halves are identical and symmetry2D(H) is zero; otherwise, it is smaller than zero. 
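Equation B1 amounts to a π-normalized mean of absolute angle differences over mutually visible pairs; a minimal sketch (the function name and example values are ours, not the paper's):

```python
import numpy as np

def symmetry_2d(angles, counterpart, visible):
    """Equation B1: negative, pi-normalized average absolute difference
    between corresponding 2D angles, restricted to pairs in which both
    angles are visible in the image."""
    num = den = 0.0
    for a, alpha in enumerate(angles):
        w = visible[a] * visible[counterpart[a]]
        num += abs(alpha - angles[counterpart[a]]) * w
        den += w
    return -num / (np.pi * den)

# A perfectly symmetric image scores 0; any mismatch scores below 0:
assert symmetry_2d([0.5, 0.5], [1, 0], [1, 1]) == 0.0
assert symmetry_2d([0.5, 0.8], [1, 0], [1, 1]) < 0.0
```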
Figure B1
 
Results of the model in the simulation experiment. The model was applied to the images used in the psychophysical experiments. The ordinate shows d′, and the abscissa shows levels of distortion of asymmetric polyhedra. Results from different types of asymmetric polyhedra are plotted in separate graphs. The results of the model (solid symbols) are superimposed on the averaged results of the subjects in the psychophysical experiments. Error bars represent the standard errors calculated from two sessions for each condition.
The model used in this simulation experiment discriminated between symmetric and asymmetric 3D shapes based on 2D symmetry of the images as defined in Equation B1. Before computing 2D symmetry of the images, the image noise was added by randomly perturbing the orientations and lengths of the 2D symmetry line segments (see Model Fitting for more details). The discriminability measure d′ was computed for each session based on measured 2D symmetry of images used in the session. The criterion for classification of 2D symmetry was chosen so as to minimize the sum of squared differences between the d′ of the model and that of the subject in each experiment. 
Results of the model superimposed on the average results of the subjects are shown in Figure B1. The ordinate shows d′ and the abscissa shows the levels of distortion of the asymmetric polyhedra. The model's performance shows an effect of the level of distortion similar to that observed in human performance. However, the overall performance of the model is quite close to chance level in most conditions. This contrasts with the performance of the subjects, which was well above chance level in most conditions. Furthermore, the response bias of the model that maximized the fit to the subjects' results was very different from that of the human subjects. On average, the proportion of "symmetric shape" responses of the model, when the 3D shape was symmetric, was 13%. This means that the model almost never responded "symmetric" when the 3D shape was symmetric. This contrasts markedly with the human responses, where the proportion of "symmetric" responses for symmetric shapes was equal to 79%. Such an extreme response bias would not lead to successful performance in everyday life, where most objects are symmetric and should be perceived as such. Changing the model's response bias so that it is identical to the subjects' response bias would make the model's discriminability even worse (close to chance level in all conditions). All these results show that discrimination based on the 2D symmetry of images of 3D shapes cannot account for the human performance in discrimination between symmetric and asymmetric 3D shapes. 
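For reference, the discriminability measure d′ used here and in the psychophysical experiments is the standard signal-detection statistic; a minimal sketch using Python's statistics module (not the author's code):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse standard-normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Chance performance gives d' = 0; hits at 84% with false alarms
# at 16% gives roughly d' = 2:
assert abs(d_prime(0.5, 0.5)) < 1e-12
assert 1.9 < d_prime(0.84, 0.16) < 2.1
```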
Appendix C
Evaluating accidentalness of a 3D interpretation
Consider a 2D orthographic projection of a symmetric 3D shape. The 3D symmetry line segments of the symmetric 3D shape are all parallel to one another and their projections are also parallel to one another in the 2D orthographic image. Next, consider a 2D orthographic projection of an asymmetric 3D polyhedron with 3D parallel line segments (like the asymmetric shapes in our P condition; Figure 4d). The parallelism of these line segments is also preserved in the 2D orthographic image. Any image of this asymmetric polyhedron is consistent, in principle, with a symmetric 3D interpretation. Interestingly, human subjects can discriminate quite reliably between images of a symmetric 3D shape and asymmetric 3D shape with parallel line segments (Condition-P in Experiment 1). They could also reliably judge the degree of asymmetry of a 3D shape with parallel line segments from a single 2D image (Experiment 2). Our model can produce equally good discriminations by using four constraints to recover the 3D shapes before evaluating their symmetry. The constraints of planarity and compactness can override the symmetry constraint, resulting in a 3D asymmetric shape in cases where 3D symmetric interpretation is geometrically possible. 
It has been known that some possible 3D interpretations of a 2D image are not perceived by human observers if these interpretations are degenerate (accidental), i.e., not likely (Freeman, 1994; Mach, 1906/1959). A 2D image of a 3D scene is a degenerate view if the 2D image is unstable, that is, if a small change of the viewing orientation causes a large change of the 2D image (Freeman, 1994; Weinshall & Werman, 1997). A simple example described by Mach (1906/1959) used a straight-line segment on the retina with two possible interpretations: a straight-line segment out there, or a circle viewed at a slant of 90 degrees. This example illustrates the relation between the stability of the 2D image and the generic viewpoint of the perceptual interpretation. 
A question arises as to whether the generic viewpoint of the recovered 3D shape, itself, can account for human performance more directly. There are at least two different ways to use generic viewpoint in a model simulating human performance in discrimination between symmetric and asymmetric 3D shapes. First, generic viewpoint can be used as a criterion for recovering 3D shapes. However, if generic viewpoint is used as the only criterion for the 3D shape recovery (i.e., as the only element in the cost function), the result of the recovery will be trivial. The most stable 3D interpretation of a line drawing is always the line drawing itself: the backprojection of the line drawing to a frontoparallel plane (Weinshall & Werman, 1997). A flat shape on a frontoparallel plane leads to the smallest sensitivity of its 2D image to small 3D rotations. Second, generic viewpoint can be used as a criterion of symmetry discrimination after a symmetric 3D shape has been recovered from each 2D image. This is the method used here. The model takes a 2D image of a symmetric or asymmetric 3D shape and produces a symmetric 3D interpretation. We expect that a symmetric 3D interpretation of a 2D image produced by an asymmetric 3D shape will lead to a less stable 2D image than an asymmetric 3D interpretation would. Hence, we compute the stability of the 2D image and compare it to a criterion. A 2D image whose symmetric 3D interpretation makes the image stable is classified as an image of a symmetric 3D shape; a 2D image whose symmetric 3D interpretation makes the image unstable is classified as an image of an asymmetric 3D shape. The simulation experiment described below tests whether human performance in 3D symmetry discrimination can be accounted for by this model. 
A quantitative measure of the sensitivity of a 2D image of a 3D shape is formulated first. Consider a 2D image of a 3D shape and another 2D image of the same 3D shape viewed from a different viewing direction. A change of the viewing direction can be represented by a 3D rotation of the line of sight around the center of the 3D object. The 3D rotation of the line of sight is characterized by two angles: slant (σ) and tilt (τ). Slant is the angle between the first line of sight and the rotated line of sight; hence, slant specifies the amount of the rotation. Tilt is the angle between the projection of the rotated line of sight onto the original image and the x-axis of the original image; tilt specifies the axis around which the line of sight is rotated. Hence, the 2D image of the 3D shape after the rotation of the line of sight can be written as image(H, σ, τ), and image(H, 0, τ) represents the original image. The difference between these two images is computed here as a sum of absolute differences between projections of 2D angles in image(H, 0, τ) and image(H, σ, τ):  
\[
difference\big( image(H,0,\tau),\, image(H,\sigma,\tau) \big) = \frac{\sum_a \left| \alpha_a(0,\tau) - \alpha_a(\sigma,\tau) \right|}{\pi \cdot n_a},
\]
(C1)
where n a is the number of the 2D angles of the polyhedron H, and α a(0, τ) and α a(σ, τ) are the projections of a 2D angle of the contours of H in image(H, 0, τ) and image(H, σ, τ), respectively. Note that a small σ causes a large difference between the images if image(H, 0, τ) is unstable. Stability of image(H, 0, τ) is defined here as the negative of its instability, which is the sum of the differences defined in Equation C1 computed for a small change of slant (here 1 degree) over all tilts:  
\[
stability\big( image(H,0,\tau) \big) = -instability\big( image(H,0,\tau) \big)
= -\int_{0}^{2\pi} \frac{difference\big( image(H,0,\tau),\, image(H,\Delta\sigma,\tau) \big)}{\Delta\sigma}\, d\tau .
\]
(C2)
The stability defined by Equation C2 varies a lot across 3D shapes and their 2D images. Therefore, we normalize the stability, before we compare it to a criterion (this normalization substantially improves the performance of the model):  
\[
normalized\_stability\big( image(H,0,0) \big) = \frac{stability\big( image(H,0,0) \big) - MinStability(H)}{MaxStability(H) - MinStability(H)},
\]
(C3)
where MaxStability(H) and MinStability(H) are maximum and minimum stability of images of the polyhedron H among images of 2562 different viewing orientations of H. The 2562 viewing orientations were derived by connecting the center of a viewing sphere and 2562 points that are almost uniformly distributed on the surface of the sphere (see Ballard & Brown, 1982, pp. 492–493). 
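Equations C1 and C2 can be approximated numerically. The sketch below is our own illustrative code, not the paper's implementation: the tilt integral is replaced by a discrete sum, the object's 2D angles are measured at listed vertex triples, and the rotation of the line of sight is applied to the object via Rodrigues' formula. It exhibits the key property discussed above: a flat shape in a frontoparallel plane yields a far more stable image than the same shape strongly slanted.

```python
import numpy as np

def project(vertices):
    """Orthographic projection onto the xy image plane."""
    return vertices[:, :2]

def angles_2d(points, triples):
    """2D angle (radians) at the middle point of each (i, j, k) triple."""
    out = []
    for i, j, k in triples:
        u, v = points[i] - points[j], points[k] - points[j]
        c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        out.append(np.arccos(np.clip(c, -1.0, 1.0)))
    return np.array(out)

def rotate_line_of_sight(vertices, slant, tilt):
    """Rotate by `slant` about the in-image axis at angle `tilt`
    (Rodrigues' rotation formula)."""
    ax = np.array([np.cos(tilt), np.sin(tilt), 0.0])
    K = np.array([[0.0, -ax[2], ax[1]],
                  [ax[2], 0.0, -ax[0]],
                  [-ax[1], ax[0], 0.0]])
    R = np.eye(3) + np.sin(slant) * K + (1.0 - np.cos(slant)) * (K @ K)
    return vertices @ R.T

def instability(vertices, triples, d_slant=np.radians(1.0), n_tilts=36):
    """Equations C1-C2 (discretized): pi-normalized mean angle change
    caused by a 1-degree slant change, accumulated over tilts."""
    base = angles_2d(project(vertices), triples)
    total = 0.0
    for tilt in np.linspace(0.0, 2.0 * np.pi, n_tilts, endpoint=False):
        rotated = rotate_line_of_sight(vertices, d_slant, tilt)
        diff = np.mean(np.abs(base - angles_2d(project(rotated), triples))) / np.pi
        total += diff / d_slant * (2.0 * np.pi / n_tilts)
    return total

# A frontoparallel square is far more stable than the same square
# slanted by 80 degrees:
square = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                   [1.0, 1.0, 0.0], [0.0, 1.0, 0.0]])
triples = [(3, 0, 1), (0, 1, 2), (1, 2, 3), (2, 3, 0)]
slanted = rotate_line_of_sight(square, np.radians(80.0), 0.0)
assert instability(square, triples) < instability(slanted, triples)
```

The min-max normalization of Equation C3 is then a one-liner over the instabilities collected from many viewing orientations.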
The model in this simulation experiment discriminated between symmetric and asymmetric 3D shapes based on the normalized stability of 3D symmetric interpretations of 2D images. The model was applied to the 2D images used in Experiment 1 (static condition) and Experiment 2. The normalized stability of each 3D shape was computed based on a symmetric 3D recovery from the 2D image (see the section Recovering a symmetric or an approximately symmetric 3D shape from a single 2D image and Appendix A). Specifically, for each 2D image, regardless of whether it was an image of a 3D symmetric or asymmetric shape, a 3D symmetric shape was recovered and used to compute the stabilities of the 2D images. Before recovering the symmetric 3D shape, image noise was added by randomly perturbing the orientations and lengths of the 2D symmetry line segments in order to emulate the visual noise (see Model Fitting for more details). The discriminability measure d′ was computed for each session based on the normalized stabilities. The criterion for the symmetric versus asymmetric response of the model was chosen so as to minimize the sum of squared differences between the d′ of the model and that of the subject in each experiment. 
Results of the model superimposed on the average results of the subjects are shown in Figure C1. The ordinate shows d′ and the abscissa shows the levels of distortion of the asymmetric polyhedra. The model's performance shows trends similar to the results of the subjects. However, the overall performance of the model was substantially poorer than the human performance. Furthermore, the estimated response bias of the model was very different from that of the human subjects in each session. On average, the rate of "symmetric shape" responses of the model, when the 3D shape was symmetric, was 15%, whereas that of the human subjects was 79%. Again, this response bias would not lead to successful performance in everyday life, where most objects are symmetric and should be perceived as such. Changing the model's response bias so that it is identical to the subjects' response bias would make the model's discriminability close to chance level. These results clearly show that it is unlikely that the stability of symmetric 3D interpretations of 2D images is the primary factor in the discrimination between symmetric and asymmetric 3D shapes from single 2D images. A priori constraints, such as 3D compactness, seem to be critical in this task. It is important to point out, however, that the generic viewpoint model implemented here is not necessarily the only, or even the best, way to apply the generic viewpoint principle. Therefore, this simulation experiment should not be treated as an ultimate test rejecting the generic viewpoint approach to the problem of discriminating between 3D symmetric and asymmetric shapes. 
Figure C1
 
Results of the model in the simulation experiment. The model was applied to the images used in the psychophysical experiments. The ordinate shows d′, and the abscissa shows levels of distortion of asymmetric polyhedra. Results from different types of asymmetric polyhedra are plotted in separate graphs. The results of the model (solid symbols) are superimposed on the averaged results of the subjects in the psychophysical experiments. Error bars represent the standard errors calculated from two sessions for each condition.
Acknowledgments
The author thanks Zyg Pizlo for helpful comments and suggestions. The author also thanks Johan Wagemans and the other anonymous reviewer for useful suggestions, in particular, for suggesting the additional tests described in Appendices B and C. This project was supported by the National Science Foundation, Department of Energy and the Air Force Office of Scientific Research. 
Commercial relationships: none. 
Corresponding author: Tadamasa Sawada. 
Email: sawada@psych.purdue.edu. 
Address: 703 3rd street, West Lafayette, IN 47907-2081, USA. 
Footnotes
1. The method of generating the symmetric polyhedra was almost the same as that in Chan et al. (2006), Li et al. (2009), and Pizlo and Stevenson (1999). The only difference is that the symmetric polyhedra used in these prior studies had coplanar bottom faces, whereas those used in this study did not.
2. We also plotted the results using overall proportion correct, rather than d′, as a dependent variable. These two dependent variables led to the same conclusions.
3. Note that all faces in our polyhedra are convex. If faces are not convex, more general non-planarity measures are available.
References
Ballard D. H., Brown C. M. (1982). Computer vision. Englewood Cliffs, NJ: Prentice-Hall.
Barlow H. B., Reeves B. C. (1979). The versatility and absolute efficiency of detecting mirror symmetry in random dot displays. Vision Research, 19, 783–793.
Bartlett F. C. (1932). Remembering: A study in experimental and social psychology. Cambridge: Cambridge University Press.
Brady M., Yuille A. (1988). Inferring 3D orientation from 2D contour (an extremum principle). In Richards W. (Ed.), Natural computation (pp. 99–106). Cambridge: MIT Press.
Chan M. W., Stevenson A. K., Li Y., Pizlo Z. (2006). Binocular shape constancy from novel views: The role of a priori constraints. Perception & Psychophysics, 68, 1124–1139.
Csathó A., van der Vloed G., van der Helm P. A. (2004). The force of symmetry revisited: Symmetry-to-noise ratios regulate (a)symmetry effects. Acta Psychologica, 117, 233–250.
Freeman W. T. (1994). The generic viewpoint assumption in a framework for visual perception. Nature, 368, 542–545.
Freyd J., Tversky B. (1984). Force of symmetry in form perception. American Journal of Psychology, 97, 109–126.
Hildebrandt S., Tromba A. (1996). The parsimonious universe. New York: Springer.
Hong W., Ma Y., Yu Y. (2004, May). Reconstruction of 3-D deformed symmetric curves from perspective images without discrete features. Proceedings of the European Conference on Computer Vision, Prague.
Jenkins B. (1983). Component processes in the perception of bilaterally symmetric dot textures. Perception & Psychophysics, 34, 433–440.
Julesz B. (1971). Foundations of cyclopean perception. Chicago: University of Chicago Press.
Kaiser P. K. (1967). Perceived shape and its dependency on perceived slant. Journal of Experimental Psychology, 75, 345–353.
Kanade T. (1981). Recovery of the three-dimensional shape of an object from a single view. Artificial Intelligence, 17, 409–460.
King M., Meyer G. E., Tangney J., Biederman I. (1976). Shape constancy and a perceptual bias towards symmetry. Perception & Psychophysics, 19, 129–136.
Koffka K. (1935). Principles of Gestalt psychology. New York: Harcourt, Brace, & World.
Kontsevich L. L. (1996). Symmetry as a depth cue. In Tyler C. W. (Ed.), Human symmetry perception and its computational analysis (pp. 331–347). Utrecht, Netherlands: VSP BV.
Leclerc Y. G., Fischler M. A. (1992). An optimization-based approach to the interpretation of single line drawings as 3D wire frames. International Journal of Computer Vision, 9, 113–136.
Levi D. M., Klein S. A. (1990). The role of separation and eccentricity in encoding position. Vision Research, 30, 557–585.
Li Y. (2009). Perception of parallelepipeds: Perkins's law. Perception, 38, 1767–1781.
Li Y., Pizlo Z., Steinman R. M. (2009). A computational model that recovers the 3D shape of an object from a single 2D retinal representation. Vision Research, 49, 979–991.
Liu Z., Kersten D. (2003). Three-dimensional symmetric shapes are discriminated more efficiently than asymmetric ones. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 20, 1331–1340.
Mach E. (1906/1959). The analysis of sensations and the relation of the physical to the psychical. New York: Dover.
Mäkelä P., Whitaker D., Rovamo J. (1993). Modelling of orientation discrimination across the visual field. Vision Research, 33, 723–730.
Marr D. (1982). Vision. San Francisco: Freeman.
McBeath M. K., Schiano D. J., Tversky B. (1997). Three-dimensional bilateral symmetry bias in judgments of figural identity and orientation. Psychological Science, 8, 217–223.
Mitsumoto H., Tamura S., Okazaki K., Fukui Y. (1992). 3-D reconstruction using mirror images based on a plane symmetry recovering method. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14, 941–946.
Pirenne M. H. (1970). Optics, painting, & photography. Cambridge: Cambridge University Press.
Pizlo Z. (2008). 3D shape: Its unique place in visual perception. Cambridge: MIT Press.
Pizlo Z., Rosenfeld A., Weiss I. (1997a). The geometry of visual space: About the incompatibility between science and mathematics. Computer Vision and Image Understanding, 65, 425–433.
Pizlo Z., Rosenfeld A., Weiss I. (1997b). Visual space: Mathematics, engineering, and science. Computer Vision and Image Understanding, 65, 450–454.
Pizlo Z., Salach-Golyska M. (1995). 3-D shape perception. Perception & Psychophysics, 57, 692–714.
Pizlo Z., Sawada T., Li Y., Kropatsch W., Steinman R. M. (2010). New approach to the perception of 3D shape based on veridicality, complexity, symmetry and volume. Vision Research, 50, 1–11.
Pizlo Z., Stevenson A. K. (1999). Shape constancy from novel views. Perception & Psychophysics, 61, 1299–1307.
Poggio T., Torre V., Koch C. (1985). Computational vision and regularization theory. Nature, 317, 314–319.
Saunders J. A., Knill D. C. (2001). Perception of 3D surface orientation from skew symmetry. Vision Research, 41, 3163–3183.
Sawada T., Li Y., Pizlo Z. (in preparation). Any 2D image is consistent with 3D symmetric interpretations.
Sawada T., Pizlo Z. (2008a). Detection of skewed symmetry. Journal of Vision, 8(5):14, 1–18, http://www.journalofvision.org/content/8/5/14, doi:10.1167/8.5.14.
Sawada T., Pizlo Z. (2008b). Detecting mirror-symmetry of a volumetric shape from its single 2D image. Proceedings of the Workshop on Perceptual Organization in Computer Vision, IEEE International Conference on Computer Vision and Pattern Recognition, Anchorage, AK, June 23, 2008.
Steiner G. (1979). Spiegelsymmetrie der Tierkörper. Naturwissenschaftliche Rundschau, 32, 481–485.
Tjan B. S., Liu Z. (2005). Symmetry impedes symmetry discrimination. Journal of Vision, 5(10):10, 888–900, http://www.journalofvision.org/content/5/10/10, doi:10.1167/5.10.10.
Ullman S. (1979). The interpretation of visual motion. Cambridge: MIT Press.
van Lier R., Wagemans J. (1999). From images to objects: Global and local completions of self-occluded parts. Journal of Experimental Psychology: Human Perception and Performance, 25, 1721–1741.
Vetter T., Poggio T. (1994). Symmetric 3D objects are an easy case for 2D object recognition. Spatial Vision, 8, 443–453.
Vetter T., Poggio T., Bülthoff H. H. (1994). The importance of symmetry and virtual views in three-dimensional object recognition. Current Biology, 4, 18–23.
Wagemans J. (1992). Perceptual use of nonaccidental properties. Canadian Journal of Psychology, 46, 236–279.
Wagemans J. (1993). Skewed symmetry: A nonaccidental property used to perceive visual forms. Journal of Experimental Psychology: Human Perception and Performance, 19, 364–380.
Wagemans J. (1997). Characteristics and models of human symmetry detection. Trends in Cognitive Sciences, 1, 346–352.
Wagemans J., van Gool L., d'Ydewalle G. (1991). Detection of symmetry in tachistoscopically presented dot patterns: Effects of multiple axes and skewing. Perception & Psychophysics, 50, 413–427.
Wagemans J., van Gool L., d'Ydewalle G. (1992). Orientation effects and component processes in symmetry detection. Quarterly Journal of Experimental Psychology, 44A, 475–508.
Wagemans J., Lamote C., van Gool L. (1997). Shape equivalence under perspective and projective transformations. Psychonomic Bulletin & Review, 4, 248–253.
Watt R. J. (1987). Scanning from coarse to fine spatial scales in the human visual system after the onset of a stimulus. Journal of the Optical Society of America A, Optics and Image Science, 4, 2006–2021.
Weinshall D., Werman M. (1997). On view likelihood and stability. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19, 97–108.
Wertheimer M. (1923). Investigations on the gestalt of shape. Psychologische Forschung, 4, 301–350.
Yang T., Kubovy M. (1999). Weakening the robustness of perspective: Evidence for a modified theory of compensation in picture perception. Perception & Psychophysics, 61, 456–467.
Zabrodsky H. (1990). Symmetry—A review (Tech. Rep. No. 90-16). CS Department, The Hebrew University of Jerusalem.
Zabrodsky H., Algom D. (1994). Continuous symmetry: A model for human figural perception. Spatial Vision, 8, 455–467.
Zabrodsky H., Weinshall D. (1997). Using bilateral symmetry to improve 3D reconstruction from image sequences. Computer Vision and Image Understanding, 67, 48–57.
Figure 2
 
A symmetric 2D shape (a) and its transformations when it is slanted relative to the observer: (b) perspective transformation and (c) orthographic transformation. Slant is 70 degrees and tilt is 85 degrees in both these transformations. When the reader's line of sight is orthogonal to this page at the cross F and the viewing distance is equal to the distance between F and D, the retinal images of both (b) and (d) are perspective images of the shape in (a) with slant 70 and tilt 85 degrees. When the reader's line of sight is orthogonal to this page at any other point than F, the retinal image of (d) is a projective image of the shape in (a).
Figure 3
 
A symmetric polyhedron, its symmetry plane and symmetry line segments. The 3D symmetry line segments are parallel to one another, and their projections (2D symmetry line segments) are also parallel to one another under an orthographic condition. The 3D symmetry line segments are parallel to the normal of the symmetry plane and they are bisected by the symmetry plane.
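The parallelism property in this caption is easy to verify numerically: parallel 3D segments remain parallel under any rigid rotation followed by orthographic projection. The sketch below checks this for a hypothetical set of symmetric vertex pairs (symmetry plane x = 0); the shape and rotation angles are assumptions for illustration, not the stimuli used in the experiments.

```python
import math

# Toy mirror-symmetric pairs of 3D vertices (symmetry plane x = 0), so every
# 3D symmetry line segment points along the x-axis. Hypothetical shape.
pairs_3d = [((-1.0, 0.5, 2.0), (1.0, 0.5, 2.0)),
            ((-0.3, 1.2, 0.7), (0.3, 1.2, 0.7)),
            ((-2.0, -0.8, 1.1), (2.0, -0.8, 1.1))]

def rotate(v, theta=0.6, phi=0.3):
    """Rigid 3D rotation: about the z-axis by theta, then the x-axis by phi."""
    x, y, z = v
    x, y = (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))
    y, z = (y * math.cos(phi) - z * math.sin(phi),
            y * math.sin(phi) + z * math.cos(phi))
    return (x, y, z)

def project(v):
    """Orthographic projection: rotate, then drop the depth (z) coordinate."""
    x, y, _ = rotate(v)
    return (x, y)

# Directions of the projected 2D symmetry line segments.
dirs = []
for a, b in pairs_3d:
    (ax, ay), (bx, by) = project(a), project(b)
    dirs.append((bx - ax, by - ay))

# Parallel 3D segments stay parallel under orthographic projection:
# every 2D cross product with the first direction is (numerically) zero.
for dx, dy in dirs[1:]:
    assert abs(dirs[0][0] * dy - dirs[0][1] * dx) < 1e-9
```

This invariant fails under perspective projection, which is why the parallelism of 2D symmetry line segments is stated for the orthographic condition only.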
Figure 4
 
Orthographic images of a symmetric polyhedron (a) and of three types of asymmetric polyhedra generated by distorting a symmetric polyhedron. (a) A symmetric polyhedron. The faces of the symmetric polyhedron were planar. The 3D symmetry line segments were always parallel. (b) An asymmetric polyhedron in Condition-R. The faces were not planar and the “3D symmetry line segments” were not parallel. (c) An asymmetric polyhedron in Condition-N. The faces were planar but the “3D symmetry line segments” were not parallel. (d) An asymmetric polyhedron in Condition-P. The faces were planar and the “3D symmetry line segments” were parallel. Amount of the distortion for generating these asymmetric polyhedra (b–d) was the largest (L4).
Figure 5
 
Results from Experiment 1. The ordinate shows d′, and the abscissa shows levels of distortion of asymmetric polyhedra. The three curves indicate the three types of asymmetric polyhedra. Results from the static condition are plotted on the left and those from the kinetic condition are plotted on the right. (a) Results of individual subjects. Error bars represent the standard errors calculated from two sessions for each condition. (b) Averaged results from all three subjects. Error bars represent the standard errors calculated from three subjects.
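The d′ on the ordinate is the standard signal-detection discriminability index, computed from hit and false-alarm rates in the symmetric/asymmetric discrimination. A minimal sketch using only the Python standard library; the rates below are made-up numbers, not the subjects' data:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """d' = z(hit rate) - z(false-alarm rate), where z is the inverse
    standard-normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Example: 85% hits on symmetric trials, 20% false alarms on asymmetric ones.
print(round(d_prime(0.85, 0.20), 2))  # d' close to 1.88
```

Higher d′ means better discrimination; d′ = 0 corresponds to chance performance (hit rate equal to false-alarm rate).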
Figure 6
 
Results from Experiment 2. The ordinate shows d′, and the abscissa shows levels of distortion of asymmetric polyhedra. The three curves indicate the three types of asymmetric polyhedra. (a) Results of individual subjects. Error bars represent the standard errors calculated from two sessions for each condition. (b) Averaged results from all three subjects. Error bars represent the standard errors calculated from three subjects.
Figure 7
 
Results of subject TS in the kinetic condition of Experiment 2 (right panel). TS's results in the static condition of Experiment 2 are also plotted in the left panel. Error bars represent the standard errors calculated from two sessions for each condition.
Figure 8
 
Process of recovery of a symmetric polyhedron. (a) An original image given to the model. (b) A corrected image; the vertices are moved so that the 2D symmetry line segments (dashed contours) become parallel. (c) The virtual image generated by reflecting the real (corrected) image. (d) Only symmetric pairs of vertices that are both visible, along with the edges connecting them, are shown. These vertices can be recovered from the real (corrected) image (b) and the virtual image (c). (e) A visible vertex (black circle) whose symmetric counterpart is occluded. This visible vertex can be recovered by applying the constraint of planarity to the contour enclosing the face (the shaded face). (f) An occluded vertex (open circle) whose counterpart (black circle) is visible and recovered in (e). This occluded vertex can be recovered by reflecting the counterpart with respect to the symmetry plane of the polyhedron. (g) The recovered polyhedron is "uncorrected" by moving the vertices so that the polyhedron is consistent with the original 2D image.
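The correction step in panel (b) can be sketched in a few lines. One simple way to make the 2D symmetry line segments parallel is to rotate each segment about its midpoint to their mean orientation; this is an illustrative assumption, and the model's actual correction procedure may differ in how it distributes the vertex displacements.

```python
import math

def correct_pairs(pairs):
    """Make 2D symmetry line segments parallel (sketch of panel (b)).

    'pairs' is a list of ((x, y), (x, y)) projections of symmetric vertex
    pairs. Each segment is rotated about its midpoint, keeping its length,
    to the circular-mean orientation of all segments.
    """
    angles = [math.atan2(by - ay, bx - ax) for (ax, ay), (bx, by) in pairs]
    mean = math.atan2(sum(math.sin(a) for a in angles),
                      sum(math.cos(a) for a in angles))
    corrected = []
    for (ax, ay), (bx, by) in pairs:
        mx, my = (ax + bx) / 2.0, (ay + by) / 2.0     # midpoint is preserved
        h = math.hypot(bx - ax, by - ay) / 2.0        # half segment length
        dx, dy = h * math.cos(mean), h * math.sin(mean)
        corrected.append(((mx - dx, my - dy), (mx + dx, my + dy)))
    return corrected

# Slightly non-parallel segments, as in an image of an asymmetric shape.
segments = [((0.0, 0.0), (2.0, 0.2)),
            ((1.0, 1.0), (3.0, 0.8)),
            ((0.0, 2.0), (2.5, 2.4))]
corrected = correct_pairs(segments)
```

After correction, all segments share one direction, so the reflected (virtual) image in panel (c) can be generated from a single reflection axis.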
Figure 9
 
Results of the model in the simulation experiment. The model was applied to the images used in the psychophysical experiments. The ordinate shows d′, and the abscissa shows levels of distortion of asymmetric polyhedra. Results from different types of asymmetric polyhedra are plotted in separate graphs. The results of the model (solid symbols) are superimposed on the averaged results of the subjects in the psychophysical experiments. Error bars represent the standard errors calculated from two sessions for each condition.
Figure 10
 
Symmetric shapes composed of multiple objects. (a) In nature, multiple organisms rarely form a single 3D symmetric configuration. Image from istockphoto.com. (b) In the man-made world, however, there are 3D symmetric configurations composed of multiple objects. Image from ashinari.com.
Figure A1
 
A real (corrected) image and a virtual image of a symmetric 3D shape. The virtual image is generated by reflecting the corrected image. p_i and p_counterpart(i) are 2D orthographic projections of vertex i and of its symmetric counterpart(i) in the corrected image. q_i and q_counterpart(i) in the virtual image are produced by computing a reflection of p_i and p_counterpart(i), respectively. Note that the virtual image is a valid 2D image of the same 3D shape after a 3D rigid rotation: p_i maps to q_counterpart(i) and p_counterpart(i) maps to q_i as a result of the 3D rotation.
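The real/virtual-image correspondence in this caption can be checked numerically. The sketch below uses a toy mirror-symmetric shape (symmetry plane x = 0) viewed head-on along z, so the 3D rigid rotation mentioned in the caption reduces to the identity and p_i maps to q_counterpart(i) exactly; the shape and the reflection axis are assumptions for illustration.

```python
# Toy mirror-symmetric 3D vertices about the plane x = 0; each entry stores
# vertex i alongside its symmetric counterpart. Viewed head-on along z.
shape = [((1.0, 2.0, 0.5), (-1.0, 2.0, 0.5)),
         ((0.4, -1.0, 1.5), (-0.4, -1.0, 1.5))]

def project(v):
    """Orthographic projection along the z-axis (drop the depth coordinate)."""
    return (v[0], v[1])

def reflect(p, axis_x=0.0):
    """Reflect a 2D image point about the vertical line x = axis_x to
    produce the corresponding virtual-image point."""
    return (2 * axis_x - p[0], p[1])

for v_i, v_c in shape:
    p_i, p_c = project(v_i), project(v_c)      # real (corrected) image
    q_i, q_c = reflect(p_i), reflect(p_c)      # virtual image
    # In this head-on view the required 3D rotation is the identity, so the
    # caption's correspondence holds exactly: p_i -> q_counterpart(i).
    assert q_i == p_c and q_c == p_i
```

For a general viewing direction the two images differ by a nontrivial 3D rigid rotation rather than being pointwise equal, but the same correspondence between indices holds.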
Figure B1
 
Results of the model in the simulation experiment. The model was applied to the images used in the psychophysical experiments. The ordinate shows d′, and the abscissa shows levels of distortion of asymmetric polyhedra. Results from different types of asymmetric polyhedra are plotted in separate graphs. The results of the model (solid symbols) are superimposed on the averaged results of the subjects in the psychophysical experiments. Error bars represent the standard errors calculated from two sessions for each condition.
Figure C1
 
Results of the model in the simulation experiment. The model was applied to the images used in the psychophysical experiments. The ordinate shows d′, and the abscissa shows levels of distortion of asymmetric polyhedra. Results from different types of asymmetric polyhedra are plotted in separate graphs. The results of the model (solid symbols) are superimposed on the averaged results of the subjects in the psychophysical experiments. Error bars represent the standard errors calculated from two sessions for each condition.
© 2010 ARVO