Human vision is highly sensitive to bilateral symmetry in 2-D images. It is not yet clear, however, whether this visual sensitivity relates to the symmetry of 3-D objects or to the symmetry of the 2-D image itself. We used a stereoscopically presented stimulus and a 3-D bisection task that enabled us to dissociate object symmetry from image symmetry. The bisection stimulus consisted of three parallel lines, two of which were located in one depth plane and the third in another. Bisection judgments differed for horizontal and vertical lines, which can be explained by taking into account the distinct viewpoints of the left and right eyes for either of the visible sides of the 3-D object. Image symmetry from a monocular vantage point predicts 3-D bisection better than object symmetry does. We conclude that, to assess 3-D symmetry, observers use either of the two monocular 2-D images separately rather than a single cyclopean view, which dovetails nicely with how they assess both 3-D visual direction and 3-D shape.

The lines of the bisection stimulus (**a**, **b**, and **c**_{1}) constitute a symmetric object if the background line is placed at location **c**_{1}. The circular path through **c**_{1} (in red) indicates locations of the background line for which object symmetry exists for a range of disparities. The plane of object symmetry is oriented perpendicular to the plane depicted in Figure 1B. It includes the central line of the stimulus (**b**) and bisects **ac**_{1}. The plane of object symmetry depends on the disparity of the lines, but it is independent of viewpoint **v**. In contrast, image symmetry depends on **v**. The lines of the bisection stimulus (**a**, **b**, and **c**_{2}) constitute a symmetric image for the viewer **v** if the background line is placed at location **c**_{2}. The radial path through **c**_{2} (in blue) indicates locations of the background line for which the retinal image is symmetric for a range of disparities. The axis of image symmetry coincides with the central stimulus line **b**, oriented perpendicular to the plane of Figure 1B. The red and blue line segments in Figure 1B show that object symmetry and image symmetry make different predictions, particularly for uncrossed (far) disparities.
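The two constraints described in the caption, a circular path for object symmetry and a radial path for image symmetry, can be sketched numerically in a top view. This is a minimal illustration under assumed parameters (viewing distance, line separation, and function names are ours, not the authors'):

```python
import math

# Top view of the bisection stimulus (units: cm). The viewpoint v is at the
# origin; the three lines run perpendicular to this plane. The values below
# are assumptions for illustration, not the experimental parameters.
D = 50.0              # distance from v to the depth plane of lines a and b
s = 2.0               # lateral separation between the fixed lines a and b
# Line a sits at (-s, D), the central line b at (0, D), and the background
# line c at depth D + z for a depth offset z.

def object_symmetric_c(z):
    """Lateral offset of c for OBJECT symmetry: |bc| = |ab| in 3-D, so c
    lies on a circle of radius s around b (the red path in Figure 1B)."""
    return math.sqrt(s**2 - z**2)

def image_symmetric_c(z):
    """Lateral offset of c for IMAGE symmetry: the visual angle between b
    and c equals that between a and b, so c lies on a radial line from v
    (the blue path in Figure 1B)."""
    return (D + z) * (s / D)

for z in (-1.0, 0.0, 1.0):    # crossed, zero, and uncrossed depth offsets
    print(z, object_symmetric_c(z), image_symmetric_c(z))
```

At z = 0 the two constraints coincide; for nonzero depth offsets they diverge, and in this sketch the divergence is larger for uncrossed (z > 0) offsets, consistent with the red and blue segments in Figure 1B.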

Examples^{1} from the literature show that judgments of 2-D shapes and angles as they appear in the plane of a picture depend on their 3-D interpretation. The “turning the table” illusion described by Shepard (1990) shows that the drawn 2-D shape of a tabletop depends on the 3-D orientation of the table. Nundy, Lotto, Coppola, Shimpi, and Purves (2000) showed that identical 2-D angles between lines in a picture are perceived quite differently depending on the 3-D context of the lines. Here, we will show that, unlike the judgments in these examples, bisection concerns the 2-D rather than the 3-D interpretation of images. This result suggests that image symmetry carries significant information by itself and is not just a by-product of object symmetry.

The stereograms consisted of bright (… cd/m^{2}) dots (size, 4 × 4 min) distributed on a dark (0.3 cd/m^{2}) background. The stereograms embedded a square (8° × 8°) centered on a rectangular background (36° wide × 27° high). The square floated either in front of or behind the background (crossed or uncrossed disparity). The disparity between square and background was varied in seven equidistant steps between −30 and 30 min. Three lines (8° × 4 min) were binocularly visible. Two lines were placed on the square: one at its edge and the other at a fixed distance from the first equal to twice the disparity between square and background. The third line was placed on the background. Subjects could displace this line within the background's depth plane using keys. The experiments were run in a darkened room in which the monitor was the only visible object.
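The disparity manipulation described above is compact enough to restate as code. This sketch (variable names are ours) enumerates the seven equidistant steps and the resulting separation between the two fixed lines:

```python
# Seven equidistant disparity steps between -30 and 30 arcmin, as described
# above; the second fixed line was placed at twice the square/background
# disparity from the edge line.
disparities = [-30 + 10 * i for i in range(7)]             # arcmin
fixed_line_separation = [2 * abs(d) for d in disparities]  # arcmin

print(disparities)             # [-30, -20, -10, 0, 10, 20, 30]
print(fixed_line_separation)   # [60, 40, 20, 0, 20, 40, 60]
```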

One set of angles (*α*_{L} in the left eye and *α*_{R} in the right eye) indicated the angular separation between the fixed lines (**a** and **b** in Figure 1B). The other set (*β*_{L} and *β*_{R}) indicated the angular separation between the central line (**b**) and the movable line (**c**). The (vertical) angular separation between horizontal lines **b** and **c** was computed as *β* = *β*_{L} = *β*_{R}. The mean (horizontal) angular separation between vertical lines **b** and **c**, for which *β*_{L} ≠ *β*_{R}, was computed as *β* = (*β*_{L} + *β*_{R})/2, which implies that the cyclopean eye was used as the reference. In addition, the horizontal disparity (*d*) between lines **b** and **c** was computed as *d* = |*β*_{L} − *β*_{R}|. All angles between **a**, **b**, and **c** were taken to be positive to allow easy comparison between the bisection settings collected from the various edges of the central square of the stereogram. The disparity of **c** relative to **a** and **b** was assigned a positive (negative) sign for locations of **c** in front of (behind) the screen.
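The angle definitions above reduce to two one-line computations. A minimal sketch (function names are ours; the arcmin values are hypothetical):

```python
def beta_cyclopean(beta_L, beta_R):
    """Mean angular separation between lines b and c, referenced to the
    cyclopean eye; for horizontal lines beta_L == beta_R."""
    return (beta_L + beta_R) / 2

def disparity(beta_L, beta_R):
    """Horizontal disparity d between lines b and c."""
    return abs(beta_L - beta_R)

# Hypothetical monocular separations in arcmin:
print(beta_cyclopean(100.0, 90.0))  # 95.0
print(disparity(100.0, 90.0))       # 10.0
```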

Differences of the bisection settings from zero were tested with separate *t* tests. Differences between dot-density conditions and disparity conditions were tested with within-subject, two-way ANOVAs (Dot Density × Disparity) on the bisection settings for each condition. An effect of dot density would indicate that visual directions depend on the presence of half-occlusions.

We plotted the bisection judgments as *α* – *β* (Figure 1B) as a function of disparity. Figure 2 shows the mean bisection judgments for horizontal and vertical lines made by one subject. As Figure 2 shows, the bisection results differed between horizontal and vertical lines and between crossed and uncrossed disparities. The absolute differences between *α* and *β* were much larger for vertical than for horizontal lines. For uncrossed disparity, *α* – *β* was always positive, which means that the angle between the more distant line and the central line was set at smaller values than the angle between the lines having equal distances to the viewer. For crossed disparity, *α* – *β* was negative for horizontal lines and positive for vertical lines. For horizontal lines, *α* – *β* showed a linear relationship with disparity (orange line). For vertical lines, *α* – *β* was always positive (blue lines), which implies that angles between lines at unequal depths (*β*) were always set smaller than angles between equal-depth lines (*α*).

There was no significant effect of dot density, upper edge: *F*(3, 139) < 1.76, *p* > .15; lower edge: *F*(3, 139) < 2.45, *p* > .06. There was a significant effect of disparity in five of the six subjects, upper edge: *F*(6, 139) > 8.82, *p* < .001; lower edge: *F*(6, 139) > 20.95, *p* < .001. For the horizontal lines, the mean slope of the bisection judgments as a function of disparity was −0.18.

There was no significant effect of dot density, left edge: *F*(3, 139) < 1.92, *p* > .13; right edge: *F*(3, 139) < 2.27, *p* > .08. Separate *t* tests showed that the means of *α* – *β* were significantly different from zero for each dot density (*p* < .05). There was a main effect of disparity, left edge: *F*(6, 139) > 11.51, *p* < .001; right edge: *F*(6, 139) > 56.32, *p* < .001. Separate *t* tests showed that *α* – *β* was significantly different from zero for disparities of −30, −20, −10, 20, and 30 min (*p* < .05) but not for disparities of 0 and 10 min (*p* > .12). Means of *α* – *β* were positive for all disparities but slightly less positive for positive than for negative disparities.

Tests of the difference between the means (*p* > .18) and of the variance ratio (*p* > .07) indicated that it was not unlikely that both data sets were drawn from the same population.

For vertical lines, angles between lines at unequal depths (*β*) were always set smaller than angles between equal-depth lines (*α*). The question, then, is how different viewpoints can explain this finding. In the computations of *α* – *β*, we assumed the cyclopean eye to be the viewpoint. If, instead, the subjects used the left or right eye as the reference, our computations must be adjusted to reflect their bisection judgments. Figure 4A shows the translation of the equation *β* = *α* − |*d*/2| from the *α* – *β* versus disparity plot of Figure 3B into direction and distance coordinates. The relationship *β* = *α* − |*d*/2| is equivalent to *α* = *β*_{L} if *d* < 0 and to *α* = *β*_{R} if *d* > 0, as shown by the dashed blue and red lines of Figure 4A. Figure 4B shows the bisection judgments of Figure 3B recomputed relative to the dashed blue and red lines. The green areas in Figure 4A sketch the locations of the movable line for the bisection judgments shown in Figures 3B and 4B. Comparison of the recomputed judgments for vertical lines (Figure 4B) with the bisection judgments for horizontal lines (Figure 3A) shows that the results are now very similar for vertical and horizontal lines, both showing a small negative slope as a function of disparity.

To compare the two hypotheses, we computed the predictions of *α* – *β* as a function of disparity for both types of symmetry (Figure 5). Image symmetry predicts that *α* – *β* = 0 for all disparities, indicated by the blue horizontal line. The predictions for object symmetry depend on the viewing distance, which was 50 cm in the experiments. However, it may be that judgment of object symmetry is related to the perceptual rather than to the physical viewing distance. To find out at what distances our subjects perceived the bisection stimuli, we computed the predictions of object symmetry for viewing distances of 30, 50, and 70 cm (Figure 5, red lines). The curves of *α* – *β* that predict object symmetry are parabolas with vertices offset by a small positive disparity (shifted to the right by about 5 min in Figure 5). The offset arises because the 3-D bisection stimuli were viewed in slightly eccentric directions and zero disparity was defined relative to the image plane (superposition on the screen). Figure 5 shows that image symmetry (blue line) and object symmetry (red lines) make very different predictions for *α* – *β* as a function of disparity. Object symmetry predicts that, for positive and negative disparities larger than 20 min, *α* – *β* should be positive and should progressively increase with increasing disparity. Comparison of the bisection judgments shown in Figures 3A (horizontal lines) and 4B (vertical lines) with these predictions suggests that the experimental data are best described by image symmetry. Furthermore, Figure 4 shows that 3-D bisection is best explained by monocular image symmetry. This result implies that monocular rather than cyclopean viewpoints are used for bisection judgments. Which eye's image was symmetric appeared to depend on the 3-D layout of the bisection stimulus and not on a specific eye or class of disparity.
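The shape of the object-symmetry prediction follows from the geometry alone. The sketch below places the viewpoint at the origin, lines **a** and **b** at viewing distance D, and the object-symmetric location of **c** on a circle of radius |**ab**| around **b**; the line separation is an assumed value, and this simplified geometry ignores the eccentric-viewing offset mentioned above, so only the qualitative shape (zero at zero depth offset, positive and growing on both sides) should be compared with Figure 5:

```python
import math

s = 2.0    # lateral separation of lines a and b in cm (assumed)

def object_symmetry_prediction(z, D):
    """Predicted alpha - beta (arcmin) when c is placed object-symmetrically
    (|bc| = |ab| in 3-D) at depth offset z (cm, z > 0 = farther) from the
    plane of a and b, viewed from distance D (cm)."""
    alpha = math.atan(s / D)                             # separation of a and b
    beta = math.atan(math.sqrt(s**2 - z**2) / (D + z))   # separation of b and c
    return math.degrees(alpha - beta) * 60.0

# Image symmetry predicts alpha - beta = 0 at every disparity; object
# symmetry predicts a roughly parabolic, positive alpha - beta that grows
# with the depth offset and varies with the assumed viewing distance D:
for D in (30.0, 50.0, 70.0):
    print(D, [round(object_symmetry_prediction(z, D), 1) for z in (-1.0, 0.0, 1.0)])
```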

Image symmetry alone does not explain the small negative slope of *α* – *β* as a function of disparity (Figures 3A and 4B). We speculate that image symmetry of nonequidistant stimuli may be affected by the well-known phenomenon of size constancy, in which the perceived depth of an object influences its perceived size (Howard & Rogers, 2002). Size-constancy effects imply that if depth is well defined, people judge the size of a more distant object as larger than that of an object of equal retinal size located nearer (Carlson, 1962; Epstein, 1963). We speculate that such size-constancy effects cause separations between two nonequidistant lines to be judged differently from separations between two equidistant lines.

^{1}The mentioned geometric illusions can be viewed at, for instance, http://www.lottolab.org, http://www.michaelbach.de/ot/index.html, and http://viperlib.york.ac.uk.